added: 2022-01-26T14:24:24.512Z
created: 2022-01-25T00:00:00.000
id: 246281932
metadata:
{
    "extfieldsofstudy": ["Medicine"],
    "oa_license": "CCBY",
    "oa_status": "GOLD",
    "oa_url": "https://www.nature.com/articles/s41467-022-28158-2.pdf",
    "pdf_hash": "3c86d1610242ada859523727d43fb7ae044d42f1",
    "pdf_src": "PubMedCentral",
    "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41793",
    "s2fieldsofstudy": ["Biology", "Medicine"],
    "sha1": "6d298fd630c950ea4802b16cf0ec344ef879905a",
    "year": 2022
}
source: pes2o/s2orc
text:
USP44 regulates irradiation-induced DNA double-strand break repair and suppresses tumorigenesis in nasopharyngeal carcinoma
Radiotherapy is the primary treatment for patients with nasopharyngeal carcinoma (NPC), and approximately 20% of patients experience treatment failure due to tumour radioresistance. However, the exact regulatory mechanism remains poorly understood. Here, we show that the deubiquitinase USP44 is hypermethylated in NPC, which results in its downregulation. USP44 enhances the sensitivity of NPC cells to radiotherapy in vitro and in vivo. USP44 recruits and stabilises the E3 ubiquitin ligase TRIM25 by removing its K48-linked polyubiquitin chains at Lys439, which further facilitates the degradation of Ku80 and inhibits its recruitment to DNA double-strand breaks (DSBs), thus enhancing DNA damage and inhibiting DNA repair via non-homologous end joining (NHEJ). Knockout of TRIM25 reverses the radiotherapy sensitization effect of USP44. Clinically, low expression of USP44 indicates a poor prognosis and facilitates tumour relapse in NPC patients. This study suggests that the USP44-TRIM25-Ku80 axis provides potential therapeutic targets for NPC patients.
Nasopharyngeal carcinoma (NPC) is an epithelial tumour arising from the nasopharyngeal mucosa with a unique geographical distribution. It is prevalent in South China, Southeast Asia and North Africa 1,2. Radiotherapy is the primary therapeutic method for NPC because the disease is highly sensitive to ionising radiation 2. Recently, more accurate tumour localisation methods, better radiotherapy techniques and combined therapy have greatly improved patient survival. However, ~20% of patients suffer regional recurrence or distant metastasis due to radioresistance [3-5]. Yet the exact regulatory mechanisms underlying radioresistance in NPC remain poorly understood.
DNA double-strand breaks (DSBs) are the most critical type of DNA damage induced by irradiation (IR), and the majority of DSBs are repaired via the non-homologous end joining (NHEJ) pathway 6,7. The Ku80-Ku70 heterodimer binds rapidly and tightly to the ends of DSBs and further recruits many other factors required for NHEJ-mediated DNA repair, including the DNA-dependent protein kinase catalytic subunit (DNA-PKcs), the XRCC4-LIG4-XLF ligation complex, and the APTX and APLF proteins; thus, Ku80 plays an essential role in the initiation of the NHEJ-mediated DNA repair pathway 8,9. Ku80 is tightly regulated by ubiquitination and deubiquitination, mediated by E3 ubiquitin ligases (E3s) and deubiquitinating enzymes (DUBs), respectively. For example, the ubiquitination of Ku80 is mediated by multiple E3s, including the RING finger domain-containing proteins (RNFs) RNF8 10, RNF126 11 and RNF138 12. Conversely, ubiquitin carboxyl-terminal hydrolase L3 (UCHL3), which belongs to the DUB family, can directly deubiquitylate Ku80 13. However, how Ku80 is recruited to damaged DNA remains obscure.
DNA methylation, a type of epigenetic modification, is closely associated with tumour initiation and progression, especially in NPC [14-16]. Frequent methylation of the CpG islands of the ubiquitin-specific protease (USP) USP44 is an early event in colorectal neoplasia 17. However, the functions and mechanisms of USP44 in NPC have not yet been investigated. USP44 is involved in cell cycle regulation, cell differentiation and DNA repair processes 18,19. For example, USP44 acts as a tumour suppressor by inhibiting the activation of the APC to prevent the missegregation of chromosomes 20,21. USP44 can also regulate stem cell differentiation by reversing the mono-ubiquitination of H2B-K120 22. In addition, in the DSB response, USP44 counteracts the RNF168-mediated polyubiquitination of histone H2A to inhibit the recruitment of downstream repair factors 23.
Here, we show that hypermethylation of USP44 promotes radiotherapy resistance in NPC. USP44 is hypermethylated in NPC, which is associated with its downregulation. USP44 enhances the sensitivity of NPC cells to radiotherapy in vitro and in vivo through the USP44-TRIM25-Ku80 axis. USP44 recruits and stabilises the tripartite motif-containing (TRIM) protein TRIM25 by removing its K48-linked polyubiquitin chains at Lys439, which further facilitates the degradation of Ku80 and inhibits its recruitment to DSBs, thus enhancing DNA damage and inhibiting NHEJ-mediated DNA repair. Low expression of USP44 is associated with tumour relapse and a poor prognosis in NPC patients. The USP44-TRIM25-Ku80 axis provides potential targets for NPC treatment and prognostic prediction.
Results
Promoter hypermethylation of USP44 downregulates its expression in NPC. Our previous methylation microarray study (GSE52068) analysed genome-wide DNA methylation between normal nasopharyngeal (n = 24) and NPC tumour (n = 24) samples 24, from which we identified seven hypermethylated CpG sites in the promoter of USP44 (Fig. 1a). Among the seven CpG sites, cg00927554 was the most hypermethylated (Supplementary Fig. 1a), and this result was confirmed in another published microarray dataset from Hong Kong (GSE62336, Supplementary Fig. 1a). Thus, we selected this site for further validation by bisulfite pyrosequencing (Fig. 1b). The cg00927554 site of the USP44 promoter was significantly more hypermethylated in NPC tissues than in normal tissues (Fig. 1c, d). The average methylation rate of this site was more than 90% in NPC cell lines but only ~10% in normal NP69 cells (Fig. 1e). In addition, we found that NPC cell lines and tissue samples had much lower USP44 mRNA and protein expression levels than the immortalised nasopharyngeal epithelial NP69 cell line and normal tissue samples (Fig. 1f-i).
The demethylating drug DAC (decitabine) was used to verify whether the downregulation of USP44 resulted from the hypermethylation of its promoter. DAC treatment substantially decreased USP44 methylation levels and increased USP44 mRNA levels in NPC cells compared with NP69 cells (Fig. 1j, k). Moreover, TCGA database analysis using the GEPIA tool showed that USP44 promoter hypermethylation was accompanied by downregulated mRNA expression, and this negative correlation was observed in eight other solid tumour types (Supplementary Fig. 1b-d). Taken together, these data illustrate that promoter hypermethylation of USP44 results in its downregulation in NPC.
USP44 enhances the radiosensitivity of NPC cells in vitro.
Through Gene Set Enrichment Analysis (GSEA) of the GSE12452 dataset, we found that NPC samples with low USP44 expression were markedly enriched in gene sets related to radiation response pathways compared with samples with high USP44 expression (Fig. 2a). To further investigate the effect of USP44 after DNA damage in NPC cells, we constructed SUNE1 and HONE1 cells with stable overexpression or transient knockdown of USP44 (Supplementary Fig. 2a, b). Overexpression of USP44 severely impeded the colony formation and cell proliferation of NPC cells after IR (Fig. 2b and Supplementary Fig. 2c). Conversely, knockdown of USP44 in NPC cells improved cell survival and proliferation after IR-induced DNA damage (Supplementary Fig. 2d, e), which was confirmed by knockout of USP44 in SUNE1 cells (Supplementary Fig. 2f). One of the most common effects of IR is cell cycle arrest 25. An increased proportion of cells in the G2/M phase indicates that cells are more sensitive to IR [26-28], and DNA damage after IR also leads to a strong apoptotic response 29. We found that the combination of IR and USP44 overexpression significantly induced G2/M phase arrest and apoptosis in NPC cells (Fig. 2c, d). Conversely, knockdown of USP44 in NPC cells upon IR significantly reduced G2/M phase arrest and apoptosis (Supplementary Fig. 3a, b). The microtubule poison nocodazole arrests cells in the G2/M phase, during which phosphorylated histone H3 serine 10 (H3S10P) is highly abundant [30-34]. We therefore arrested cells in the G2/M phase with nocodazole and investigated the effect of IR treatment and USP44 knockout on G2/M cell cycle arrest. The percentage of H3S10P-positive cells was markedly increased upon IR, which causes DNA damage and arrests cells in the G2/M phase, whereas knockout of USP44 decreased the percentage of H3S10P-positive cells and inhibited the IR-induced G2/M cell cycle arrest (Supplementary Fig. 3c). These results demonstrate that USP44 sensitises NPC cells to IR through G2/M phase arrest and apoptosis induction, indicating an essential role of USP44 in the DNA damage response.
USP44 promotes the degradation of Ku80 by enhancing its ubiquitination. Mass spectrometry analysis identified Ku80, which had the greatest number of peptide ions matched among USP44-interacting proteins, as a potential target of USP44 (Fig. 3a, Supplementary Fig. 4a and Supplementary Table 1). Co-IP verified the exogenous interaction between USP44 and Ku80 (Fig. 3b), and immunofluorescence staining further confirmed the colocalisation of HA-USP44 and Ku80 in the nucleus (Fig. 3c). Ku80 and Ku70, as a complex, recognise DSBs and recruit other NHEJ proteins in the DNA repair process 35. Therefore, we next sought to determine how the interaction between USP44 and Ku80 affects radiosensitization in NPC. We found that overexpression of USP44 decreased the protein level of Ku80 in a dose-dependent manner but did not affect its mRNA level (Fig. 3d and Supplementary Fig. 4b). Overexpression of USP44 also significantly promoted the degradation of Ku80 under cycloheximide (CHX) treatment (Fig. 3e and Supplementary Fig. 4c, d). To further investigate whether USP44 promotes the degradation of Ku80 through the ubiquitin-proteasome pathway or the lysosomal pathway, we treated HEK293T cells with the proteasome inhibitor MG132 or the lysosome inhibitor CQ after co-transfection with USP44 and Ku80, and found that USP44-mediated destabilization of Ku80 was reversed by MG132 but not by CQ, suggesting that USP44 downregulates the Ku80 protein through the ubiquitin-proteasome pathway (Fig. 3f). USP44 belongs to the DUB family and possesses ubiquitin hydrolase activity 22,23. We therefore examined the effects of USP44 on the ubiquitination of Ku80 and, surprisingly, found that overexpression of USP44 increased the polyubiquitination of Ku80 (Fig. 3g and Supplementary Fig. 4e). These results suggest that USP44 promotes the polyubiquitination of Ku80, which results in its degradation via the ubiquitin-proteasome pathway.
USP44 recruits TRIM25 to ubiquitinate Ku80, leading to its degradation. The above results showed that USP44 promotes the ubiquitination and degradation of Ku80. However, USP44 acts as a DUB and usually stabilises its target protein through deubiquitination 36. We therefore hypothesised that USP44 may recruit an E3 ligase to promote the ubiquitination and degradation of Ku80. We then identified the E3 ligase TRIM25 by mass spectrometry (Fig. 3a, Supplementary Fig. 4a and Supplementary Table 1). Consistent with our hypothesis, TRIM25 interacted both exogenously and endogenously with USP44 and Ku80 (Fig. 3h). The TRIM25 protein contains three domains: a RING finger domain, a protein kinase C-related kinase homology region 1 (HR1) and a PRY/SPRY domain. Truncation co-IP revealed that USP44 interacts with the HR1 and PRY/SPRY domains of TRIM25 but not the RING finger domain (Supplementary Fig. 5a), suggesting that the HR1 and PRY/SPRY domains are important for the interaction between USP44 and TRIM25. Immunofluorescence staining also revealed colocalisation of USP44, Ku80 and TRIM25 in NPC cells (Supplementary Fig. 5b). Therefore, TRIM25 may function as the E3 ligase between USP44 and Ku80 and eventually lead to the degradation of Ku80. We then checked whether TRIM25 affects the stability of Ku80 and found that overexpression of TRIM25 accelerated Ku80 decay but did not affect Ku80 mRNA expression; this effect was reversed by MG132 but not CQ (Fig. 3i, j and Supplementary Fig. 5c). Under CHX treatment, overexpression of TRIM25 significantly promoted the degradation of Ku80, whereas knockout or knockdown of TRIM25 inhibited it (Fig. 3k and Supplementary Fig. 5d, e). As expected, overexpression of TRIM25 notably increased the polyubiquitination of Ku80 (Fig. 3l and Supplementary Fig. 5f).
In addition, the knockdown of TRIM25 reversed the increased polyubiquitination of Ku80 caused by ectopic expression of USP44 (Fig. 3m). The above results showed that USP44 recruits TRIM25 to ubiquitinate Ku80 and further leads to its degradation.
USP44 deubiquitinates and stabilises TRIM25 to promote Ku80 ubiquitination. TRIM25, recruited by USP44, acts as a scaffold protein between USP44 and Ku80. We next asked whether the stability of TRIM25 is regulated by USP44. Notably, overexpression of USP44 stabilised the TRIM25 protein and prolonged its half-life in both HEK293T and NPC cells, and TRIM25 further accumulated in the presence of MG132 (Fig. 4a-c and Supplementary Fig. 6a, b). Furthermore, overexpression of USP44 inhibited the K48-linked but not the K63-linked ubiquitination of TRIM25 (Fig. 4d and Supplementary Fig. 6c). Conversely, knockdown or knockout of USP44 enhanced the K48-linked but not the K63-linked ubiquitination of TRIM25 (Fig. 4e and Supplementary Fig. 6d). By contrast, USP44 (C282A), a deubiquitinase-inactive mutant of USP44, lost the ability to stabilise and deubiquitinate TRIM25 (Fig. 4f, g and Supplementary Fig. 6e), indicating that the ubiquitin hydrolase activity of USP44 is required for the regulation of TRIM25. Guided by mass spectrometry analysis (Fig. 4h), we then generated three Lys-to-Arg (KR) substitution mutants of TRIM25 for denature-IP assays. Without USP44 overexpression, the ubiquitination levels of all three TRIM25 mutants (K283/284R, K439R and K509R) were lower than that of wild-type TRIM25 (WT), indicating that all the KR mutants, including K439R, can still be ubiquitinated. Comparing the K439R ubiquitin smear in cells with and without USP44 expression showed that USP44 also targeted this mutant for deubiquitylation, and TRIM25 ubiquitinated on the other lysines was likewise not resistant to USP44; nevertheless, USP44 appeared to have the highest activity toward K439 ubiquitination (Fig. 4i). Hence, USP44 recruits TRIM25, impairs the Lys439-mediated K48-linked ubiquitination of TRIM25 and thereby inhibits its degradation. We further found that overexpression of USP44 reduced Ku80 expression with or without IR, and knockdown of TRIM25 rescued the USP44-mediated degradation of Ku80, as validated by western blot analysis (Fig. 4j and Supplementary Fig. 7a-d). Taken together, our results reveal that USP44 stabilises TRIM25 by removing its K48-linked polyubiquitin chains at K439, which further promotes the ubiquitination and degradation of Ku80.
Fig. 1 Promoter hypermethylation of USP44 downregulates its expression in NPC. a Heatmap clustering of seven hypermethylated CpG sites in the CpG islands of USP44 in normal nasopharyngeal epithelial tissues (n = 24) and NPC tissues (n = 24). Columns: individual samples; rows: CpG sites; blue: low methylation; red: high methylation. b Schematic illustration of the bisulfite pyrosequencing region in the USP44 promoter. Red region: input sequence; blue region: CpG islands; TSS: transcription start site; red text: CG sites used for bisulfite pyrosequencing; blue text: the most significantly altered CG site in the USP44 promoter. c, d Bisulfite pyrosequencing analysis of the USP44 promoter region (c) and statistical analysis of methylation levels (d) in normal (n = 8) and NPC (n = 8) tissues. e The methylation levels of the USP44 promoter region in NP69 and NPC cell lines (SUNE1, CNE1, CNE2, HNE1 and HONE1), determined through bisulfite pyrosequencing analysis. f, g RT-qPCR analysis of relative USP44 mRNA expression in the NP69 cell line and NPC cell lines (f) and in normal (n = 13) and NPC (n = 15) tissues (g). h, i Representative western blot analysis of USP44 protein expression in NP69 cells and NPC cell lines (h), together with normal and NPC tissues (i). j, k USP44 methylation levels measured by bisulfite pyrosequencing analysis (j) and relative USP44 mRNA levels measured by RT-qPCR analysis (k) in NP69 cells and NPC cell lines with (DAC+) or without (DAC−) DAC treatment. Data in d and g are presented as the mean ± SEM, and those in e, f, j, and k are presented as the mean ± SD; the P values were determined using the two-tailed Student's t-test; n = 3 independent experiments. Source data are provided as a Source Data file.

Knockout of TRIM25 reverses the radiosensitizing effect of USP44 in vitro. To validate whether USP44 stabilises TRIM25 to
degrade Ku80 and thus exerts a radiosensitizing effect, we performed a comet assay to measure the DSBs remaining at various times after IR treatment in Vector + sgNC, USP44 + sgNC and USP44 + sgTRIM25 SUNE1 or HONE1 cells (Supplementary Fig. 8a). While the levels of DNA damage indicated by comet tails gradually returned to baseline in the Vector + sgNC cells 24 h after IR treatment, they remained higher in the USP44 + sgNC cells, suggesting delayed DNA repair in USP44-overexpressing cells. Moreover, TRIM25 knockout reversed this DNA damage phenotype, suggesting that USP44 impairs DSB repair by regulating TRIM25 (Fig. 5a). Consistent with the notion that USP44 impedes DSB repair, ectopic expression of USP44 enhanced the IR-induced formation of foci of the DSB marker γH2AX, which was reversed by knockout of TRIM25 (Fig. 5b). Furthermore, laser microirradiation and live-cell imaging analysis indicated that USP44 overexpression markedly impaired the recruitment of GFP-Ku80 to DSB sites, and this effect was largely reversed by TRIM25 knockout (Fig. 5c). Together, these observations strongly suggest that DSB repair activity is impaired by USP44 overexpression and that this impairment can be reversed by TRIM25 knockout.
To test whether NHEJ is the pathway affected by USP44, we performed an NHEJ reporter assay to assess the effect of the USP44-TRIM25 axis on NHEJ repair. As the schematic shows, NPC cells transfected with the EJ5-GFP plasmid do not produce GFP; upon infection with an adenovirus expressing the endonuclease I-SceI, I-SceI recognises and cuts its target sites to produce DSBs, and GFP expression is restored only if the DSBs are repaired through the NHEJ-mediated pathway (Supplementary Fig. 8b) 11,37,38. Consistent with the effect of knocking down Ku70 (a known essential protein for NHEJ repair) 39,40, overexpression of USP44 significantly decreased GFP expression and thus inhibited NHEJ-mediated DNA repair, and knockout of TRIM25 reversed this inhibitory effect (Fig. 5d and Supplementary Fig. 8c, d). These results demonstrate that USP44 inhibits NHEJ-mediated DNA repair by targeting TRIM25. More importantly, the effects of USP44 overexpression on NPC cell survival, proliferation, G2/M phase arrest and apoptosis were almost completely reversed by knockdown of TRIM25 (Fig. 6a-d). The suppressive effect of USP44 overexpression on NPC cell survival was also reversed by re-expression of Ku80 (Supplementary Fig. 8e). Overall, these results indicate that the TRIM25-Ku80 axis is a functional target of USP44 that mediates its radiosensitizing effect in NPC.
USP44 increases the radiosensitivity of NPC cells in vivo.
To determine whether USP44 promotes the radiosensitivity of NPC cells in vivo, we generated subcutaneous tumour xenograft models. Compared with the control group, the USP44 group exhibited reduced xenograft growth in terms of the size, volume and weight of the excised tumours, especially after IR, indicating that the tumours in the USP44 group were much more sensitive to IR (Fig. 7a-c). The protein levels of TRIM25 and of the apoptosis-related protein caspase 3 were increased, and the levels of Ku80 were decreased, in the tumours of the USP44 group compared with those of the control group (Fig. 7d, e). Moreover, the radiosensitization effect of USP44 was almost completely rescued by the knockout of TRIM25 in vivo (Fig. 7f-h). These data suggest that overexpression of USP44 regulates TRIM25/Ku80 expression, thereby inhibiting cell proliferation and activating apoptosis to promote the radiosensitivity of NPC cells in vivo.
Low expression of USP44 indicates poor prognosis and is associated with tumour relapse. To further investigate the clinical significance of USP44 protein levels in NPC patients, we conducted IHC staining of 376 NPC tissues with an antibody against USP44. USP44 was expressed in both the cytoplasm and nucleus of cells from NPC tissues, and the samples were grouped according to staining intensity (weak, moderate or strong) (Fig. 8a). Combining these results with the clinical data, we found that locoregional recurrence was clearly related to weak USP44 staining in tumour samples (Fig. 8b). We divided the NPC patients into high and low USP44 expression groups for Kaplan-Meier analysis, which revealed significant differences in locoregional recurrence-free, disease-free and overall survival (Fig. 8c-e). Lower USP44 expression was significantly correlated with a higher risk of relapse, disease progression and death (Supplementary Table 2). Further analysis identified USP44 expression level, WHO type and TNM stage as independent prognostic indicators for NPC (Fig. 8f-h). In addition, IHC staining indicated that USP44 expression was negatively correlated with Ku80 expression in NPC tissue samples (Supplementary Fig. 9a, b). Taken together, our findings show that low expression of USP44 indicates a poor prognosis and is associated with tumour relapse in NPC patients.
Discussion
Our current findings demonstrate that the promoter of USP44 is generally hypermethylated in NPC, which leads to the downregulation of USP44. USP44 significantly enhances NPC radiosensitivity in vitro and in vivo by stabilising the E3 ligase TRIM25, which further degrades Ku80 via the ubiquitin-proteasome pathway in the NHEJ-mediated DNA repair process. Moreover, we revealed that reduced expression of USP44 indicates a poor prognosis and is associated with tumour relapse in NPC patients. Radiotherapy is the main treatment regimen for NPC [41-43], but some patients exhibit radioresistance, leading to poor therapeutic efficacy and a poor prognosis 44,45. The role of DNA methylation has been extensively explored in the pathogenesis and development of cancers, including NPC 46. Several aberrantly methylated genes have been reported as potential prognostic biomarkers for NPC [47-49]. However, the exact regulatory mechanisms remain to be elucidated. Therefore, exploring and clarifying the molecular mechanisms of tumorigenesis and progression caused by DNA methylation is vital for improving the prognosis and providing potential targets for NPC treatment. Hence, we analysed genome-wide DNA methylation between normal human nasopharyngeal samples and NPC tumour samples 24 and found that the CpG islands of USP44 were frequently hypermethylated in NPC samples; these results were confirmed by analysis of another published microarray dataset and by bisulfite pyrosequencing. TCGA database analysis also revealed that the promoter of USP44 was broadly hypermethylated in eight other solid tumours. In addition, hypermethylation of the USP44 promoter has been found in colon cancer and breast cancer 17,50, suggesting that the mechanisms by which USP44 suppresses NPC may be shared by other tumours.
Fig. 3 USP44 ubiquitinates and degrades Ku80 by recruiting TRIM25. a SDS-PAGE of HA-immunoprecipitated proteins separated from SUNE1 cells stably overexpressing HA-USP44. Red lines indicate the proteins of interest. b Co-IP with anti-HA or anti-FLAG antibody in SUNE1 cells revealed the exogenous association of USP44 and Ku80. c Immunofluorescence staining revealed the cellular location of exogenous HA-USP44 (green) and endogenous Ku80 (red) at 0.5 h after exposure to 6 Gy IR. Scale bars, 10 μm. d USP44 inhibited Ku80 protein expression but not its mRNA expression in a dose-dependent manner. e The effect of CHX treatment and greyscale analysis of the results in 293T cells transfected with FLAG-Ku80 and HA-USP44 or the empty vector plasmids, as well as in sgNC or sgUSP44 SUNE1 cells. f The effect of MG132 and CQ treatment in 293T cells transfected with the indicated plasmids. g HEK293T cells transfected with FLAG-Ku80, HA-Ub and HA-USP44 or the empty plasmids were subjected to denature-IP and immunoblotted with the indicated antibodies. h Co-IP assay detecting the exogenous association of USP44 and TRIM25 and the endogenous association of USP44, TRIM25 and Ku80 in NPC cells. i TRIM25 inhibited Ku80 protein expression but not its mRNA expression in a dose-dependent manner. j The effect of MG132 and CQ treatment in 293T cells transfected with FLAG-Ku80 and FLAG-TRIM25 or the empty vector plasmids. k The effect of CHX treatment and greyscale analysis of the results in 293T cells transfected with FLAG-Ku80 and MYC-TRIM25 or the empty vector plasmids, as well as in sgNC or sgTRIM25 SUNE1 cells. l, m HEK293T cells transfected with the indicated plasmids or siRNAs were subjected to denature-IP and then immunoblotted with the indicated antibody. Data in d, e and i, k are presented as the mean ± SD; the P values were determined using the two-tailed Student's t-test; n = 3 independent experiments. Source data are provided as a Source Data file.

DUBs oppose the function of E3 ligases 51,52. Various DUBs, such as USP7, USP22 and USP28, participate in the complex process of tumorigenesis and progression [53-55]. USP44 has been reported to act as a tumour suppressor that regulates cell cycle arrest and DSB responses by modulating H2B mono-ubiquitylation 20,23. Our study showed that USP44 arrested NPC cells in the G2/M phase, as indicated by H3S10P fluorescence. USP44 can also cause G2/M phase arrest by preventing premature activation of the APC to regulate the mitotic checkpoint and by binding the centriole protein centrin to regulate centrosome positioning 20,56. We found that USP44 promoted G2/M phase arrest, apoptosis induction and radiosensitization of NPC cells through the TRIM25-Ku80 axis in vivo and in vitro. Our findings thus uncover a previously unrecognised mechanism by which USP44 regulates the cell cycle and the DSB response to exert a tumour suppressor effect in NPC. First, we found that USP44 interacts with the Ku80 protein. Interestingly, overexpression of USP44 facilitated the degradation of the Ku80 protein rather than stabilising it. We therefore supposed that an intermediate molecule between USP44 and Ku80 might be responsible for the subsequent degradation of Ku80. As expected, we identified an E3 ligase, TRIM25, which acts as a scaffold protein between USP44 and Ku80. It has been reported that TRIM25 interacts with PCNA and p53 in the DNA repair process 57,58.
However, we found that the USP44-TRIM25 interaction degrades Ku80 to inhibit NHEJ-mediated DNA repair, which, combined with G2/M phase arrest and apoptosis induction, subsequently enhances radiosensitivity in NPC. On the one hand, USP44 recruits and stabilises TRIM25 by removing its K48-linked polyubiquitin chain at Lys439. On the other hand, TRIM25 ubiquitinates Ku80 to trigger its degradation through the ubiquitin-proteasome pathway. The ubiquitination of Ku80 is important for its recruitment to and release from DSBs 10,12. Through laser microirradiation and live-cell imaging analysis, we found that USP44 markedly impaired the recruitment of Ku80 to DSBs upon laser micro-IR, and this effect was largely rescued by TRIM25 depletion. This finding reveals a previously unrecognised mechanism by which TRIM25 regulates DSB repair and radiotherapy resistance by targeting Ku80 for ubiquitination. Although USP44 has been reported as a prognostic indicator in lung cancer, gastric cancer and breast cancer 56,59,60, its prognostic value in NPC remained unknown. We found that low USP44 expression was significantly correlated with a higher risk of locoregional recurrence, disease progression and death, and was an independent predictor of poor clinical outcomes in NPC patients. This finding provides a predictive index of curative effect for NPC treatment. In conclusion, Fig. 8i shows our working model. In normal tissues, USP44 recruits and stabilises TRIM25 by removing the K48-linked polyubiquitin chains of TRIM25, and TRIM25 degrades Ku80 by promoting its polyubiquitination, which inhibits the recruitment of Ku80 to DSBs, impairs NHEJ-mediated DNA repair and enhances NPC radiosensitivity. In NPC, hypermethylation of the USP44 promoter leads to its downregulation at the mRNA and protein levels, which blocks the antitumour effect of the USP44-TRIM25-Ku80 axis. Our research lays a foundation for a better understanding of the mechanisms of radioresistance.
Methods
Clinical specimens. We collected 19 fresh-frozen NPC specimens and 17 normal nasopharyngeal epithelial specimens, as well as 376 paraffin-embedded locoregionally advanced NPC specimens, between January 2006 and December 2009 from Sun Yat-sen University Cancer Center (Guangzhou, China). None of the patients who provided specimens had been treated with anticancer therapies before biopsy. The tumour-node-metastasis (TNM) stages were reclassified according to the 7th edition of the American Joint Committee on Cancer (AJCC) Cancer Staging Manual 61, and the pathological types were classified according to WHO type. All patients underwent radical radiotherapy combined with platinum-based chemotherapy.

Plasmid construction and transfection. The USP44, TRIM25 and Ku80 coding regions were separately tagged with HA, MYC and FLAG and cloned into empty loading plasmids to obtain the overexpression plasmids pSin-EF2-puro-USP44-HA, pSin-EF2-puro-TRIM25-MYC, pSin-EF2-puro-TRIM25-FLAG and pSin-EF2-puro-Ku80-FLAG. The XRCC5 coding region was cloned into the pEGFPN1 plasmid to obtain the plasmid pEGFPN1-Ku80. In addition, the pEnter-kana-Ku80-FLAG, pCMV-kana-TRIM25-FLAG, pCMV-kana-Ub (WT)-HA and pEnter-kana-vector plasmids were purchased from Vigene Bioscience (China). PRK-HA-Ub (K48O or K63O) was a gift from Professor Bo Zhong (Wuhan University, China). USP44 shRNA sequences #1 and #2 were obtained from the shRNA sequence prediction website Portals. The shRNAs were synthesised and cloned into the pLKO.1-RFP vector to obtain the pLKO.1-shUSP44 #1/2 plasmids. The TRIM25 siRNA (siTRIM25) was purchased from RiboBio (China). The shRNA and siRNA sequences are listed in Supplementary Table 3.
For transient transfection, the indicated cells were transfected with overexpression plasmids and siRNA using Lipofectamine 3000 (Invitrogen) according to the manufacturer's protocols and harvested 24-48 h post-transfection. The pLKO.1-shUSP44 #1/2 plasmids were transiently transfected into NPC cells with stable USP44 overexpression because of the low endogenous expression of USP44 in NPC cells. For stable transfection, HEK293T cells were co-transfected with the lentivirus packaging plasmids pMD2.G and psPAX2 using polyethylenimine (PEI) transfection reagent to collect the virus supernatant. NPC cells were infected with the virus supernatant, selected with puromycin (0.5-1 μg/ml) 48 h post-infection and harvested for RT-qPCR and western blotting assays to determine the expression efficiency of the target gene.
CRISPR/Cas9-mediated generation of knockout (KO) cells. The single guide RNA (sgRNA) sequences targeting the USP44 or TRIM25 genomic sequence were designed using an online sgRNA design tool (https://benchling.com) and cloned into pX458 plasmids. These constructs were transfected into cells using Lipofectamine 3000 transfection reagent. The sgUSP44 cells were generated in SUNE1 cells with stable USP44 overexpression. Cells with green fluorescence were sorted with a flow cytometer (MoFlo Astrios) 36 h after transfection, and single colonies were obtained by serial dilution and amplification. Clones were identified by immunoblotting with anti-USP44 or anti-TRIM25 antibodies. The sgRNA sequences (5′-3′) are listed in Supplementary Table 3.

Fig. 4 USP44 deubiquitinates and stabilises TRIM25 to promote Ku80 ubiquitination. a USP44 promoted TRIM25 protein expression but not its mRNA expression in a dose-dependent manner. b, c The effect of CHX (b), MG132 and CQ (c) treatment in 293T cells transfected with FLAG-TRIM25 and HA-USP44 or the empty vector plasmids, as well as in sgNC or sgUSP44 SUNE1 cells. d, e HEK293T cells transfected with HA-USP44 or the empty vector (d) and sgNC or sgUSP44 SUNE1 cells (e) co-transfected with FLAG-TRIM25 or MYC-TRIM25 and a vector encoding HA-WT-Ub or its mutants (HA-K48O-Ub or HA-K63O-Ub) were subjected to denature-IP and immunoblotted with the indicated antibodies. f HEK293T and NPC cells transfected with the vector plasmid, HA-USP44 or HA-USP44 (C282A) were immunoblotted with the indicated antibodies. g HONE1 cells transfected with the vector plasmid, HA-USP44 or HA-USP44 (C282A) together with MYC-TRIM25 and HA-K48O-Ub were subjected to denature-IP and immunoblotted with the indicated antibodies. h Mass spectrometry analysis of TRIM25 ubiquitination sites. i HEK293T cells were transfected with the vector plasmid or HA-USP44, HA-Ub and Flag-TRIM25 WT or KR mutants, subjected to denature-IP with anti-Flag beads and then analysed by immunoblot with an anti-HA or anti-Flag antibody. j SUNE1 and HONE1 cells transfected with the indicated plasmids and siRNAs were exposed to IR (6 Gy), fixed 0.5 h later and co-immunostained with the anti-Ku80 antibody. Scale bars, 10 μm. Data in a and b are presented as the mean ± SD; the P values were determined using the two-tailed Student's t-test; n = 3 independent experiments. Source data are provided as a Source Data file.
Bisulfite pyrosequencing analysis. Fresh-frozen specimens and cell lines were treated with the AllPrep RNA/DNA Mini Kit (Qiagen) or EZ1 DNA Tissue Kit (Qiagen), respectively, to extract genomic DNA. Genomic DNA was modified by bisulfite using the EpiTect Bisulfite Kit (Qiagen). PyroMark Assay Design Software 2.0 (Qiagen) was used to design the USP44 bisulfite pyrosequencing primer and PCR primer listed in Supplementary Table 3. The sequencing reaction and methylation level quantification were performed with the PyroMark Q96 ID System (Qiagen).
RT-qPCR assay. Total RNA from fresh-frozen specimens and cell lines was extracted with the AllPrep RNA/DNA Mini Kit (Qiagen) or TRIzol reagent (Invitrogen). Complementary DNA was produced using random primers and M-MLV reverse transcriptase (Promega). RT-qPCR was performed using SYBR Green PCR master mix (Applied Biosystems) and a CFX96 Touch sequence detection system (Bio-Rad). Relative gene expression was calculated by the 2^−ΔΔCT method with GAPDH as an internal control. The primer sequences used for the RT-qPCR assay are listed in Supplementary Table 3.
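As a concrete illustration of the 2^−ΔΔCT method described above, the following minimal Python sketch computes relative expression from Ct values; the function name and all numbers are hypothetical illustrations, not data from the study:

```python
# 2^-ddCt relative expression with GAPDH as the internal control.
def relative_expression(ct_target, ct_gapdh, ct_target_ref, ct_gapdh_ref):
    """Fold change of the target gene in a sample versus a reference sample."""
    delta_ct = ct_target - ct_gapdh              # normalise to GAPDH
    delta_ct_ref = ct_target_ref - ct_gapdh_ref  # same for the reference sample
    return 2 ** -(delta_ct - delta_ct_ref)       # 2^-ddCt

# Hypothetical Ct values: USP44 in an NPC cell line vs. the NP69 control line
print(relative_expression(28.4, 18.1, 24.9, 18.0))  # ~0.09-fold, i.e. downregulated
```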
Western blot assay. Fresh-frozen specimens were ground in liquid nitrogen and lysed to obtain total protein. Cell lines were lysed and sonicated to obtain total protein. Total protein was separated by SDS-PAGE (GenScript) and transferred to PVDF membranes (Millipore). The membranes were blocked in 5% skim milk and incubated overnight with primary antibodies. Following incubation with HRP-linked secondary antibodies, the bands of interest were detected by the X-ray film method. The antibodies used are listed in Supplementary Table 4. Unprocessed scans of immunoblots are provided as Supplementary Fig. 10.
Cell viability assay. The cells were plated into 96-well plates at densities of 800 HONE1 cells or 1000 SUNE1 cells per well. On the indicated days (days 0, 1, 2, 3 and 4), 10 μl Cell Counting Kit-8 (CCK-8) reagent (Dojindo) per well was added to the 96-well plates. After incubation at 37°C for 2 h, the absorbance of each well at 450 nm was detected on a spectrophotometer.
Flow cytometry analysis of cell cycle and apoptosis. The Cell Cycle and Apoptosis Kit (Keygen Biotech) was used to detect the cell cycle distribution and apoptosis rate of each sample. For cell cycle analysis, serum-starved cells were collected 8 h after 6 Gy IR or no IR, washed in PBS and fixed in 70% ice-cold ethanol overnight. After washing, each sample was stained with 500 μl RNase A:PI (1:9, v/v) staining solution and screened. The cell cycle distribution was detected using an ACEA NovoCyte flow cytometer and analysed with NovoExpress 1.3.0 software. For apoptosis analysis, cells were collected 24 h after 6 Gy IR or no IR and washed twice with PBS. Each sample was resuspended in 500 μl binding buffer, screened and incubated with 5 μl Annexin V-FITC and 5 μl PI fluorescent dyes. The apoptosis rate was detected using a CytoFLEX flow cytometer and analysed with CytExpert 2.2 software. FITC−/PI− cells were considered viable, FITC+/PI− cells early apoptotic and FITC+/PI+ cells late apoptotic or dead. The gating strategy is provided in Supplementary Fig. 11.
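The quadrant rule above maps directly onto a simple classification of flow cytometry events. The sketch below is a hypothetical illustration: the thresholds would in practice come from the gating controls, and the FITC−/PI+ quadrant, which the text does not classify, is labelled here only for completeness:

```python
import numpy as np

def classify_events(fitc, pi, fitc_cut=1e3, pi_cut=1e3):
    """Assign Annexin V-FITC / PI quadrant labels to each event."""
    labels = np.full(fitc.shape, "viable", dtype=object)                 # FITC-/PI-
    labels[(fitc >= fitc_cut) & (pi < pi_cut)] = "early apoptotic"       # FITC+/PI-
    labels[(fitc >= fitc_cut) & (pi >= pi_cut)] = "late apoptotic/dead"  # FITC+/PI+
    labels[(fitc < fitc_cut) & (pi >= pi_cut)] = "PI single-positive"    # FITC-/PI+
    return labels

# Four hypothetical events
fitc = np.array([200.0, 5e3, 8e3, 150.0])
pi = np.array([100.0, 300.0, 4e3, 2e3])
print(classify_events(fitc, pi))
```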
Mass spectrometry and co-immunoprecipitation (co-IP) assay. For the IP assay, cells were lysed on ice with IP lysis buffer and sonicated. Total protein was immunoprecipitated overnight at 4 °C with 3 μg of the indicated antibodies. The immune complexes were added to Pierce™ Protein A/G Magnetic Beads (Thermo Scientific) and then washed with IP wash buffer. The collected immune complexes were separated by SDS-PAGE and stained with Coomassie blue. Mass spectrometry was performed by Huijun Biotechnology (China). The proteins of interest in the co-IP were detected by western blot assay. The antibodies used are listed in Supplementary Table 4.
Immunofluorescence and confocal microscopy. Cells were fixed in 0.4% paraformaldehyde, permeabilized in 0.5% Triton X-100, blocked in 1% BSA-PBS and incubated overnight at 4°C with primary antibodies. The coverslips were then stained with secondary antibodies, stained with 4′,6-diamidino-2-phenylindole (DAPI, Sigma) and sealed to prevent quenching. Fluorescence images were captured using a confocal scanning microscope (LSM880 with Fast Airyscan, ZEISS). The antibodies used for immunofluorescence are listed in Supplementary Table 4.
Denature-IP assay. All ubiquitination assays were performed under denaturing conditions. The denature-IP assays were performed as previously described 62,63. At 24 h post-transfection, cells were lysed on ice in NP-40-containing lysis buffer supplemented with EDTA-free protease inhibitor cocktail (Roche). Cell lysates were denatured at 95 °C for 5 min in the presence of 1% SDS and immunoprecipitated with specific antibodies following the co-IP procedure. The lysates and immune complexes were subjected to western blot analysis.
Comet assay. Cells were exposed to IR (6 Gy) and harvested at the indicated time points after IR. Neutral comet assays were conducted with the SCGE DNA Damage Detection Kit (KeyGentec), and cells were stained with propidium iodide (PI). Tail moments were quantified with CaspLab Comet Assay Software, and 20 cells were scored for each case.
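The tail moment is commonly computed as tail length multiplied by the fraction of DNA in the tail; the snippet below is a minimal sketch of that common definition with hypothetical values (the study itself used CaspLab for this quantification):

```python
def tail_moment(tail_length_um, tail_dna_percent):
    """Comet tail moment = tail length x fraction of DNA in the tail."""
    return tail_length_um * tail_dna_percent / 100.0

# Hypothetical cell scored after 6 Gy IR
print(tail_moment(tail_length_um=35.0, tail_dna_percent=42.0))  # 14.7
```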
Laser microirradiation. DSBs were generated with a UVA laser using a pulsed sub-cell illumination system under a 60× objective lens for live-cell imaging. Cells were seeded on 35-mm glass-bottom dishes (Nest, China), transiently transfected with GFP-Ku80 overnight, visualised 24 h after transfection with a Nikon AX confocal microscope and micro-irradiated with a λ = 365 nm, 16 Hz pulse, 65% energy UVA laser of a Micropoint Ablation System (Andor, USA). Consecutive images were captured at 10-s intervals for 10 min.
NHEJ reporter assay. The EJ5-GFP plasmid was generously provided by Professor Muyan Cai (Sun Yat-sen University Cancer Center, China). The NHEJ reporter assay was performed as previously described 64. Cells were seeded in 6-well plates, transfected with EJ5-GFP and infected with I-SceI-expressing adenovirus after 18 h. The medium was replaced after 14 h to avoid adenovirus toxicity. Cells were harvested after 72 h, and the percentage of GFP-positive cells was quantitated by flow cytometry to assess NHEJ-mediated DNA repair efficiency. The gating strategy is provided in Supplementary Fig. 11.
Murine xenograft growth of NPC. Female BALB/c nude mice (6-8 weeks old; n = 64) were purchased from Charles River Laboratories (Beijing, China) and housed in barrier facilities on a 12 h light/dark cycle at 18-22 °C and 50-60% humidity. Mice were randomly assigned for tumour injection and administered a subcutaneous injection of 3 × 10^5 SUNE1 cells. The tumour volume and body weight of the injected mice were measured every three days from day seven after tumour injection. After the diameter of the xenograft tumours reached ~5 mm, the mice were locally irradiated once with 8 Gy, while the control mice were not irradiated. Tumour volume was calculated using the following formula: length × width^2 × 0.5. After 28 days, tumour samples were paraffin-embedded for immunohistochemistry (IHC) analyses. All experimental protocols were approved by the Institutional Animal Care and Use Committee of Sun Yat-sen University and complied with the Declaration of Helsinki. We did our best to minimise animal suffering. The maximal tumour diameter permitted by our ethics committee was 20 mm, and this limit was not exceeded in our study.
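The volume formula from this section is trivial to apply programmatically; a minimal sketch with hypothetical calliper measurements:

```python
def tumour_volume_mm3(length_mm, width_mm):
    """Xenograft volume from the Methods formula: length x width^2 x 0.5."""
    return length_mm * width_mm ** 2 * 0.5

print(tumour_volume_mm3(12.0, 8.0))  # 384.0 mm^3 (hypothetical measurements)
```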
IHC. Sections were deparaffinized, rehydrated, preincubated with hydrogen peroxide, blocked with goat serum (Beyotime), incubated with primary antibodies, labelled with HRP rabbit/mouse secondary antibodies (Dako REAL™ EnVision™), stained with diaminobenzidine (Sigma) and counterstained with haematoxylin. Images were obtained with an AxioVision Rel.4.6 computerised image analysis system (Carl Zeiss). All sections were scored by two experienced pathologists according to the immunoreactive score (IRS) system 65. The intensity of staining was scored as follows: 0 (negative), 1 (weak), 2 (moderate) and 3 (strong). The percentage of positive tumour cells was scored as follows: 1 (<10%), 2 (10-35%), 3 (35-70%) and 4 (>70%). The IRS was calculated as the product of the staining intensity score and the positive-cell percentage score. The antibodies used in the IHC analysis are listed in Supplementary Table 4.
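The IRS calculation can likewise be expressed directly in code; a minimal sketch of the scoring rule described above (how exact boundary values of 10%, 35% and 70% are binned is an assumption):

```python
def irs(intensity, percent_positive):
    """IRS = staining intensity score (0-3) x positive-cell percentage score (1-4)."""
    if percent_positive < 10:
        pct_score = 1
    elif percent_positive <= 35:
        pct_score = 2
    elif percent_positive <= 70:
        pct_score = 3
    else:
        pct_score = 4
    return intensity * pct_score

print(irs(intensity=2, percent_positive=50))  # moderate staining, 35-70% positive -> 6
```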
Statistics and reproducibility. All statistical analyses were performed using SPSS version 22.0 and GraphPad Prism version 6.0.1 software. Differences between groups were analysed using unpaired two-tailed Student's t-tests or χ² tests.

Fig. 5 USP44-TRIM25 increases DSBs by impeding Ku80 recruitment. The Vector + sgNC, USP44 + sgNC and USP44 + sgTRIM25 SUNE1 or HONE1 cells were stably constructed. a Representative comet images and quantitative analysis of tail moments for 6 Gy IR-induced DNA damage in the indicated SUNE1 or HONE1 cells, measured by the comet assay. Scale bars, 10 μm. b Representative images and quantitative analysis of the number of γH2AX foci in the indicated SUNE1 and HONE1 cells with or without 6 Gy IR exposure. Scale bars, 10 μm. c The indicated SUNE1 and HONE1 cells were transfected with GFP-Ku80 and then subjected to laser micro-IR and live-cell imaging. Scale bars, 10 μm. d The indicated SUNE1 cells were transfected with EJ5-GFP, infected with or without I-SceI adenovirus and analysed for GFP positivity by flow cytometry. Data in a, b and d are presented as the mean ± SD; the P values were determined using the two-tailed Student's t-test; n = 20 (a), n = 10 (b), n = 3 (d) repeats from three independent experiments. Source data are provided as a Source Data file.

Fig. 7 USP44 promotes the radiosensitivity of NPC cells in vivo. The SUNE1 cells stably transfected with the indicated plasmids were implanted subcutaneously into female BALB/c nude mice to construct xenograft growth models and exposed to 8 Gy IR or not. a-c Macroscopic images (a), average volume (b) and average weight (c) of the excised tumours for each group (n = 10). d, e Representative images of immunohistochemical staining and IHC scores for USP44, TRIM25, Ku80 and caspase 3 expression in the excised tumours from each group (n = 5 for the IR+USP44 group and n = 10 for the other three groups). Scale bars, 50 μm. f-h Macroscopic images (f), average volume (g) and average weight (h) of the excised tumours for each group (n = 6). Data in b, c, g, h are presented as the mean ± SEM, and those in e are presented as the mean ± SD; the P values were determined using the two-tailed Student's t-test. Source data are provided as a Source Data file.
Data were presented as the mean ± SD or mean ± SEM, and P < 0.05 was considered significant. Survival curves were constructed using the Kaplan-Meier method, and the differences among groups were compared by the log-rank test. Multivariate analysis with a Cox proportional hazards regression model was used to determine independent prognostic factors. The strength of correlations was evaluated using the Spearman correlation. Unless otherwise indicated, the experiments were performed independently in triplicate, and n is indicated in the figure legends. Reporting Summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article.
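For readers reproducing this style of survival analysis outside SPSS/Prism, the Kaplan-Meier and log-rank steps can be sketched with the Python lifelines package; all follow-up times and event flags below are hypothetical, not data from the study:

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical follow-up (months) and relapse flags (1 = event) for two groups
t_low = np.array([12, 20, 25, 33, 40, 60, 61, 70])
e_low = np.array([1, 1, 1, 1, 1, 0, 1, 0])
t_high = np.array([30, 45, 55, 60, 66, 72, 80, 84])
e_high = np.array([1, 0, 1, 0, 0, 0, 1, 0])

kmf = KaplanMeierFitter()
kmf.fit(t_low, event_observed=e_low, label="low USP44")  # KM curve for one group
print(kmf.median_survival_time_)

# Log-rank comparison between the two expression groups, as in Fig. 8c-e
res = logrank_test(t_low, t_high, event_observed_A=e_low, event_observed_B=e_high)
print(res.p_value)
```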
Data availability
The microarray data used in this study are available in the Gene Expression Omnibus (GEO; http://www.ncbi.nlm.nih.gov/geo/) under accession codes GSE12452, GSE52068 and GSE62336. The data used in this study for gene expression profiling interactive analysis (GEPIA; http://gepia.cancer-pku.cn/index.html) are available in The Cancer Genome Atlas (TCGA; https://tcga-data.nci.nih.gov/). All other data supporting the findings of this study are available within the article and its Supplementary Information files. The key raw data have been deposited in the Research Data Deposit public platform (https://www.researchdata.org.cn/) with the approval number RDDB2021760690. Source data are provided with this paper.

Fig. 8 Low expression of USP44 indicates a poor prognosis and is associated with tumour relapse in NPC patients. a Representative images of immunohistochemical staining for USP44 protein expression, graded according to staining intensity, in 376 NPC tissues. Scale bars, 50 μm. b Correlations of locoregional recurrence status with the level of USP44 expression detected by IHC. The P value was determined using the two-tailed χ² test. c-e Kaplan-Meier analysis of locoregional recurrence-free survival (c), disease-free survival (d) and overall survival (e) according to USP44 expression levels. The P values in c-e were determined using the log-rank test. f-h Forest plots of multivariate Cox regression analyses showing the significance of different prognostic variables for NPC locoregional recurrence-free survival (f), disease-free survival (g) and overall survival (h). i Proposed working model of USP44. USP44 recruits and stabilises TRIM25 by removing the K48-linked polyubiquitin chains of TRIM25, and TRIM25 degrades Ku80 by promoting its polyubiquitination and inhibits its recruitment to DSBs, which further inhibits the NHEJ pathway and enhances NPC radiosensitivity. In NPC, hypermethylation of the USP44 promoter leads to its downregulation at the mRNA and protein levels, which blocks the anticancer effect of the USP44-TRIM25-Ku80 axis. Source data are provided as a Source Data file.
version: v3-fos-license

---

added: 2021-04-27T05:12:37.606Z
created: 2021-04-01T00:00:00.000
id: 233394760
metadata:
{
    "extfieldsofstudy": ["Computer Science", "Medicine"],
    "oa_license": "CCBY",
    "oa_status": "GOLD",
    "oa_url": "https://www.mdpi.com/1424-8220/21/8/2722/pdf",
    "pdf_hash": "9210224fd796f34373f59561de43fce3b96a1494",
    "pdf_src": "PubMedCentral",
    "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41794",
    "s2fieldsofstudy": ["Computer Science"],
    "sha1": "9210224fd796f34373f59561de43fce3b96a1494",
    "year": 2021
}
source: pes2o/s2orc
text:
JLGBMLoc—A Novel High-Precision Indoor Localization Method Based on LightGBM
Wi-Fi based localization has become one of the most practical methods for mobile users in location-based services. However, due to multipath interference and the high-dimensional sparsity of fingerprint data, it is hard for localization systems based on received signal strength (RSS) to obtain high accuracy. In this paper, we propose a novel indoor positioning method named JLGBMLoc (Joint denoising auto-encoder with LightGBM Localization). Firstly, because noise and outliers may influence dimensionality reduction on high-dimensional sparse fingerprint data, we propose a novel feature extraction algorithm, named joint denoising auto-encoder (JDAE), which reconstructs the sparse fingerprint data for a better feature representation and restores the fingerprint data. Then, LightGBM is introduced to Wi-Fi localization by binning the processed fingerprint data into histograms and growing the decision tree leaf-wise with a depth limit. Finally, we evaluated the proposed JLGBMLoc on the UJIIndoorLoc dataset and the Tampere dataset; the experimental results show that the proposed model increases the positioning accuracy dramatically compared with other existing methods.
Introduction
In recent years, location-based services (LBS) have developed rapidly. However, due to severe signal attenuation and multipath effects, general outdoor positioning facilities (such as GPS) cannot work effectively inside buildings [1]. Therefore, several types of indoor positioning technologies have been proposed, such as wireless local area network (WLAN), visible light, cellular networks and their combinations [2,3]. Indoor positioning based on Wi-Fi signals has the advantages of convenient deployment, low hardware cost and high real-time performance. However, Wi-Fi based indoor positioning faces the volatility of Wi-Fi signals and the high-dimensional sparsity of fingerprints [4]. This study focuses on improving indoor positioning using Wi-Fi fingerprints.
Generally, a Wi-Fi system consists of some fixed access points (APs) [5]. Mobile devices (such as laptops and mobile phones) that connect to Wi-Fi can communicate directly or indirectly with each other. The received signal strength (RSS) of the APs is usually used to pre-build a fingerprint database from which the location of the mobile user is inferred. Fingerprint positioning has two stages: the offline stage and the online stage [6]. In the offline stage, RSS readings are measured at known locations from the surrounding access points and correlated with these physical locations to build a fingerprint database; the collected data form the training set. In the online stage, the real-time RSS vector sampled at the target is compared with the stored fingerprints, the location of the best-matched fingerprint is selected as the target location, and the positioning result is sent back to the requester.
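This two-stage scheme can be made concrete with a minimal nearest-neighbour matcher; the database below is a hypothetical three-AP, three-location example (real deployments commonly encode missing APs with a floor value such as −100 dBm):

```python
import numpy as np

# Offline stage: RSS fingerprints (dBm) collected at known reference locations
fingerprints = np.array([[-45.0, -70.0, -100.0],
                         [-60.0, -55.0, -80.0],
                         [-90.0, -65.0, -50.0]])
locations = np.array([[0.0, 0.0], [5.0, 0.0], [5.0, 8.0]])  # (x, y) in metres

def locate(rss_query):
    """Online stage: return the location of the best-matching stored fingerprint."""
    dists = np.linalg.norm(fingerprints - rss_query, axis=1)
    return locations[np.argmin(dists)]

print(locate(np.array([-58.0, -57.0, -82.0])))  # -> [5. 0.]
```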
Fingerprint-based localization faces two key problems. Firstly, the observed RSS vectors contain a large number of missing values due to out-of-range APs, random noise, signal fluctuation or scanning duration [7], especially inside large buildings such as shopping malls and hospitals, which results in extreme data sparsity. Traditional dimensionality reduction methods, including principal component analysis (PCA) [8] and linear discriminant analysis (LDA) [9], treat all samples as a whole to find an optimal linear mapping with the smallest mean square error, but they perform poorly on complex data. With the development of neural networks, feature extraction and fusion have become more popular.
Another challenge is how to achieve high-precision and high-efficiency localization under multipath effects and noise fluctuations. The indoor propagation of Wi-Fi signals is easily affected by the human body, obstacles and walls, which degrades the accuracy of fingerprint positioning. Traditional machine learning methods, including k-nearest neighbor (KNN) [10] and support vector machine (SVM) [11], are not effective at dealing with non-linear problems. Compared with these algorithms, the artificial neural network (ANN) [12] estimates the non-linear position from the input through selected activation functions and adjustable weights, and can approximate high-dimensional, highly nonlinear models. Note that an ANN is fully connected; the depth of the neural network is directly related to its computational complexity, which may directly affect the accuracy of positioning results. In [13], a hybrid deep learning model (HDLM) is proposed to enhance the localization performance of existing Wi-Fi RSSI-based positioning systems and reduce the positioning error; it uses RSSI heat maps instead of raw RSSI signals from APs. Hoang et al. proposed recurrent neural networks for accurate received signal strength indicator (RSSI) indoor positioning [14], comparing different RNN types, including long short-term memory (LSTM) [15] and the gated recurrent unit (GRU) [16]. However, these algorithms still face challenges such as spatial ambiguity and RSS instability. In [17], a convolutional neural network (CNN) based indoor localization system with Wi-Fi fingerprints is proposed, which achieves 95% floor-level localization accuracy on the UJIIndoorLoc dataset.
In summary, RSS-based indoor positioning still faces the problem that noise and outliers affect high-dimensional sparse fingerprint data, and it is difficult to achieve high accuracy and high efficiency under multipath effects and noise fluctuations. To solve these problems, this paper focuses on a novel feature extraction algorithm that reconstructs sparse fingerprint data to obtain a better feature representation. Moreover, to avoid the high space complexity and slow training caused by the pre-sorting algorithm of existing gradient boosting models, a novel positioning model is introduced that bins the processed fingerprint data into histograms and grows the decision tree leaf-wise with a depth limit, which reduces memory usage and improves computation speed. The main contributions of this work are summarized as follows:
(1) Aiming at the problem of extracting key features from sparse RSS data and reducing the influence of noise and outliers in the dataset, we propose a novel feature extraction algorithm, named joint denoising auto-encoder (JDAE), which reconstructs the sparse fingerprint data for a better feature representation and restores the fingerprint data.
(2) To achieve higher positioning accuracy with high efficiency, LightGBM is introduced to Wi-Fi localization by binning the processed fingerprint data into histograms and growing the decision tree leaf-wise with a depth limit (see the sketch after this list).
(3) The proposed model is evaluated on the UJIIndoorLoc [18] and Tampere [19] datasets.
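As a sketch of contribution (2), the snippet below trains a LightGBM classifier with histogram binning and a depth-limited leaf-wise tree. The parameters shown are real LightGBM options, but their values and the random stand-in for JDAE-encoded fingerprints are illustrative only:

```python
import numpy as np
import lightgbm as lgb

# Hypothetical stand-in for JDAE-encoded fingerprints: 1000 samples x 64 features,
# each labelled with one of 10 reference-point classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))
y = rng.integers(0, 10, size=1000)

clf = lgb.LGBMClassifier(
    objective="multiclass",
    num_leaves=31,      # leaf-wise growth bounded by leaf count...
    max_depth=8,        # ...and by an explicit depth limit, as described above
    max_bin=255,        # histogram binning of continuous features
    learning_rate=0.05,
    n_estimators=200,
)
clf.fit(X[:800], y[:800])
print((clf.predict(X[800:]) == y[800:]).mean())  # hold-out accuracy
```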
The experimental results show that the proposed model is superior to traditional machine learning methods: the room-level positioning accuracy reaches 96.73% on UJIIndoorLoc, which is nearly 10% higher than the DNN method [20], and the floor-level positioning accuracy reaches 98.43% on Tampere, outperforming current advanced methods.
The rest of this article is organized as follows: Section 2 introduces the background. Section 3 describes the architecture and the positioning process of our proposed model. In Section 4, we describe the dataset preprocessing, optimize the parameters of the model through experimental studies and compare it with several positioning-accuracy benchmarks. Finally, we summarize the contributions of this work in Section 5.
Denoising Auto-Encoder
The auto-encoder is an unsupervised algorithm that automatically learns features from unlabeled data, giving a better feature description than the original data [21] and completing automatic feature selection, as shown in Figure 1. In [22], the AutLoc system uses an auto-encoder to improve the accuracy of indoor localization by preprocessing the noisy RSS: a deep auto-encoder is trained to denoise the measured data, and the RSS fingerprints are then built from the trained weights. However, the positioning accuracy of this method can be improved further. Since datasets collected in large buildings are strongly sparse, the output location information depends on only a small fraction of the dimensions of the input vector, which means the auto-encoder can effectively reduce the dimensionality of the data while retaining the necessary feature information. This conclusion is confirmed in the experiments below. Unlike the plain auto-encoder, part of the input data is "corrupted" during the training process of the denoising auto-encoder (DAE) [23]. Besides minimal reconstruction error or sparseness, an auto-encoder can also be required to be robust to partial data corruption. The denoising auto-encoder is an auto-encoder that increases the robustness of the encoding by introducing noise. For an input vector x, we first randomly set the values of some of its dimensions to 0 according to a given ratio, obtaining a damaged vector. The corrupted vector is then fed to the auto-encoder, and the original lossless input x is reconstructed. The "corruption" process is equivalent to adding noise. The core idea of the DAE is to encode and decode the "corrupted" original data and then recover it, which reduces the noise in the data and improves the robustness of the model.
The principle is shown in Figure 2, where f_θ is the encoder, g_θ′ is the decoder, and L(x, x′) is the loss function of the DAE network. The input data x is "corrupted" by noise according to the q_D distribution: we randomly set the values of some dimensions of x to 0 according to a fixed ratio, obtaining the damaged vector x̃. The encoder maps the damaged vector to the intermediate layer as

h = s(W x̃ + b), (1)

where W is the link weight from the input layer to the intermediate layer, b is the bias of the intermediate layer and s is the activation function. The decoding layer then transports h to the output layer as

z = s(W′ h + b′), (2)

where W′ is the link weight from the intermediate layer to the output layer and b′ is the bias of the output layer; z is regarded as the prediction (reconstruction) of x. The training problem is to adjust the network parameters by minimizing L(x, z), so that the final lossless output is as close as possible to the original input x, which then serves as the input of the training model.
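To make the encoding-corruption-reconstruction loop concrete, the following minimal sketch implements a DAE in Keras (the experiments in this paper run in a TensorFlow environment). The layer sizes, corruption ratio and placeholder data are illustrative assumptions, not values taken from the paper:

```python
import numpy as np
from tensorflow.keras import layers, models

# Illustrative sizes: 520 RSS features (as in UJIIndoorLoc), 128 hidden units.
INPUT_DIM, HIDDEN_DIM, CORRUPT_RATIO = 520, 128, 0.1

def corrupt(x, ratio=CORRUPT_RATIO):
    """Randomly zero a fraction of each input vector (the q_D 'corruption')."""
    mask = np.random.rand(*x.shape) >= ratio
    return x * mask

# Encoder f_theta: h = s(W x~ + b); decoder g_theta': z = s(W' h + b').
inp = layers.Input(shape=(INPUT_DIM,))
h = layers.Dense(HIDDEN_DIM, activation="sigmoid")(inp)   # intermediate layer
z = layers.Dense(INPUT_DIM, activation="sigmoid")(h)      # reconstruction
dae = models.Model(inp, z)
dae.compile(optimizer="adam", loss="mse")                 # L(x, z)

# Train on (corrupted input, clean target) pairs.
x_train = np.random.rand(1000, INPUT_DIM).astype("float32")  # placeholder data
dae.fit(corrupt(x_train), x_train, epochs=5, batch_size=60, verbose=0)
```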
Classification and Regression Tree
GBDT (gradient boosting decision tree) is an ensemble learning method built as an additive model of regression trees [24]. The main idea is to keep adding weak learners and performing feature splits to grow trees. In [25], Wang proposes an algorithm named subspace gradient boost decision tree (Subspace-GBDT) to obtain a strong classifier, which reduces the uncertainty caused by a single fingerprint; the multiple fingerprints are based on the signal subspace and RSSI, where the signal subspace represents the characteristic representation of the received array signal. GBDT uses the classification and regression tree (CART) as its weak learner, a decision tree with a binary-tree logical structure that can carry out regression tasks. The CART classification tree algorithm uses the Gini coefficient instead of information gain; the smaller the Gini coefficient, the better the feature. Assuming the dataset has K categories and the probability of the k-th category is p_k, the Gini coefficient of the probability distribution is

Gini(p) = Σ_k p_k (1 − p_k) = 1 − Σ_k p_k².

For the sample set D with K categories, where the number of samples in the k-th category is |C_k|, the Gini coefficient of D is

Gini(D) = 1 − Σ_k (|C_k| / |D|)².

If D is divided into D_1 and D_2 according to value a of a certain feature A, then under the condition of feature A the Gini coefficient of D is

Gini(D, A) = (|D_1| / |D|) Gini(D_1) + (|D_2| / |D|) Gini(D_2).

Therefore, for the CART classification tree, after the Gini coefficient of each feature on the dataset D has been calculated, the feature A with the smallest Gini coefficient and the corresponding eigenvalue a are selected.
According to this optimal feature and optimal eigenvalue, the dataset is divided into the two parts D_1 and D_2, and the left and right children of the current node are created at the same time. Recursion stops at the current node and the decision subtree is returned once the Gini coefficient falls below a threshold. Note that each time a tree is added, a new basic learner h(·) is in fact being learned to improve the final prediction.
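The Gini-based split selection described above can be written down directly. In the following sketch, the function names are ours and the exhaustive scan is kept for clarity rather than speed; it computes Gini(D) and Gini(D, A) and returns the feature/value pair with the smallest coefficient:

```python
import numpy as np

def gini(labels):
    """Gini(D) = 1 - sum_k (|C_k| / |D|)^2."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def gini_split(labels, feature, a):
    """Gini(D, A): weighted Gini after splitting on value a of a feature."""
    left, right = labels[feature <= a], labels[feature > a]
    n = len(labels)
    return len(left) / n * gini(left) + len(right) / n * gini(right)

def best_split(X, y):
    """Scan every feature (AP) and candidate value; return the pair with
    the smallest Gini coefficient, as in the CART procedure above."""
    best = (None, None, np.inf)
    for j in range(X.shape[1]):
        for a in np.unique(X[:, j]):
            g = gini_split(y, X[:, j], a)
            if g < best[2]:
                best = (j, a, g)
    return best
```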
System Model
The LightGBM used in this paper is an improvement on the GBDT algorithm. Assume that the region of interest has N APs and M reference points (RPs); the RSS input set can be defined as f = {f_1, f_2, ..., f_M}, and the corresponding location set is l = {l_1, l_2, ..., l_M}. The GBDT algorithm can be regarded as an additive model composed of K trees:

g(f_i) = Σ_{k=1}^{K} h_k(f_i),

where g(f_i) represents the predicted output, i.e., the predicted position, f_i = {F_i1, F_i2, ..., F_iN} is the RSS value set of the i-th sample, and F_ij is the j-th eigenvalue (i.e., RSS value) of RP i. Our goal is to make the predicted value g(f_i) of the tree ensemble as close as possible to the true value l_i = (x_i, y_i) while retaining the largest possible generalization ability. According to the characteristics of a sample, each tree routes it to a corresponding leaf node with an associated score. After training is complete and K trees have been obtained, the scores of all trees are added to obtain the predicted value of the sample. In each iteration, a tree is added on top of the existing trees to fit the residual between the prediction of the previous ensemble and the true value. Let g_{t−1}(f) be the ensemble learner obtained after the (t − 1)-th iteration; the focus of the t-th round of training is to minimize the loss function

L_t = Σ_i l(l_i, g_{t−1}(f_i) + h_t(f_i)) (8)

with the squared loss

l(l_i, g(f_i)) = (l_i − g(f_i))², (9) so that l(l_i, g_{t−1}(f_i) + h_t(f_i)) = (r − h_t(f_i))², (10)

where r = l_i − g_{t−1}(f_i) represents the residual. When generating each decision tree, the GBDT algorithm fits the residual of the previous model using the steepest-descent approximation, meaning the negative gradient of the loss function is used as the approximate value of the residual in the boosting-tree algorithm. The negative gradient of the loss function of the i-th sample in the t-th iteration is

r_it = −[∂ l(l_i, g(f_i)) / ∂ g(f_i)] evaluated at g = g_{t−1}.

The residual obtained in the previous step is used as the new target value of the sample, and the data (f_i, r_it), i = 1, 2, ..., M are used as training data for the next tree, yielding a new regression tree whose leaf-node regions are R_jt, j = 1, 2, ..., J, where J is the number of leaf nodes of the t-th regression tree. For each leaf region j, we calculate the best-fit value as

c_jt = argmin_c Σ_{f_i ∈ R_jt} l(l_i, g_{t−1}(f_i) + c).

Then we update the strong learner, g_t(f) = g_{t−1}(f) + Σ_j c_jt I(f ∈ R_jt), and after K rounds obtain the final learning function g_K(f). The gradient boosting algorithm improves robustness to data outliers through the loss function, a substantial improvement over traditional machine learning algorithms.
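Under the squared loss, the negative gradient reduces exactly to the residual, so one round of GBDT amounts to fitting a small regression tree to the current residuals and adding it to the ensemble. A minimal sketch, assuming scikit-learn trees as the weak learners and a one-dimensional target for brevity:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gbdt_fit(F, l, K=100, max_depth=3):
    """Fit g_K(f) = sum_k h_k(f) by repeatedly fitting trees to residuals;
    with squared loss, the negative gradient equals the residual r = l - g."""
    pred = np.full(len(l), l.mean())        # g_0: constant initial prediction
    trees = []
    for _ in range(K):
        r = l - pred                        # residuals (negative gradient)
        h = DecisionTreeRegressor(max_depth=max_depth).fit(F, r)
        trees.append(h)
        pred += h.predict(F)                # g_t = g_{t-1} + h_t
    return trees, pred
```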
GBDT can flexibly handle various types of data. However, the dependence between weak learners makes it difficult to train in parallel, which results in relatively low operating efficiency, and high-dimensional data further increases the complexity of the model. LightGBM is a high-performance gradient boosting framework based on decision trees, released by Microsoft in 2017 [26], which can be used for ranking, classification, regression and other machine learning tasks. LightGBM optimizes the GBDT algorithm to speed up model training without compromising accuracy.
Among improved gradient boosting models for Wi-Fi positioning, XGBoost [27] uses a pre-sorting algorithm to reduce the amount of computation needed to find the best split point, but it still needs to traverse the positioning dataset during node splitting, which increases the space complexity and slows training. Compared with XGBoost, LightGBM uses a histogram algorithm to process the positioning dataset and a leaf-wise split strategy during Wi-Fi-based positioning, which avoids the large space consumption of pre-sorting and improves the calculation speed.
Firstly, in our positioning fingerprint, AP_j is regarded as the j-th feature of the fingerprint data, and F_j = {F_1j, F_2j, ..., F_Mj} is defined as the set of eigenvalues contained in AP_j. A histogram decision tree algorithm then discretizes F_j into a histogram of width k. Instead of the traditional pre-sorting approach, each precise, continuous value is mapped into one of a series of discrete bins. The histogram accumulates the required statistics over these discrete values, and a traversal finds the best positioning AP feature and the corresponding eigenvalue to use as the split point. No additional storage of pre-sorting results is needed; only the discrete feature values must be saved, reducing memory consumption to roughly one-eighth of the original. The histogram is shown in Figure 3.
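A toy version of this histogram construction for a single AP feature might look as follows; the bin count and the use of per-bin gradient statistics are illustrative assumptions, and LightGBM's internal implementation differs in detail:

```python
import numpy as np

def build_histogram(feature_values, gradients, k=255):
    """Discretize one AP's RSS values into k bins and accumulate gradient
    statistics per bin; split points are then searched over the k bin
    edges instead of over all sorted raw values."""
    edges = np.linspace(feature_values.min(), feature_values.max(), k + 1)
    bins = np.clip(np.digitize(feature_values, edges) - 1, 0, k - 1)
    grad_sum = np.bincount(bins, weights=gradients, minlength=k)
    count = np.bincount(bins, minlength=k)
    return edges, grad_sum, count
```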
Considering the high-dimensional sparsity of the fingerprint data, the features represented by many APs are mutually exclusive, meaning they usually do not take non-zero values at the same time. With the exclusive feature bundling (EFB) algorithm of LightGBM, the complexity of fingerprint-feature histogram construction changes from O(data × feature) to O(data × bundle), with bundle << feature, which greatly accelerates training of the gradient boosting model without affecting the positioning accuracy. Secondly, the traditional decision tree splitting strategy grows level-wise when searching for the best positioning AP feature and the corresponding feature value as the split point. However, AP features in the same level are treated indiscriminately, and many APs have low split gain, which incurs unnecessary cost. Therefore, a leaf-wise algorithm with a depth limit is used to find the feature with the largest split gain: the best feature of the fingerprint data is selected from all current leaves and then split, which reduces more error and yields better accuracy. As shown in Figure 4, compared with level-wise growth, leaf-wise growth reduces more error and achieves better accuracy for the same number of splits; however, when leaf-wise trees grow too deep, the decision tree over-fits. Therefore, a maximum depth limit is added to prevent over-fitting while maintaining high efficiency.
Feature Extraction Algorithm
Since each iteration of the gradient boosting algorithm adjusts the samples according to the prediction results of the previous iteration, the bias of the model keeps decreasing as iterations continue, which makes the model more sensitive to noise. In an indoor positioning dataset, the outliers caused by multipath signals and NLOS conditions therefore affect the training of the database.
To extract key features from sparse RSS data and reduce the influence of noise and outliers in the dataset, we propose a feature extraction algorithm called the joint denoising auto-encoder (JDAE). Given the sparseness of our positioning dataset, most input dimensions are 0. If we applied the DAE directly, zeroing inputs with a certain probability, the zeroed dimensions would most likely already be 0, making the corruption ineffective. Thus, we place an auto-encoder in front of the DAE for feature extraction, reducing the sparseness of the dataset. The feature output produced by the auto-encoder is almost entirely non-zero in each dimension, which makes the probabilistic zeroing performed by the DAE effective.
The architecture of the JDAE is shown in Figure 5. The part from the input layer to the feature layer is the auto-encoder: x denotes the input RSS data, h(1) is the hidden layer of the auto-encoder, and f denotes the feature data produced by the auto-encoder. This part extracts key features from the sparse Wi-Fi data.
After the reconstructed features are obtained, the next part introduces the denoising auto-encoder to reduce the influence of noise and data outliers. The DAE randomly uses a partially damaged input to avoid learning the identity function, which is what gives the auto-encoder its denoising property. Dropout in the auto-encoder network means that the weights of some nodes are randomly deactivated during model training; here we apply the dropout layer to the input layer of the DAE rather than to a hidden layer. The damage ratio generally does not exceed 0.5, and Gaussian noise can also be used to damage the data. Features learned robustly from the damaged input can then be used to restore the corresponding noise-free fingerprint data. The "⊗" symbols in the feature layer denote features "corrupted" according to our setting, and h(2) is the hidden layer of the denoising auto-encoder. After the dataset has been processed by the JDAE, the output layer d is fed into the LightGBM model.
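Putting the two stages together, a minimal Keras sketch of the JDAE could look like the following. Here a train-time Dropout layer stands in for the probabilistic corruption of the feature layer, and the output is trained to restore the noise-free fingerprint; the 128/64 hidden/feature sizes match the configuration selected in the experiments below, while everything else is an illustrative assumption:

```python
import numpy as np
from tensorflow.keras import layers, models

INPUT_DIM = 520               # raw RSS dimension (e.g. UJIIndoorLoc)
AE_HIDDEN, FEAT_DIM = 128, 64

inp = layers.Input(shape=(INPUT_DIM,))
# Auto-encoder part: input -> hidden h(1) -> dense feature layer f.
h1 = layers.Dense(AE_HIDDEN, activation="relu")(inp)
feat = layers.Dense(FEAT_DIM, activation="relu")(h1)
# Denoising part: train-time dropout on the (mostly non-zero) features
# plays the role of the probabilistic "corruption" (damage ratio <= 0.5).
corrupted = layers.Dropout(0.3)(feat)
h2 = layers.Dense(AE_HIDDEN, activation="relu")(corrupted)
out = layers.Dense(INPUT_DIM, activation="sigmoid")(h2)  # restored fingerprint

jdae = models.Model(inp, out)
jdae.compile(optimizer="adam", loss="mse")

x = np.random.rand(2000, INPUT_DIM).astype("float32")    # placeholder RSS data
jdae.fit(x, x, epochs=10, batch_size=60, verbose=0)
d = jdae.predict(x)           # denoised output d, fed to LightGBM
```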
System Architecture
The positioning method based on LightGBM is divided into two stages: the offline training stage and the online positioning stage. In the training stage, the RSS of each predefined RP is collected into the database, and RP i has a corresponding fingerprint vector {F_i1, F_i2, ..., F_iN, x_i, y_i, R_i}, where N is the number of available features (i.e., RSS values) of all APs. Note that x_i and y_i here represent position coordinates, which differ from their meaning in the feature extraction diagram above, and R_i is the corresponding room ID. With a large-building dataset, the algorithmic complexity of regression-based location prediction is too high, and it is difficult to find benchmarks for comparison. We therefore recast the positioning problem as a room classification problem, which reduces the complexity of the algorithm and allows comparison with existing advanced methods. The coordinates (x_i, y_i) are not used as output here; only the room IDs are used. After standardizing the dataset, the proposed JDAE is applied to extract key features from the sparse RSS data and reduce the influence of noise and data outliers. LightGBM is then used to classify the processed data, and the input parameters are adjusted according to the results to obtain the optimal model. In the online stage, the proposed model localizes each query by matching the received fingerprint measurement and sending the room ID back to the mobile user. The detailed algorithm is shown in Figure 6. The mapping between location and Wi-Fi signal data is learned through LightGBM.
The idea of training on the processed dataset is to transform the positioning problem into a multi-class classification problem through position discretization, with each position corresponding to a category. The samples are then trained, and the results of all decision trees are fused to obtain the final classification result. The steps for training the fingerprint with the algorithm are as follows (a sketch of the corresponding training code is given after this list): (1) First, a set of locations is selected as sampling points, and the Wi-Fi fingerprint data are collected as the features of each sample. The histogram algorithm discretizes the eigenvalues of the samples into K integers and constructs a histogram of width K for each feature. Then, according to the discrete histogram values, each AP is used as a feature of the dataset, and at each iteration the AP and corresponding eigenvalue that minimize the loss function are chosen as the best split point. (2) To prevent the fingerprint database model from becoming too complicated and over-fitting, each node split must be constrained: a split is performed only when its gain exceeds a threshold, and a tree stops splitting once it reaches the maximum depth. (3) When generating the decision trees, the gradient boosting algorithm makes the predicted result approach the real result step by step, and offline training is completed through the learning of multiple decision trees. In the online positioning stage, each test Wi-Fi measurement is normalized and sent to the trained multi-class model for positioning.
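As referenced in the steps above, the offline training and online prediction can be sketched with the standard lightgbm Python API. The data and labels below are placeholders, and the parameter values are illustrative rather than the tuned values reported later:

```python
import numpy as np
import lightgbm as lgb
from sklearn.model_selection import train_test_split

d = np.random.rand(19937, 64)                    # JDAE output (placeholder)
rooms = np.random.randint(0, 50, size=len(d))    # room IDs (placeholder)
X_tr, X_te, y_tr, y_te = train_test_split(d, rooms, test_size=0.05)

params = {
    "objective": "multiclass",
    "num_class": 50,
    "boosting_type": "gbdt",       # histogram-based gradient boosting
    "num_leaves": 31,              # leaf-wise growth ...
    "max_depth": 8,                # ... with a depth limit
    "min_split_gain": 0.02,        # split only when the gain beats a threshold
}
model = lgb.train(params, lgb.Dataset(X_tr, label=y_tr), num_boost_round=200)

# Online stage: predict the room ID for new, normalized fingerprints.
pred_room = model.predict(X_te).argmax(axis=1)
print("room accuracy:", (pred_room == y_te).mean())
```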
Data Preprocessing
The UJIIndoorLoc [18] dataset used in this paper covers three buildings of Jaume I University, each with four or five floors, over an area of nearly 110,000 square meters. It can be used for classification (for example, building and floor identification) or regression (estimation of longitude and latitude). It was created by more than 20 different users with 25 Android devices. The database consists of 19,937 training samples and 1111 testing samples. As shown in Table 1, the 529 attributes contain RSS values, location coordinates and other useful information. Each Wi-Fi fingerprint is characterized by the detected APs and corresponding RSS values; one Wi-Fi fingerprint consists of 520 RSS intensity values.
However, we found that all room IDs in the given testing set are 0, which means the testing set supports only floor-level positioning, not room-level positioning. Therefore, we divide the training set itself into training and testing sets. Using K-fold cross-validation [28], we first split the dataset by stratified sampling into K mutually exclusive subsets of equal size; each time, K − 1 subsets are used as the training set and the remaining one as the testing set, yielding K training/testing splits, K learned models, and the average of the K test results as the evaluation result. Regarding the choice of K: for a small dataset, 5 is usual (80% of the data for training, 20% for testing); for large datasets on the order of tens of thousands of samples, K is usually 10, 20 or 50. Here we select K = 20, so each iteration has about 1000 testing samples. On the UJIIndoorLoc dataset, the input RSS values range from −104 dBm to 0 dBm and are normalized for model training. As shown in [29], different data representations of RSS fingerprints can affect the success rate and error. Any AP that is not detected in a measurement has its RSS value marked as 100 dBm, and we map these values to 0.
Each detected RSS value is normalized as

Norm(RSS_ij) = ((RSS_ij − RSS_min) / (−RSS_min))^β,

where i is the AP identifier, RSS_ij is the j-th RSS value of RP i, the minimum value RSS_min is the lowest RSS value over all fingerprints in the database and all APs, and β is a mathematical constant. The normalization scales the range of values for each feature to [0, 1].
The β value can be set to 1. The results in [29] show that normalized data tend to express the RSS values with the best performance and tame the fluctuating RSS signal; therefore, normalized data are used to represent the Wi-Fi fingerprints in this paper. The Tampere dataset covers two buildings of the Tampere University of Technology [19]. In the first building, there are 1478 training samples and 489 testing samples; the 312 attributes include Wi-Fi fingerprints (309 APs) and coordinates. The intensity values are negative integers ranging from −100 dBm to 0 dBm, and each Wi-Fi fingerprint consists of 309 intensity values. In the second building, there are 583 training samples and 175 testing samples, with 357 attributes comprising Wi-Fi fingerprints (354 APs) and coordinates. The Tampere dataset uses floor height rather than floor number as the floor representation. In this experiment, the tuned optimal model is verified on Tampere to test the performance of JLGBMLoc.
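A sketch of this preprocessing, assuming the normalized representation given above (with undetected APs mapped to 0) and the 20-fold split, might be:

```python
import numpy as np
from sklearn.model_selection import KFold

def normalize_rss(rss, not_detected=100, beta=1.0):
    """Map raw RSS (negative dBm; 100 = 'AP not detected') into [0, 1]."""
    rss = np.where(rss == not_detected, np.nan, rss.astype(float))
    rss_min = np.nanmin(rss)                    # lowest RSS in the database
    norm = ((rss - rss_min) / (-rss_min)) ** beta
    return np.nan_to_num(norm, nan=0.0)         # undetected APs -> 0

raw = np.random.randint(-104, 0, size=(19937, 520))   # placeholder fingerprints
X = normalize_rss(raw)

# 20-fold cross-validation: each fold leaves ~1000 samples for testing.
kf = KFold(n_splits=20, shuffle=True, random_state=0)
for train_idx, test_idx in kf.split(X):
    X_train, X_test = X[train_idx], X[test_idx]
```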
The experiments run on a laptop with an Intel i5-6300 CPU, using Python 3.7.6 in a TensorFlow environment to implement the models. The parameters used in the initial optimization are shown in Table 2. The loss function is the mean squared error (MSE). The training batch size is set to 60, the patience parameter for early stopping is set to 5, and the feature fraction is set to 0.8. The learning rate acts as a shrinkage coefficient: after each iteration, the weights of the leaf nodes are multiplied by this coefficient, weakening the influence of each individual tree so that later decision trees have more room to learn.
Performance Evaluation of JDAE
The performance of the model is evaluated by comparison with state-of-the-art methods on two datasets, UJIIndoorLoc and Tampere. The ratio of the number of correctly matched positions to the total number of positions is used as the accuracy rate to evaluate each method. The accuracy in this work is defined as Accuracy = N_A / N, where N_A is the number of correctly predicted samples and N is the total number of samples.
First, we use UJIIndoorLoc to optimize the parameters of the model; the tuned optimal model is then verified on the Tampere dataset to test the performance of JLGBMLoc across datasets. We first train the model for floor positioning and then use the trained model to test room positioning accuracy.
The fixed parameters of LightGBM and their default values are given in Table 2; LightGBM is first trained alone on UJIIndoorLoc without feature extraction. As shown in Figure 7, after the iterations complete, the loss function value reaches 0.51. The positioning accuracy on the testing data finally reaches 91.04%, which is nearly 10% higher than the DNN method in floor-level positioning [20]. The running time of LightGBM is 5.5 s, almost twice as fast as XGBoost [27]. Next, different auto-encoder models are built to find the best configuration. The comparison is shown in Figure 8: when the hidden layer and output layer are set to 128 and 64 units, respectively, the floor success rate reaches its highest value of 95.59%. Therefore, for the auto-encoder, we choose hidden and output layers of 128 and 64 units. The weights of the denoising auto-encoder are then initialized. If the network weights are initialized too small, the signal gradually shrinks as it passes between layers and has little effect. The initialization method automatically adjusts the random distribution according to the number of input and output nodes of a given layer so that the weights have zero mean: assuming input nodes in the input dimension and output nodes in the output dimension, the variance of the uniform distribution is 2/(input + output). Here we make the input and output dimensions equal. The CDFs (cumulative distribution functions) of the two methods are shown in Figure 9. Compared with the single LightGBM model, the accuracy is further improved; moreover, our JDAE method is 6% more accurate than the single auto-encoder method of [30].
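The zero-mean uniform initialization with variance 2/(input + output) described here is the Glorot (Xavier) scheme; a small sketch makes the bound explicit:

```python
import numpy as np

def glorot_uniform(n_in, n_out):
    """Zero-mean uniform weights with variance 2 / (n_in + n_out):
    for U(-a, a) the variance is a^2 / 3, so a = sqrt(6 / (n_in + n_out))."""
    limit = np.sqrt(6.0 / (n_in + n_out))
    return np.random.uniform(-limit, limit, size=(n_in, n_out))

W = glorot_uniform(64, 64)   # equal input/output dimensions, as in the text
```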
LightGBM Parameter Optimization
First, we tune num_leaves, an important parameter for improving accuracy. max_depth sets the depth of the tree; the greater the depth, the greater the risk of over-fitting. Because LightGBM uses the leaf-wise algorithm, num_leaves is used when adjusting the complexity of the tree, with the approximate conversion relationship num_leaves ≤ 2^(max_depth) − 1. (17) Second, insufficient training data or over-training leads to over-fitting: as training progresses, the complexity of the model increases and the error on the training data decreases, but the error on the testing set increases. Over-fitting on the training dataset is reduced by constraining the lambda_l1 and lambda_l2 norms of the parameters, which effectively prevents the model from over-fitting; after tuning, lambda_l1 and lambda_l2 are both set to 0.01, and the optimal floor classification accuracy reaches 97.07%. The learning rate is then adjusted. For the gradient boosting model to perform well, the learning rate must lie in an appropriate range: it determines how quickly the parameters move toward the optimum; if it is too large, the optimum may be overshot, while if it is too small, optimization becomes inefficient or fails to converge. The initial learning rate is 0.05. At a learning rate of 0.6, the accuracy reaches 99.32%, while at 1.0 it drops to 95%; the learning rate is therefore set to 0.6. The analysis of these results is shown in Figure 10. For room positioning, we changed the number of classes (15 floors and about 250 rooms); in Figure 10, with the learning rate, num_leaves and other parameters kept consistent with floor positioning, the highest accuracy is also achieved for room positioning. Here we regard each floor as a group, with each room on the floor counted as an element of that group; the parameter choices for floor positioning can therefore largely carry over to room positioning.
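Collecting the tuned values into a single configuration gives something like the following; max_depth here is an illustrative choice consistent with the num_leaves bound, while the remaining values are those reported above:

```python
tuned_params = {
    "objective": "multiclass",
    "num_class": 250,          # ~250 rooms over 15 floors (UJIIndoorLoc)
    "learning_rate": 0.6,      # accuracy peaked at 0.6, fell again at 1.0
    "max_depth": 9,            # illustrative; num_leaves <= 2**max_depth - 1
    "num_leaves": 511,
    "lambda_l1": 0.01,         # L1/L2 regularization against over-fitting
    "lambda_l2": 0.01,
    "min_split_gain": 0.02,    # minimum gain required to split
}
```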
Finally, we optimize the parameter min_split_gain, the minimum gain required to split the decision tree; after testing, the optimal value is 0.02. The full set of model parameters is shown in Table 3. Using the optimized model, we randomly selected about 1000 samples covering 50 rooms of the UJIIndoorLoc dataset for testing and compared JLGBMLoc with DNN [20], CNNLoc [17] and LightGBM. The room success rate comparison is shown in Figure 11; the room accuracy reaches 96.73%. Since room-level positioning is much more complex than floor positioning, the accuracy decreases slightly. The experiments show that our proposed model achieves room-level positioning with accuracy superior to current advanced methods.
Model Comparison
The performance of the model is evaluated by comparing JLGBMLoc with several state-of-the-art methods, and we additionally test position accuracy on Tampere. Regression performance cannot be judged with classification accuracy, so we calculate the MSE to measure the deviation between the predicted and true values of the model; the MSE of coordinate regression is 4.22. Because regression prediction lacks a good baseline for comparison, we used the height classification of the Tampere dataset to perform floor positioning and compared it with current advanced methods. The initial accuracy is 95.45% with the parameters previously tuned on UJIIndoorLoc; after changing num_class to 5 and lambda_l1 to 0.02 for Tampere, the accuracy increases to 98.43%. The benchmark methods include KNN [10], 13-KNN, DNN [20], CNN and CNNLoc [17]. The model comparison is shown in Figure 12. The floor success rate of JLGBMLoc is 99.32% on UJIIndoorLoc and 98.43% on Tampere. The results show that our model outperforms the other benchmarks, demonstrating its accuracy and scalability across different scenarios and datasets.
Conclusions
In this paper, we proposed a novel indoor positioning method named JLGBMLoc. A novel feature extraction algorithm was proposed to reconstruct the sparse fingerprint data, and LightGBM was introduced for Wi-Fi localization. We evaluated the proposed JLGBMLoc on the UJIIndoorLoc and Tampere datasets; the experimental results showed that the proposed method achieves a room-level positioning accuracy of 96.73% and a floor-level positioning accuracy of 99.32% on UJIIndoorLoc, and a floor-level accuracy of 98.43% on Tampere. The results demonstrate that JLGBMLoc increases positioning accuracy dramatically compared with existing methods.
Close Binaries and the Abundance Discrepancy Problem in Planetary Nebulae
Motivated by the recent establishment of a connection between central star binarity and extreme abundance discrepancies in planetary nebulae, we have carried out a spectroscopic survey targeting planetary nebulae with binary central stars and previously unmeasured recombination line abundances. We have discovered seven new extreme abundance discrepancies, confirming that binarity is key to understanding the abundance discrepancy problem. Analysis of all 15 objects with a binary central star and a measured abundance discrepancy suggests a cut-off period of about 1.15 days, below which extreme abundance discrepancies are found.
Introduction
Heavy element abundances in planetary nebulae (PNe) may be calculated from bright, easily-observed collisionally-excited lines (CELs; typical fluxes of 10-1000, where F(Hβ) = 100) or from the much fainter recombination lines (RLs; typical fluxes of 0.01-1 on the same scale). For 70 years, it has been known that RL abundances exceed those from CELs, with the so-called abundance discrepancy factor (adf) ranging from 2-3 in the majority of cases, up to nearly three orders of magnitude in the most extreme cases (see, e.g., [1][2][3]). Many mechanisms have been proposed to account for the discrepancy. These include:
• Temperature fluctuations [4]
• Strong abundance gradients [5]
• Density inhomogeneities [6]
• Hydrogen-deficient clumps [2]
• X-ray irradiation of quasi-neutral material [7]
• κ-distributed electrons [8]
Abundance discrepancies in H II regions behave quantitatively differently from those in PNe, suggesting that (at least) two mechanisms are responsible [9].
The work in [10] noted that in several cases, the most extreme values of the abundance discrepancy occurred in PNe known to have formed through the ejection of a common envelope (CE; [11,12]), having a binary central star with such a short period that the orbital radius is less than the radius of the PN's Asymptotic Giant Branch (AGB) progenitor [13]. The work in [14] strengthened this connection by observing NGC 6778, known to have a binary central star with a period of 3.68 hours [15], and noted by [16] to show a very strong recombination line spectrum. New deep spectroscopic observations revealed an extreme abundance discrepancy as predicted from the binary nature of the central star, together with spatial patterns also seen in several other high-adf nebulae: the adf is not constant across the nebula, but rather strongly centrally peaked. In the case of NGC 6778, the value derived from the spatially-integrated spectrum is ∼18, while the spatially-resolved values peak at ∼40 close to the central star.
To investigate this connection further, we observed ∼40 PNe with known binary central stars, to measure their chemistry, predicting that we should find high abundance discrepancies in a significant fraction of them. The nebulae were observed in 2015-2016 using FORS2 at the VLT in Chile, with spectra covering wavelengths from 3600-9300 Å at a resolution of 1.5-3 Å.
Results
Spectra of sufficient depth to obtain recombination line abundances were obtained for eight objects in our sample. The observations of NGC 6778 were published in [14]. The seven additional objects included Hf 2-2, already known to have an extreme abundance discrepancy [2], which we included in our sample as a benchmark to verify our methodology (our results from the integrated spectrum are in good agreement with those of [2]) and to study spatially. Emission lines in the spectra were measured using the code ALFA [17], which operates autonomously, first fitting and subtracting a continuum from each spectrum and then optimising Gaussian fits to emission lines using a genetic algorithm. The code detected ∼100 lines in each spectrum. These line fluxes were then analysed using NEAT [18], which also fully autonomously carries out an empirical analysis, calculating temperatures and densities from traditional CEL diagnostics, as well as from hydrogen continuum jumps, helium emission line ratios and oxygen recombination line ratios. The code then determines ionic abundances using a three-zone scheme and corrects for unseen ionisation stages using the scheme of [19].
For four objects, their angular size and the depth of the spectra obtained permitted a spatially-resolved analysis. In these cases, we generally found that the adf was strongly centrally peaked; this was most clearly seen in Hf 2-2 and NGC 6326. In Fg 1, the adf showed central peaking, but the adf was also seen to be higher at the outer edge of the bright inner region of the nebula. NGC 6337 was thought to be a bipolar nebula viewed edge-on [20]; on-sky, it appears as a ring, and the highest values of the adf were seen at the inner edge of the ring. Figure 1 shows the variation of the adf along the slit for each of these objects.
Discussion
Figure 2 shows the measured adfs of 207 objects in rank order, with H II regions highlighted in purple and PNe with close binary central stars shown in blue. This shows that the adfs of H II regions are mostly lower than the median value of 2.3, while those of post-CE PNe are typically much higher. Henceforth, we refer to abundance discrepancy factors of less than 5 as "normal", factors between 5 and 10 as "elevated" and those greater than 10 as "extreme".
The Binary Period
We have measured the abundance discrepancy for the first time in six post-common envelope PNe; the integrated spectra reveal five extreme and one elevated adf. Nine further post-CE nebulae with measured abundance discrepancies are found in the literature, of which two have normal adfs, two have elevated adfs and five have extreme adfs. Thus, the majority of post-CE nebulae have extreme abundance discrepancies, but a few have much lower values. In Table 1, we list some key observational parameters for all 15 objects. We searched for a relationship between the period of the binary central star and the abundance discrepancy. A continuous relationship would suggest that the mechanism giving rise to extreme adfs operates regardless of the binary period, but its magnitude is determined by it. On the other hand, if a threshold period can be identified that divides objects with elevated adfs from those with normal adfs, then it would imply that the mechanism is only triggered for shorter period binaries. Figure 3 shows the relation between the two properties, and although the number of points remains quite small, there is clearly no simple relationship between orbital period and adf. Instead, it suggests a threshold period, as three groups of objects can be identified: those with periods of less than one day all have elevated or extreme adfs; the objects with periods longer than 1.2 days have normal adfs; and the several objects with periods of around 1.15 days have a wide range of adfs, including Fg 1 with an adf of ∼46, Hen 2-283 with an adf of 5.1 and the Necklace, with no measured adf, but with no RLs detected in deep spectra by [25], and thus likely a normal value.
Detection biases mean that the observed period distribution of central star binaries is strongly concentrated towards lower values, and thus objects with periods of days rather than hours are quite rare [13]. The absence of high-adf objects with longer periods could arise by chance, given that only two longer period objects appear in our sample of 15 objects. However, the likelihood of the two longest period binaries having the two lowest adfs by chance alone, if the two properties were uncorrelated, is just under 1%. A likely third such object is MyCn 18, for which [26] recently reported a binary period of 18.15 days, while [27] measured an adf of 1.8. The probability of the three lowest adfs coinciding with the three longest periods by chance is much less than 1% and strengthens the hypothesis that only when the binary period is shorter than a threshold value of around 1.15 days will the PN exhibit an elevated or extreme adf. Further measurements of adf in objects with known binary orbital period are of course still necessary to better constrain this proposed relationship.
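These chance-coincidence likelihoods follow from elementary combinatorics: if adf and period were uncorrelated, every assignment of adf ranks to objects would be equally likely. A short sketch (in Python, assuming exchangeable ranks and the sample sizes quoted above) reproduces the quoted figures:

```python
from math import comb

# P(the two longest-period binaries are exactly the two lowest-adf objects)
# among 15 objects, if period and adf were uncorrelated:
p2 = 1 / comb(15, 2)     # = 1/105, about 0.95% -- "just under 1%"

# With MyCn 18 added (16 objects), three longest periods vs three lowest adfs:
p3 = 1 / comb(16, 3)     # = 1/560, about 0.18% -- "much less than 1%"
print(f"{p2:.3%}, {p3:.3%}")
```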
Stellar Abundances
The work in [28] suggested that there might be a relation between central star abundances and nebular abundance discrepancies, based on their discovery of the extreme abundance discrepancy of NGC 1501, which has a hydrogen-deficient central star, in common with several other then-recently identified extreme adf objects. However, they also noted that several extreme adf objects with H-rich central stars were known, concluding that no clear relationship existed. The work in [24] measured the adf in several objects with known H-deficient central stars, and did not find any elevated or extreme abundance discrepancies.
Given that extreme adfs can be reproduced by invoking cold hydrogen-deficient clumps embedded in a hot gas of normal composition, a source for these clumps needs to be identified. Two possibilities have been discussed: firstly, a very late thermal pulse (VLTP), in which a single star experiences a thermal pulse after having begun its descent of the white dwarf cooling track [29]; the second scenario is a nova-like eruption relying on a binary central star. Additional and more complex scenarios are possible: ref. [30] suggested that some combination of VLTP and nova, in which the former triggered the latter, could explain the properties of the hydrogen-deficient knot in Abell 58. Given the lack of observational constraints on such scenarios, we consider only these two relatively simple cases. The two scenarios make contrasting predictions for the central star abundances. The VLTP scenario would result in a hydrogen-deficient central star, and indeed that scenario is commonly invoked specifically as a mechanism for creating such stars. Meanwhile, the nova-like scenario is as yet ill-defined. An eruption in which some hydrogen is neither burned nor ejected would be required to leave behind a hydrogen-rich post-nova object.
We compiled all available literature central star classifications, to compare adfs of nebulae with H-deficient central stars to those of nebulae with H-rich central stars (excluding nebulae with weak emission line stars (wels), which have a high likelihood of being due to either nebular contamination [31] or irradiation of the secondary [15]). The left-hand panel of Figure 4 shows a quantile-quantile (Q-Q) plot comparing the quantiles of the distributions of adfs for nebulae with H-deficient central stars and those without. In such a plot, if the two datasets are drawn from the same underlying probability distribution, the points will lie close to the line y = x; populations drawn from differing probability distributions will diverge from the 1:1 relation. In this case, the points indeed mostly lie close to y = x. In the right-hand panel of the figure, we show the Q-Q plot for PNe with binary central stars against those without a known binary central star. The large deviations from the y = x line indicate that the adfs of nebulae with binary central stars come from a strongly differing distribution to those without. This argues against a VLTP as the source of hydrogen-deficient material, while a nova-like outburst remains plausible.
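A two-sample Q-Q plot of this kind is straightforward to construct; a hedged sketch with numpy/matplotlib, where the variable names are ours:

```python
import numpy as np
import matplotlib.pyplot as plt

def qq_plot(sample_a, sample_b, label_a, label_b):
    """Plot matched quantiles of two adf samples; points near y = x
    suggest the samples share an underlying distribution."""
    q = np.linspace(0.01, 0.99, 50)
    qa, qb = np.quantile(sample_a, q), np.quantile(sample_b, q)
    plt.plot(qa, qb, "o")
    lim = max(qa.max(), qb.max())
    plt.plot([0, lim], [0, lim], "--")   # the y = x reference line
    plt.xlabel(f"adf quantiles ({label_a})")
    plt.ylabel(f"adf quantiles ({label_b})")
    plt.show()

# e.g. qq_plot(adf_binary, adf_other, "binary CSPN", "no known binary")
```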
Electron Density
As well as the proposed cut-off period of around 1.15 days separating elevated and extreme adf objects from normal adf objects, we have identified a relation between electron density and adf. Of the seven objects studied in this work, all except Hen 2-283 have extreme abundance discrepancies, and while Hen 2-283 has an electron density of ∼3200 cm⁻³, the other objects all have densities of <1000 cm⁻³. To investigate this further, we compiled literature measurements of the electron density from lines of [O II], [S II], [Cl III] and [Ar IV], for all objects with a measured adf. Figure 5 shows the adf against electron density for each of the four diagnostics. These figures clearly show that the highest adfs only occur in the lowest density objects. Dashed lines indicate two bounds inside which almost all objects lie: a lower limit of adf = 1.3 and an upper limit of adf < 1.2×10⁴ n_e^(−0.8). The low densities associated with extreme adfs point to a low ionised mass, as found by [10] for the extreme-adf objects Ou 5 and Abell 48, and consistent also with the finding by [32] that post-CE nebulae have systematically lower ionised masses and surface brightnesses compared to the overall PN population.
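Expressed as a predicate, the empirical envelope is simply the following (a sketch; the coefficients are those quoted above):

```python
def within_adf_bounds(adf, n_e):
    """Empirical envelope of Figure 5: 1.3 <= adf < 1.2e4 * n_e**-0.8."""
    return 1.3 <= adf < 1.2e4 * n_e ** -0.8

print(within_adf_bounds(40.0, 500.0))    # low density, extreme adf -> True
print(within_adf_bounds(40.0, 5000.0))   # high density, extreme adf -> False
```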
Conclusions
The relationships between central star binarity, nebular density and adf show that common envelope evolution and nebular chemistry are strongly connected. We conclude that adfs provide a reliable way to infer the presence of a close binary central star; any object with an extreme adf must host a very short period binary. These objects will tend to have electron densities of <1000 cm⁻³. Meanwhile, if an object has a low adf but a morphology indicative of binarity, we would predict that its binary period should be much longer than one day. One possible mechanism that could account for these observations is if the shortest period post-CE binaries experience fallback following the ejection of the common envelope, triggering an outburst of enriched material, which gives rise to extreme adfs. The implications of our results are discussed in greater detail in [33].
Figure 2 .
Figure 2. Two hundred and seven measurements of adf(O2+) available in the literature as of September 2018, including the new measurements presented in this paper, shown in rank order. Objects with close binary central stars are highlighted in blue, H II regions in purple and the objects studied in the current work in orange. A full list of the individual objects and references used to compile this figure is available at https://www.nebulousresearch.org/adfs.
Figure 3 .
Figure 3. Abundance discrepancy for O2+ plotted against binary period for the 15 objects where both are known: nine literature values (purple dots) and seven from this study (red squares). Also plotted is a point for the Necklace, which has a period of 1.16 days and an unmeasured abundance discrepancy, but with an upper limit reported to be low and plotted here as a factor of 3.0. A horizontal line indicates an adf of 5.0, which we consider the dividing line between "normal" and "elevated", and a vertical line indicates the period of 1.15 days, which roughly divides objects with low and extreme adfs.
Figure 4 .
Figure 4. Q-Q plots of adf for (l) nebulae with hydrogen-deficient central stars against those without; (r) nebulae with close binary central stars against those without. When samples are drawn from the same underlying distribution, points in a Q-Q plot lie close to y = x.
Figure 5 .
Figure 5. adf against electron density estimated from [O II] (top left), [S II] (top right), [Cl III] (bottom left) and [Ar IV] (bottom right) line ratios. Planetary nebulae with binary central stars are shown with red points; other PNe are shown with purple points; and H II regions are shown with light blue points. Dashed lines indicate the empirical limits inside which almost all objects are found (see the text for details).
Table 1 .
Properties of the 15 nebulae with close binary central stars and a measured adf.
SIRT1 activators: the evidence STACks up.
SIRT1 is the mammalian ortholog of silent information regulator 2 (Sir2) found in Saccharomyces cerevisiae and functions as a NAD+-dependent deacetylase. SIRT1 appears to promote healthy aging and is implicated in the prevention of many age-related pathologies [1]. At the cellular level, SIRT1 controls lipid and glucose homeostasis, DNA repair and apoptosis, circadian clocks, inflammation and mitochondrial biogenesis. The biological effects of SIRT1 are mediated by its ability to deacetylate several key transcription factors such as peroxisome proliferator-activated receptor-γ coactivator 1 alpha (PGC-1α), p53, and FOXO proteins [2].
For many years there has been interest in characterizing sirtuin-activating compounds (STACs) that can modulate the ability of SIRT1 to deacetylate substrate proteins. These compounds would have the potential of reducing the incidence of multiple age-related diseases. Resveratrol and a series of chemically unrelated synthetic molecules have been described as potential STACs [3-7]. The original reports demonstrated activation by using an enzyme assay that contained a fluorescently labeled peptide substrate. However, the validity of these findings was challenged when others demonstrated that activation was dependent on the presence of the fluorophore on the substrate. Multiple studies followed, some in favor and some against [8-10]. However, two new studies, one by Hubbard et al. (2013) in this month's issue of Science and a second one by Lakshminarasimhan et al. (2013) in a recent issue of Aging, appear to elegantly resolve this controversy.
Figure 1
Model of allosteric activation of SIRT1 by sirtuin activating compounds (STACs). (A) SIRT1 acting on a substrate with a hydrophobic signature (yellow) in the absence of a STAC. (B) Binding of a STAC alters the N-terminal structure of SIRT1 but the absence ...
Lakshminarasimhan and colleagues used a mammalian acetylome microarray system to determine whether natural deacetylation sites can respond to resveratrol-dependent SIRT1 activation. After testing almost 7,000 peptides, surprisingly, very few of them exhibited increased deacetylation in the presence of resveratrol. They found that deacetylation by SIRT1 was preferentially activated when the substrates contained large, mainly hydrophobic residues at several positions C-terminal to the acetyl-lysine. These results provided a clear potential explanation of why the Fluor-de-Lys fluorophore, which is bulky and hydrophobic, may replace the peptide chain immediately C-terminal to the acetyl-lysine, likely mimicking a natural hydrophobic residue.
Indeed, Hubbard and colleagues provided strong support for such a model, and extended the above findings by demonstrating that a series of STACs, including the Sirtris compounds and resveratrol, directly activate SIRT1 through an allosteric mechanism. They first determined that the fluorophore caused activation only when it was directly adjacent (+1) to the acetyl-lysine. Similar to the findings of Lakshminarasimhan et al. (2013), the fluorophore moieties were dispensable if replaced with naturally occurring hydrophobic amino acids. For instance, native peptide sequences of PGC-1α and FOXO3a supported activation by STACs which was dose-dependent and mediated through a lowering of peptide Km. When the aromatic or hydrophobic amino acids at position +1 or +6 of PGC-1α or +1 of FOXO3a were mutated to alanine, activation by STACs was blocked. To establish the mechanism of activation, they screened SIRT1 mutants that were unable to be activated by resveratrol. Substitution of a glutamate for lysine at position 230 in the structured N-terminal domain attenuated (or abolished) SIRT1 activation by 117 chemically diverse STACs independent of the substrate. This residue is outside the catalytic site of SIRT1 and is highly conserved. Altering this residue did not reduce the basal catalytic activity of SIRT1 or significantly change the Vmax or Km of several substrates but rather specifically inhibited activation by STACs. Finally, they reconstituted SIRT1 KO myoblasts with wild-type or mutant SIRT1 and observed STAC-induced increases in mitochondrial mass and ATP content in wild-type-reconstituted but not mutant-expressing myoblasts, thus demonstrating that the effect of STACs on mitochondrial function is clearly SIRT1-dependent and direct. This work elegantly describes a SIRT1-dependent mechanism of "assisted allosteric activation" for STACs, providing a putative molecular explanation for the previous controversy.
The finding by both groups that only a small subset of SIRT1 substrates show increased deacetylation by SIRT1 in the presence of STACs is promising for future therapeutic intervention strategies, since the selectivity of STACs could be far more targeted than previously anticipated. For instance, the SIRT1 substrates PGC-1α and FOXO3a, but not p53, have the hydrophobic residues needed for activation by STACs; thus one could envision that STACs would have a greater impact on cellular metabolism and less of an impact on p53 stability and the cell cycle. This selectivity may allow the use of STACs in the treatment of SIRT1-dependent metabolic diseases while avoiding some of the adverse pro-oncogenic effects of p53 deacetylation and destabilization.
Of course, full proof of such molecular mechanisms will only come from crystal structure analysis of SIRT1 in the presence and absence of STACs, work that will also allow for the design of more efficacious and diversely targeted STACs. It will be interesting to determine whether SIRT1 STACs or similarly structured compounds could influence other sirtuins. In this regard, glutamate 230 is not conserved in any of the other mammalian sirtuins, suggesting that the allosteric mechanism may work specifically for SIRT1. Overall, these studies provide solid new evidence for a molecular mechanism of action for these compounds, and likely set up the basis for hypothesis-driven pharmacological applications of these STACs in the near future.
pBLAM1-x: standardized transposon tools for high-throughput screening
Abstract The engineering of pre-defined functions in living cells requires increasingly accurate tools as synthetic biology efforts become more ambitious. Moreover, the characterization of the phenotypic performance of genetic constructs demands meticulous measurements and extensive data acquisition for the sake of feeding mathematical models and matching predictions along the design-build-test lifecycle. Here, we developed a genetic tool that eases high-throughput transposon insertion sequencing (TnSeq): the pBLAM1-x plasmid vectors carrying the Himar1 Mariner transposase system. These plasmids were derived from the mini-Tn5 transposon vector pBAMD1-2 and built following modular criteria of the Standard European Vector Architecture (SEVA) format. To showcase their function, we analyzed sequencing results of 60 clones of the soil bacterium Pseudomonas putida KT2440. The new pBLAM1-x tool has already been included in the latest SEVA database release, and here we describe its performance using laboratory automation workflows. Graphical Abstract
Among these approaches, random insertion of genes with transposases is a straightforward method to produce genetic diversity, develop new strains or to identify essential regions within the genome (13)(14)(15)(16). The use of random transposase insertions has allowed researchers to pin-point genomic locations important for cell survival (14), to create minimal genomes (15) or to perform strain development (17,18). Its combination with high-throughput sequencing methods (TnSeq) has accelerated the exploration of genomic locations that allow stable genetic insertions or higher expression yields (7,13,14,19). Toward this goal, mariner transposases are the preferred transposition system for random integration due to the simplification in library creation and bioinformatic analysis, increase in sequencing depth and lower biases for fitness calculations (7,(20)(21)(22).
A variety of transposase systems are available, offering alternatives depending on whether the goal is targeted sequencing, transposition efficiency or a more comprehensive exploration of the genomic sequence space. The two most commonly used transposases for random genomic insertions into bacteria are the Tn5 and mariner transposase systems (13,23). Although these transposases insert randomly, they exhibit specific sequence preferences and, as a result, integration biases. Tn5 inserts more frequently into GC-rich genomic regions (6,24), whereas mariner transposases target TA sites with a slight sequence bias in the flanking nucleotides (24). Mariner transposase insertions are often considered less biased and are therefore frequently used for evaluating gene essentiality (19,23,24). The biases observed in transposase experiments depend largely on the GC content of the target genome and thus vary between species (24,25). Other factors, such as DNA bendability, which is hard to anticipate, can also bias transposon insertions (26). Of the currently available random transposase systems, only the Tn5 transposase system has been included in a broad-host-range plasmid following Standard European Vector Architecture (SEVA) standardization guidelines (6,(27)(28)(29)(30).
Standardization of molecular biology tools provides the means to increase reproducibility, protocol exchange and optimized laboratory automation workflows (31)(32)(33). In this work, we sought to expand the available genomic integration toolbox for bacteria by creating a broad-host-range plasmid set that includes the hyperactive form of the Himar1 mariner transposase (MarC9) (26) and that is compatible with SEVA standardization guidelines. Hyperactive forms of transposases ensure higher transposition rates and are better suited for in vivo work (26). In addition, the Himar1 mariner transposase, isolated from the horn fly Haematobia irritans (26), has been successfully used in several gram-negative bacterial species such as Escherichia coli (26), Pseudomonas putida (34), Pseudomonas aeruginosa (35), Aggregatibacter actinomycetemcomitans (36), Caulobacter crescentus (24), Rhizobium leguminosarum (24) and Vibrio cholerae (24), in gram-positive species such as Bacillus subtilis and Streptococcus pneumoniae (7), as well as in several mycobacteria (24,37).
The new vector set, termed pBLAM1-x (Born to Life Again mini-Mariner transposon), is available with three different antibiotic resistances to facilitate sequential insertions (pBLAM1-2: kanamycin; pBLAM1-4: streptomycin; and pBLAM1-6: gentamycin). It has been included in the SEVA 4.0 update (30) and is available through the SEVA database. We showcase its use for genomic insertions in the microbial chassis P. putida KT2440, as well as genotypic characterization via open-source automation. The compatibility of the automation workflow presented in this work with other SEVA systems eases its implementation in laboratories that already follow SEVA standardization guidelines. The use of the pBLAM1-x set to facilitate TnSeq is not addressed in this work, as the use of mariner transposases in P. putida and other pseudomonads (22,34,35,38), as well as in other gram-negative, gram-positive and mycobacterial systems, has been previously validated (24,39). However, this feature remains an advantage to accelerate genotype-to-phenotype mapping of inserted synthetic genetic circuits and design-build-test cycles.
Overall, our results demonstrate the use of this vector set to introduce genomic modifications. This addition to the synthetic biology toolkit facilitates protocol standardization as well as massive parallel sequencing of insertion libraries.
Reagents
Plasmids were obtained using the E.Z.N.A. plasmid DNA mini kit II (Omega Bio-Tek) from bacterial cultures (Table S1), PCR products were purified using the DNA clean and concentrator kit-5 (Zymo Research). Oligonucleotides were synthesized by Integrated DNA Technologies (IDT-DNA) (Tables S2-S4). Restriction enzyme DpnI was obtained from New England Biolabs; Phusion polymerase, Phire Green Hot Start II PCR Master mix and Phire Hot Start II PCR Master mix were obtained from ThermoScientific.
pBLAM1-x set construction
The pBLAM1-x set was constructed sequentially. First, the hyperactive Tn5 transposase tnpA gene in pBAMD1-2 was replaced with the hyperactive mariner transposase gene marC9 through modified restriction-free cloning (40), adding a final step after DpnI digestion ('magic-touch PCR' modification: 98 °C 38 s, 64 °C 2 min, 72 °C 5 min, 98 °C 8 s, 58 °C 2 min, 72 °C 5 min, 98 °C 8 s, 50 °C 2 min, 72 °C 5 min, 72 °C 5 min). Oligonucleotides F1-Mariner-pBAMD1-2 and R1-Mariner-pBAMD1-2 (Table S2), with sequence homology to the marC9 gene and the sequence surrounding the tnpA gene in pBAMD1-2, were used to amplify marC9 from pMarC9-R6K (Addgene plasmid #89477) (41). The purified PCR product was then used in a second PCR reaction with pBAMD1-2 as template to replace the Tn5 transposase gene with marC9. Primer design and PCR conditions were those recommended in the restriction-free cloning tool (42). The second PCR product was digested for 2 h at 37 °C with DpnI (New England Biolabs) to eliminate the template pBAMD1-2, subjected to a 'magic-touch PCR' step and transformed into competent E. coli pir+ cells. Upon verification of the correct replacement of tnpA with marC9, the mosaic elements ME-I and ME-O recognized by TnpA were replaced by inverted repeats termed IR-I and IR-O, which contain MmeI restriction sites recognized by the MarC9 transposase. This step was done through a single-step site-directed mutagenesis protocol (43) using primers F1-IR-PacI, R1-IR-SpeI, F1-IR-SpeI and R1-IR-PacI (Table S2). Primer pairs F1-IR-PacI/R1-IR-SpeI and F1-IR-SpeI/R1-IR-PacI were used in separate reactions to substitute ME-O and ME-I in pBAMD1-2 containing marC9, for 16 cycles using an annealing temperature of 50 °C for 30 s, 30 s of extension at 72 °C and standard conditions for Phusion polymerase (Thermo Scientific) in a 25 µL volume. Then, the reactions were mixed to a total of 50 µL, 0.5 µL of polymerase was added, and the reaction continued for 12 additional cycles with the annealing temperature raised to 55 °C. The reaction was digested with DpnI, subjected to 'magic-touch PCR' and transformed. The resulting plasmid was termed pBAMD1-2-Mariner-IR-PacI/SpeI. Lastly, a SacI site in marC9 was edited to allow traditional cloning using the standardized MCS. Whole-plasmid site-directed mutagenesis (44) was done with partially overlapping primers F-QC-SacI-MarC9 and R-QC-SacI-MarC9 (Table S2). The PCR reaction was run following the standard conditions for Phusion polymerase using 10 ng of pBAMD1-2-Mariner-IR-PacI/SpeI template in a 50 µL reaction, for 12 cycles, with a melting temperature of 55 °C and an extension time of 1 min/kb. The reaction was subjected to the same processing as in the previous cloning steps prior to transformation. This last modification yielded the final plasmid pBLAM1-2.
Conjugation into Pseudomonas putida KT2440
The pBLAM1-x set was used as donor plasmids in triparental matings to integrate their cargo into the genome of P. putida KT2440 in a manner similar to that previously reported (6). The bacterial strains used are described in Table S1. Briefly, PIR2 cells carrying either pBLAM1-2, pBLAM1-4 or pBLAM1-6 were mixed with the helper strain HB101/pRK600 and the recipient strain P. putida KT2440 in a 1:1:1 ratio based on their OD600. All cells were washed with 10 mM MgSO4 prior to mixing to reduce the amount of antibiotics in the media, which differs for each strain. After mixing, cells were centrifuged at 5000 × g for 5 min, resuspended in 10-20 µL of 10 mM MgSO4, and a single drop was spotted onto a Luria-Bertani agar plate. Plates were incubated at 30 °C for 5 h. Subsequently, the whole cell patch was plated, undiluted or at a 10⁻² dilution, on a 145 mm round M9-citrate agar plate with the corresponding cargo antibiotic. Libraries showed visible colonies by 48 h. Conjugation reactions were carried out for 5 h and in triplicate to minimize the occurrence of repeated sequences due to replication or integration biases.
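The 1:1:1 OD600-normalized mixing step can be expressed as a small volume calculation. A minimal sketch, with hypothetical OD600 readings and an arbitrary target of 1.0 OD·mL per strain:

```python
def volumes_for_equal_cells(od600, od_ml_each=1.0):
    """Volume (mL) of each culture contributing the same number of
    OD600*mL units, a rough proxy for equal cell numbers."""
    return {strain: od_ml_each / od for strain, od in od600.items()}

# Hypothetical OD600 values for donor, helper and recipient cultures.
cultures = {"PIR2/pBLAM1-2": 1.8, "HB101/pRK600": 2.4, "P. putida KT2440": 3.1}
for strain, ml in volumes_for_equal_cells(cultures).items():
    print(f"{strain}: mix {ml:.2f} mL")
```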
Library genotyping
Libraries made with the pBLAM1-x set through conjugation into P. putida KT2440 were picked into 96-well plates containing M9-citrate plus the cargo antibiotic and subjected to an automated workflow (45) to identify the region of genomic integration of individual variants; 96 clones were picked for the libraries generated with pBLAM1-2 and pBLAM1-4, and 52 clones for that of pBLAM1-6. Our automation workflow uses an open-source liquid handler (OT-2, Opentrons, USA) to: (i) inoculate 96-well plates containing Luria-Bertani medium plus the cargo antibiotic and M9-citrate medium plus ampicillin, (ii) perform counter-selection by selecting colonies that do not grow in the presence of ampicillin but grow in the presence of the cargo antibiotic, (iii) prepare glycerol stock plates for each library and a master PCR plate, (iv) run subsequent colony PCRs to detect spurious integration events (6) using primer pairs PS3/PS4 and PS5/PS6 (Table S3) and (v) perform arbitrary PCRs (6,46). An optional step to verify the correct integration of the cargo module containing the selective antibiotic marker was also carried out using primer pairs PSMCS and ME-O-Km-Ext-R, ME-O-Sm-Ext-R or ME-O-Gm-Ext-R (Table S4). Arbitrary PCRs were done as previously reported (6) by two subsequent PCR amplifications using primer pairs ARB6 and ME-O-Km-Ext-F, ME-O-Sm-Ext-F or ME-O-Gm-Ext-F for a first amplification, and ARB2 and ME-O-Km-Int-F, ME-O-Sm-Int-F or ME-O-Gm-Int-F for a second amplification (Table S4). After amplification through arbitrary PCRs, a 96-well plate containing 20 variants from each library was sent for sequencing to Macrogen-Europe with oligonucleotides ME-O-Km-Int-F (pBLAM1-2), ME-O-Sm-Int-F (pBLAM1-4) or ME-O-Gm-Int-F (pBLAM1-6) (Table S4). Sequencing results were aligned and annotated with a Python-based script that uses command-line blastn and the genome and annotation files for P. putida KT2440 available at https://www.pseudomonas.com. The annotation script is also described in the mentioned automated workflow (45). The location of each insertion in the genome of P. putida KT2440 was mapped using the online tool Proksee, available at https://proksee.ca.
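The alignment and annotation step can be approximated with a short script around command-line blastn. This is a minimal sketch, not the published workflow script (45): the file names are placeholders, and it assumes blastn is on the PATH and that a database has already been built from the KT2440 genome with makeblastdb.

```python
import csv
import subprocess

def map_insertions(reads_fasta, genome_db, out_tsv="hits.tsv"):
    """Align arbitrary-PCR reads to the genome and keep the top hit per
    read; outfmt 6 columns 9-10 (sstart, send) give the insertion locus."""
    subprocess.run(
        ["blastn", "-query", reads_fasta, "-db", genome_db,
         "-outfmt", "6", "-max_target_seqs", "1", "-out", out_tsv],
        check=True,
    )
    loci = {}
    with open(out_tsv) as fh:
        for row in csv.reader(fh, delimiter="\t"):
            read, sstart, send = row[0], int(row[8]), int(row[9])
            loci.setdefault(read, (min(sstart, send), max(sstart, send)))
    return loci

# Hypothetical usage:
# for read, (start, end) in map_insertions("clones.fasta", "kt2440").items():
#     print(read, start, end)
```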
General plasmid features
The standardization of constructs is an overarching goal within the field (31,32). Here, we present a transposase vector set using guidelines of the Standard European Vector Architecture (SEVA) format (27)(28)(29)(30) (Figure 1). The use of the Tn5 transposition system integrated in a plasmid following SEVA criteria has already been validated for the random integration of cargoes in gram-negative bacterial genomes (6,47). Here, we expand on the available transposition tools compatible with SEVA criteria by constructing a set of vectors with the Himar1 mariner transposase system with proven broad-host applicability in bacteria (24).
The new pBLAM1-x (Born to Life Again mini-Mariner transposon) vector set contains the hyperactive Himar1 mariner transposase gene marC9 (26). It derives from the broad-host-range mini-Tn5 vector pBAMD1-2 and contains the same modular features for replication, selection and conjugation (6). These modules can be amplified with the standardized oligonucleotides PS1-PS6 and are flanked by specific restriction sites, thus facilitating the creation of standardized protocols for cloning, characterization and automation (27)(28)(29)(30). In addition, the maintenance of the ampicillin resistance selection marker in the plasmid's backbone allows users to create common protocols for screening spurious integration events. To allow the combined insertion of different genes, we have created versions with cargo antibiotic resistances to kanamycin, streptomycin or gentamicin (Figure 1A).
The inserted region is flanked by recognition sites for the Type IIS restriction enzyme MmeI to allow the generation of TnSeq libraries (Figure 1B), and the multiple cloning site (MCS) is SEVA-compatible (Figure 1C).
Evaluation of conjugation efficiency and insertion in Pseudomonas putida KT2440 through an automated workflow
The ability of the pBLAM1-x set to integrate into bacterial genomes was assessed in P. putida KT2440. The conjugation efficiency of the pBLAM1-x set ranged between 10⁻³ and 10⁻⁶ conjugants per recipient cell (pBLAM1-2: 4.1 × 10⁻³; pBLAM1-4: 2 × 10⁻⁴; pBLAM1-6: 1.3 × 10⁻⁶). However, conjugation efficiencies and insertion biases are largely dependent on the host organism (24) and will likely differ for other organisms. Screening for spurious integration events and genotyping were done through an automated workflow that exploits common SEVA features (45). This showed that the number of clones growing in the presence of the backbone antibiotic ampicillin is low (6.3% for pBLAM1-2, 6.2% for pBLAM1-4 and 3.8% for pBLAM1-6).
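For reference, the efficiency figures above are simple ratios of plate counts. A minimal sketch of the calculation, with hypothetical colony counts:

```python
def cfu_per_ml(colonies, dilution, plated_ml):
    """Colony-forming units per mL from a single plate count."""
    return colonies / (dilution * plated_ml)

# Hypothetical counts: 205 conjugant colonies at a 1e-2 dilution and
# 164 recipient colonies at a 1e-6 dilution, 0.1 mL plated each time.
conjugants = cfu_per_ml(205, 1e-2, 0.1)
recipients = cfu_per_ml(164, 1e-6, 0.1)
print(f"efficiency = {conjugants / recipients:.1e} conjugants/recipient")
```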
The correct integration of the cargo module containing the selective antibiotic marker was verified, except for 3 out of 23 colonies of the pBLAM1-4 library. Analysis of the sequencing results from arbitrary PCRs of 20 colonies per library revealed successful insertion across the genome of P. putida KT2440 (Figure S1, Table S5), with some insertional biases. Note that integrations using plasmid pBLAM1-6 had a higher insertional bias, probably due to its lower conjugation efficiency (three orders of magnitude lower than for pBLAM1-2) and/or host-specific interplay. The Himar1 mariner transposase targets TA sites across the genome with some reported sequence biases (24), which in the case of the high-GC (61.6%) genome of P. putida KT2440 (48) still leaves ∼10⁵ theoretical TA integration sites. The Tn5 SEVA-like transposition system shows a bias toward insertion at genomic locations flanked by G/C pairs (6,24). Therefore, the pBLAM1-x set can serve as a complementary SEVA-like tool to uniformly target the whole genome of P. putida KT2440 while employing the same methods to characterize genomic insertions.
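The ∼10⁵ figure can be checked directly against the genome sequence. A minimal sketch with a plain FASTA parser (the file name is a placeholder for the KT2440 genome downloaded from pseudomonas.com):

```python
def ta_sites_and_gc(fasta_path):
    """Count TA dinucleotides (Himar1 targets) and GC content in a FASTA."""
    seq = "".join(
        line.strip().upper()
        for line in open(fasta_path)
        if not line.startswith(">")
    )
    ta = sum(1 for i in range(len(seq) - 1) if seq[i:i + 2] == "TA")
    gc = (seq.count("G") + seq.count("C")) / len(seq)
    return ta, gc

# Hypothetical usage:
# ta, gc = ta_sites_and_gc("Pputida_KT2440.fasta")
# print(f"{ta:.2e} TA sites; GC content {gc:.1%}")
```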
The combination of SEVA features with the advantages of the Himar1 transposase for random insertion and the creation of next-generation sequencing libraries makes this plasmid set a relevant tool to accelerate the directed evolution or evaluation of complex genetic circuits in synthetic biology.
Supplementary Data
Supplementary Data are available at SYNBIO Online.
|
v3-fos-license
|
2018-12-21T09:41:55.721Z
|
2013-11-20T00:00:00.000
|
67844664
|
{
"extfieldsofstudy": [
"Business"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://doi.org/10.19026/rjaset.6.3516",
"pdf_hash": "721d5b93e4a2d8b754d36270dc7424abce5f6bc2",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41803",
"s2fieldsofstudy": [
"Business"
],
"sha1": "86f8fd56f285614d67c51ee7e9dd78fd3abd4966",
"year": 2013
}
|
pes2o/s2orc
|
Dividend Policy from the Signaling Perspective and its Effects on Information Asymmetry among Management and Investors
This study examines the relevance of dividend policy from the signaling perspective and its effects on information asymmetry between management and investors, and compares their relative information content. Based on sampling, 88 firms from the Tehran Stock Exchange (TSE) were selected and examined from 2003 to 2010. The findings show that dividend policy (the divisible profit proportion) has a positive and significant relation with market information asymmetry; namely, when the dividend payout increases, information asymmetry increases, too. On the other side, the test findings indicate that investors are sensitive to EPS changes: when EPS changes are positive, their dividend increases, but when the dividend of the company decreases, the information confuses them and information asymmetry increases. By virtue of the findings, it may be concluded that when EPS and DPS changes are not in the same direction, the internal and external information asymmetry of the company increases with changes in the profit division policy.
INTRODUCTION
Stock exchanges, as organized markets, provide the facilities necessary for share buyers and sellers to convert their money into securities and vice versa. Since a stock exchange is an organization that mobilizes savings and directs them toward active investment useful to the community and the national economy, it is an important subject of study.
The profit gained by successful companies may be invested in operational assets, used to acquire securities, used to repay debts, or distributed among shareholders. Dividend policy has been one of the most interesting subjects in the financial literature in recent years, and various studies have attempted to explain why and how profit is distributed among shareholders and why shareholders pay attention to dividends; the subject is referred to as the 'dividend profit enigma' (the dividend puzzle) in the financial literature (Amidu and Abor, 2006).
Cash dividends hold a special position for company owners because of their objectivity and tangibility, and investors take a special interest in a company's capacity to generate liquidity and distribute it among shareholders, because these data not only present a clear picture of the company's actual situation but also make it possible to assess its future prospects. The subject is also important to company managers because it provides important information about the company's direction and the market's assessment of its performance. Hence, managers pay attention to it as 'dividend policy'; however, it is more important to understand why companies select a particular dividend policy than to describe the policy itself. Uncovering the underlying reasons and factors not only helps to explain company behavior but also provides a device to forecast future movements in this field, which supports important economic decisions by different stakeholder groups, especially investors.
Dividend policy can also be discussed from the viewpoint of information asymmetry and signaling theory. In this regard, information asymmetry stems from a potential conflict between managers' and shareholders' interests; hence, when owner-managers sell some of their shares to investors who play no role in management, information asymmetry increases (Rozeff, 1992).
Financial accounting and reporting may be considered strategies by which it is possible to mitigate agency problems and information asymmetry and to convey internal information to outsiders through deliberate means (Scott, 2003).
But here the question is: which accounting data incorporated into financial reports should be considered most important for decreasing information asymmetry and receiving signals from the capital market? Hence, this study examines dividend policy from the signaling perspective and its effects on information asymmetry between management and investors.
LITERATURE REVIEW
In this study, we test and examine dividend policy from the signaling perspective and its effects on information asymmetry between management and investors. According to some views, the cash dividends paid by a company are an appropriate criterion for forecasting share market performance (Ball and Brown, 1986; Rapp, 2010). Company profitability is also an important factor influencing dividend policy, because profitable companies have a greater tendency to pay higher dividends; thus, a positive relation is expected between company profitability and dividend payments (Change and Rhee, 1990; Ho, 2003; Aivazian et al., 2003). On the other hand, large companies have more investors and stakeholders than small ones, and information asymmetry there makes investors try harder to obtain information so that the information advantage is not limited to insiders (Scott, 2003; Myers and Majluf, 1984); that is why the relationship with paid dividends differs between large and small companies, and information asymmetry may influence this relation (Rapp, 2010; Jong et al., 2011). Moreover, according to the existing literature, managers are assumed to use dividend policy as a device to signal the market and transfer information to investors; for instance, Miller and Modigliani (1961) state that joint-stock companies pursue dividend stability and believe that any change in dividend policy is assessed by investors precisely as a signal of future company profitability, so that a sufficiently large change in income leads to a change in dividend policy.
McMenamin likewise holds that, in practice, a change in dividend policy influences the company's share price: an increase in the cash dividend raises the share price, and a decrease lowers it. In other words, a change in dividend payments is treated by shareholders and investors as a signal about the company's future profit outlook. Generally, an increase in dividend payments is considered a positive signal, indicating favorable information about future profits, and it increases the share price; a decrease in dividend payments is considered a negative signal about the company's future profit outlook and decreases the share price.
Previous studies concerning dividend policy, information asymmetry and signaling theory are as follows. Using both a survey-based and an econometric approach, Lintner (1956) was the first to analyze dividend policy. His study identified 15 variables influencing dividend policy, including company size, capital cost, the tendency to seek external financing, dividends per share, profit and ownership stability. Lintner's findings showed that companies treat the payout amount as a target and adjust their dividend policy on that basis. Besides, he found that companies follow a fixed and well-defined dividend policy, and managers believe that investors prefer companies with a stable dividend policy to those without one. On this basis, he concluded that even if companies sustain a considerable decrease in net profit, they are reluctant to cut the dividend and usually pay the same dividend as in the previous year. He also states that changes in dividend amounts are made only on the basis of an essential change in company operations, and that a company increases the dividend only if managers believe a permanent increase in income has occurred. Aharony and Swary (1980) show that companies increase their cash dividends when they expect higher future profits, so any increase in the cash dividend is a message indicating an improvement in company performance. Zeckhauser and Pound (1990) state that dividends and large shareholders both act as signals: the presence of large shareholders may reduce the use of dividends as a signal of good performance, because the shareholders themselves are a credible signal. Miller and Modigliani (1961) state that any change in dividend policy is assessed by investors as a signal of future company profitability, because companies follow a fixed dividend policy. The best-known signaling models (information asymmetry models) have been presented by Bhattacharya (1979), Miller and Rock (1985), John and Williams (1985) and Ambarish et al. (1987). Empirical studies show a positive market reaction to dividend increases and a negative one to dividend decreases; it should also be noted that the market reaction to a dividend decrease is stronger than the reaction to an increase. The relation between dividend policy and agency costs is a much-discussed subject in the corporate finance literature, where it is examined how dividend policy may be used to decrease agency costs. Borokhovich et al. (2005) examined the relation between the independence of the board of directors and dividend payments in a sample of 192 American companies in 1992-1999. Their findings were similar to those reported by Bathala and Rao (1995). Amidu and Abor (2006) examined the determinants of the dividend payout ratio based on financial data from companies listed on the Ghana Stock Exchange over six years. In that study, institutional ownership was used as a proxy for agency costs, and sales growth and the market-to-book ratio were used as proxies for investment opportunities.
The study's findings indicate a positive relation between dividend payments and the risk proportion, liquidity and tax flow, and a negative relation between dividend payments and risk, institutional ownership, market growth and the market-to-book ratio, and show no significant relation between risk and institutional ownership. Basiddig and Hussainey (2010), in their study "The Relation Between Information Asymmetry and Dividend Policy in Great Britain", used a multiple regression model. They found a significant negative relation between dividend policy in Great Britain and information asymmetry. The findings show that information asymmetry may be considered an important and essential factor in defining the dividend payout policy of British companies.
A recent study by Walker and Hussainey (2009) presented some evidence about companies' information levels and their decisions on dividend payout policy; the findings accord with signaling theory. Of course, the relation between these variables is not yet clear. Some researchers, such as Al-Najjar and Hussainey (2010), found a statistically significant negative relation between these two variables; in fact, the dividend payout policy is negatively related to different levels of company information asymmetry.
Al-Najjar and Hussainey (2009) also examined the empirical relation between dividend payments and profitability. Their findings show that more profitable companies probably pay higher dividends than unprofitable companies, indicating a positive relation between these two variables. Hae-young et al. (2011) examine the relation between ownership concentration and information asymmetry between informed and uninformed traders, and the different mechanisms influencing this relation. Having examined a large sample of Korean companies with intensely concentrated ownership, they found that when ownership concentration increases, information asymmetry increases, too. They also found that ownership concentration is positively related to information asymmetry through an increase in relatively informed trading. Valipour et al. (2009) examined information asymmetry and dividend policy on the Tehran Stock Exchange. Their findings show an inverse and significant relation between information asymmetry and dividend policy. Other findings show a significant relation between dividend policy and share returns, but no significant relation between dividend policy and company size or the book-to-market ratio of shareholders' equity. Twaijry (2007) studied data from 300 companies in 2001-2005, selected randomly from companies listed in Kuala Lumpur, in order to identify the variables expected to influence dividend policy and the dividend payout ratio in an efficient market. The findings showed that dividends do not have an important influence on companies' future profit growth but have a negative and significant relation with companies' financial leverage. He also found that earnings per share and the book value of shares have a positive and significant relation with the dividend payout ratio.
Abdel Salam et al. (2008) examined dividend policy in 50 Egyptian companies in 2003-2005. They showed a positive and significant relation between institutional ownership and company performance. Bhattacharya (1979) and Williams (1985) hold that, in signaling theory, managers have more information about company value than others (investors); hence, investors closely review actual changes in dividend policy. Some researchers, such as Deshmukh (2003), believe that with higher company information asymmetry, the level of dividend payments relative to the income rate is higher, and vice versa. Since dividend policy serves as a signal of future company performance, a positive sign in the relation between dividend policy and information asymmetry is foreseeable; thus, a positive relation between dividend policy and profitability can also be predicted. Lee (2010) finds empirical evidence that managers of Australian companies cater to retail investors' preference for dividends when setting dividend policy, even when these are minority shareholders, so long as the proportion of retail shareholders relative to the total shareholder base is high. The results are robust when controlling for size, profitability, financial leverage, signaling, agency costs and franking credits. Wang et al.'s (2011) results are consistent with the dividend policies of developing economies in general. They also find that dividend payouts among dividend-paying firms, and the likelihood that a firm will pay a dividend, increase with State ownership. Their findings are consistent with the State's need for cash flow as a partial motivation for continued State ownership of a significant portion of the corporate economy, and support the agency and tax-clientele explanations for dividend policy. Baba (2009) shows that a higher level of foreign ownership is associated with a significantly higher probability of dividend payouts. A choice-to-change model, estimated with a random-effects generalized ordered probit method, shows that a higher level of foreign ownership is associated with a significantly higher (lower) probability of an increase (no change) in dividends, while a larger one-year increase is associated with a significantly higher (lower) probability of an increase (decrease). Chan et al. (2004) explain that in many behavioral finance theories, return predictability stems from investors' over- or under-reaction to patterns, i.e., trends and consistency in recent financial information. Trends and consistency in financial performance are identified using time-series observations of quarterly and annual operating performance data; they distinguish financial performance from the firm's share-price performance, which is measured using stock returns. Arosa et al. (2010) show that, for family firms, the relationship between ownership concentration and firm performance differs depending on which generation of the family manages the firm.
Venus Lun and Quaddus (2011) show that sales growth is positively related to the use of electronic commerce and to firm size. To understand how firm size affects firm performance, they use a structural equation model (SEM) to examine the structural relationships. Their findings indicate that firm size positively influences sales growth, and sales growth in turn affects the profitability of the firm. Eliyasiani and Jia (2010) find a positive relationship between firm performance and institutional ownership stability, accounting for the shareholding proportion. This relationship is robust to the ownership turnover measures used in the literature and is consistent with the view that stable institutional investors play an effective monitoring role. When they disaggregate institutional investors into pressure-insensitive and pressure-sensitive categories, they find that stable shareholding by each group has a positive impact on performance, with the first group exerting a larger effect. The channels of this effect include, but are not limited to, decreased information asymmetry and increased incentive-based compensation. Wang (2010) shows that investment expenditures by Taiwan's firms positively affect financial performance, while increased borrowing jeopardizes company profits. However, the financing decisions of China's firms have a positive effect on their capital expenditures. The findings suggest that firms across the Strait adopt different strategies in financial decision environments.
Another strand of literature suggests that corporate risk management alleviates information asymmetry problems and hence positively affects firm value. Information asymmetry between managers and outside investors is one of the key market imperfections that make hedging potentially beneficial (Dionne and Ouederni, 2011).
Some proponents of this theory argue that stock price changes with dividend announcements occur because investors consider these announcements as signals of management's earnings forecasts. Thus, investors are less concerned with the actual dividend and are more concerned with the information content of the dividend announcements. This theory is known as the information content or signaling hypothesis (Besley and Brigham, 2008).
Signaling models contributed to the corporate finance literature by formalizing the "informational content of dividends" hypothesis. However, these models have been criticized because the empirical literature has found weak evidence supporting a central prediction: the positive relationship between changes in dividends and changes in earnings. Araujo et al. (2011) argue that the failure to verify this prediction does not invalidate the signaling approach: the models developed so far assume or derive utility functions with the single-crossing property, and they show that, in the absence of this property, signaling is still possible and changes in dividends and changes in earnings can be positively or negatively related. Signaling models were the main tool that formalized the original intuition (Araujo et al., 2011).
HYPOTHESES DEVELOPMENT
Signaling theory states that shareholders and investors know that managers have more information about the company's future prospects (information asymmetry), and that managers use dividend policy and financing policy to signal shareholders and investors who have little information (McMenamin, 1999).
This study presents evidence on dividend policy and on strategies and models for the optimal use of accounting data to assess dividend policies and information asymmetry from the viewpoint of signaling theory, while previous findings showed that dividend policies influence information asymmetry. By virtue of dividend policy, we may receive management's signals and decrease information asymmetry; in other words, dividend policy may be used to forecast the future performance of the companies studied, whose shares attracted increasing investor interest in recent years. Hence, the hypothesis is proposed as follows:
H1:
The profitability and dividend signals sent to the market by the company influence information asymmetry between management and investors.
The statistical population of the research comprises the manufacturing firms listed on the Tehran Stock Exchange during 2003 to 2010. The samples were chosen according to the following criteria:

• The fiscal year end of the firm is the 29th of Esfand, with no change in the fiscal year.
• The firm is not a financial investment or brokerage firm.
• The data necessary to compute the study's operational parameters are available.
• The firm has been listed on the stock exchange at least since 2005 and remained active on the exchange until the end of the study period.

Table 1. Definitions of the variables.

Financial Leverage (FL): total debts in proportion to total assets (Heaney et al., 2007). Controlling variable.
Firm Size (FS): the logarithm of yearly sales (Arosa et al., 2010). Controlling variable.
Growth Average (GA): the average of asset growth and sales growth, divided by two. Controlling variable.
Growth Opportunities (GO): the market value of each share in proportion to its book value (Aggarwal and Kyaw, 2010). Controlling variable.
Fixed Assets ratio (FA): the book value of fixed assets in proportion to total assets (PP&E/total assets) (Cui and Mak, 2002). Controlling variable.
ROA: profit in proportion to total assets; ROA is used because most companies with high profitability pay more cash dividends (Wei and Xiao, 2009). Controlling variable.
Dividend Ratio (DR): the cash dividend per share in proportion to earnings per share (Manos et al., 2012).
Considering the abovementioned limitations, 88 firms were selected and examined. Having defined the statistical sample, the study parameters were examined, collected and computed for the selected companies for each year, subject to the mentioned limits. We use the following two models to test our hypothesis:

Spread = (PEPS − EPS)/EPS

Spread = α + β1 Sig1·DR + β2 Sig2·DR + β3 Sig3·DR + β4 FS + β5 FL + β6 GA + β7 GO + β8 FA + β9 ROA

Spread is used to estimate the information asymmetry between management and owners. The measure was designed by Autore and Kovacs (2006). PEPS is the forecast earnings per share of company i in period t, and EPS is the realized earnings per share of company i in period t.
The Sig variables of the model are categorized and defined by the direction of each company's EPS and DPS changes; based on this categorization and the code assigned to each company, the firms are divided into three groups to be used in the regression model. The Wald test is then used to assess the significance of the differences between the artificial (dummy) variable coefficients and to compare the relation between dividend policy and information asymmetry between management and investors from the signaling perspective.
The definitions of controlling variables are presented in Table 1.
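The pooled regression and the Wald comparisons described above can be run with standard tooling. The following is a minimal sketch, not the authors' code: the data file, column names, and the coding of the Sig groups are assumptions.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical firm-year panel with Spread, DR, a signal-group code
# Sig in {1, 2, 3}, and the controlling variables from Table 1.
df = pd.read_csv("tse_panel.csv")
for g in (1, 2, 3):
    df[f"Sig{g}_DR"] = (df["Sig"] == g) * df["DR"]  # artificial variables

model = smf.ols(
    "Spread ~ Sig1_DR + Sig2_DR + Sig3_DR + FS + FL + GA + GO + FA + ROA",
    data=df,
).fit()
print(model.summary())

# Pairwise Wald tests for equality of the artificial-variable
# coefficients, mirroring the comparisons reported in Table 5.
for h in ("Sig1_DR = Sig2_DR", "Sig1_DR = Sig3_DR", "Sig2_DR = Sig3_DR"):
    print(h, float(model.wald_test(h).pvalue))
```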
RESULT ANALYSIS
Descriptive statistics of the variables are presented in Table 2.
The Spread variable is used to evaluate market information asymmetry, i.e., the information asymmetry between management and investors. Spread was used to test the hypothesis; the general model is as follows:

Spread = α + β1 Sig1·DR + β2 Sig2·DR + β3 Sig3·DR + β4 FS + β5 FL + β6 GA + β7 GO + β8 FA + β9 ROA

The related results and analyses are presented in Tables 3 and 4.
The table shows that the artificial-variable coefficients and the control-variable coefficients for growth average and growth opportunities are significant. The regression model is also significant, with a coefficient of determination of about 32%. Thus, the final equation for the Spread variable, as the index assessing the information asymmetry between the company's investors and management, is as follows:

Spread = −2.9973 + 13.2147 Sig1·DR + 9.5969 Sig2·DR + 4.9832 Sig3·DR + 2.2582 GA + 0.028 GO

The Wald test comparing the related artificial-variable coefficients is shown in Table 5.
The Wald test shows that the difference between the Sig1 and Sig2 coefficients is significant, but the differences between Sig1 and Sig3 and between Sig2 and Sig3 are not; this shows that information asymmetry in the market increases when positive EPS and negative DPS signals are sent to the market. Given the lack of a significant difference between groups 2 and 3, a positive EPS or a negative DPS alone does not influence the information asymmetry among investors; in other words, the general finding indicates that market investors do not respond to the dividend-policy signal on its own, and that same-direction or opposite-direction combinations of negative EPS and positive DPS are not very effective. Since the DR variable coefficient is positive, a higher payout ratio is associated with higher information asymmetry. On the other hand, the test findings indicate that investors are sensitive to EPS changes: when EPS changes are positive, their dividend increases, but when the divisible profit of the company decreases, they are at a loss to interpret it and information asymmetry increases.
CONCLUSION
The findings show that dividend policy (the divisible profit proportion) has a positive and significant relation with market information asymmetry; namely, when the dividend payout increases, information asymmetry increases, too. The results also indicate that market investors do not respond to the quality of the dividend-policy signal, and same-direction or opposite-direction combinations of EPS and DPS are not very effective. Since the DR variable coefficient is positive, a higher payout ratio is associated with higher information asymmetry. By virtue of these findings, it may be concluded that when EPS and DPS changes are not in the same direction, the internal and external information asymmetry of the company increases with changes in the profit division policy.
|
v3-fos-license
|
2021-03-29T05:22:55.557Z
|
2021-03-01T00:00:00.000
|
232381633
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1424-8247/14/3/208/pdf",
"pdf_hash": "89ebb33fe51586ad79dae3f4cffe09dc87fa3a64",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41805",
"s2fieldsofstudy": [
"Medicine",
"Chemistry"
],
"sha1": "89ebb33fe51586ad79dae3f4cffe09dc87fa3a64",
"year": 2021
}
|
pes2o/s2orc
|
Tryptophanol-Derived Oxazolopyrrolidone Lactams as Potential Anticancer Agents against Gastric Adenocarcinoma
Gastric cancer is one of the deadliest cancers in modern societies, so there is a high level of interest in discovering new drugs for this malignancy. Previously, we demonstrated the ability of tryptophanol-derived polycyclic compounds to activate the tumor suppressor protein p53, a relevant therapeutic target in cancer. In this work, we developed a novel series of enantiomerically pure tryptophanol-derived small molecules to target human gastric adenocarcinoma (AGS) cells. From an initial screening of fourteen compounds in the AGS cell line, a hit compound was selected for optimization, leading to two derivatives selective for AGS gastric cells over other types of cancer cells (MDA-MB-231, A-549, DU-145, and MG-63). More importantly, the compounds were non-toxic in normal cells (HEK 293T). Additionally, we show that the growth inhibition of AGS cells induced by these compounds is mediated by apoptosis. Stability studies in human plasma and human liver microsomes indicate that the compounds are stable and that the major metabolic transformations of these molecules are mono- and di-hydroxylation of the indole ring.
Introduction
Cancer is considered a worldwide health problem, and its occurrence can be associated with a combination of environmental factors and genetic alterations [1]. According to the World Health Organization (WHO), it is estimated that in 2018 cancer contributed to 9.5 million deaths worldwide [2]. Gastric cancer (GC) ranks third in the list of deadliest cancers [1], and its occurrence and mortality are highly influenced by region and culture [3]. The survival rate of GC has not improved much over the last years. Patients with early-stage GC usually do not have symptoms, which hinders the early detection of this cancer. For this reason, most patients present with advanced GC and, in these cases, radical surgery is the first-line approach and the only curative treatment [4]. In cases where surgery is not recommended, alternative treatments can be used, such as chemotherapy, radiotherapy, and immunotherapy. However, these therapeutic options achieve only modest results, and the poor response of this cancer to chemotherapy is typically associated with chemoresistance mechanisms [5,6]. Moreover, severe side effects associated with drug-related toxicity are frequent [7,8]. Consequently, the discovery of new alternative therapeutics for the treatment of GC, with low cost and minimal side effects, is still urgently needed. In the last decades, the discovery of cellular mechanisms associated with malignancies has been intensive, and many anticancer agents have been developed to disrupt specific biological pathways. With this, the discovery of new scaffolds has increased, as has interest in new therapeutic applications for scaffolds already known. For example, the indole scaffold is associated with many pharmacological activities in medicinal chemistry, including antimicrobial, antioxidant, antiviral, and anticancer [9,10]. It is considered a privileged scaffold, commonly found in many natural products (e.g., alkaloids and microbial hormones) and synthetic molecules with medicinal value (e.g., compounds 1 and 2, Figure 1) [11]. Other examples are tryptophanol-based small molecules (e.g., compounds 3-6, Figure 2), reactivators of the p53 pathway, that showed in vitro antiproliferative activity in colon and breast cancer cells [12][13][14][15][16][17]. Specifically, tryptophanol-derived isoindolinones 4-5 presented promising in vivo antitumor results in xenograft mouse models, without cytotoxicity and genotoxicity [13,14,16]. Based on these results, and on reported results with pyrrolidone-based small molecules with anticancer activity [18,19], we envisioned that merging these two scaffolds could lead to compounds with interesting anticancer properties [15]. Herein, we report the synthesis of 29 enantiopure tryptophanol-derived oxazolopyrrolidone lactams (compounds 7 and 8, Figure 2), their antiproliferative activity in the human gastric adenocarcinoma (AGS) cell line, and in vitro stability and metabolic studies with this scaffold.

Scheme 3. Synthesis of (R)-tryptophanol-derived oxazolopyrrolidone lactams 7n-u. Reaction conditions: (a) Pd(PPh3)2Cl2, aq. sol. Na2CO3 (1 M), 1,4-dioxane, 100 °C, 2-5 h.
The absolute configuration of the newly formed stereogenic center C-7a was established by X-ray analysis of compound 8b (Figure 3). The 13C NMR spectroscopy data of compound 8b were used as a reference to confirm the stereochemistry of the other derivatives. For compounds 7a-i and 8a-g, the signals of C-3, C-7a, and C-7 appear between 55.5-56.5, 101.7-102.6, and 35.0-35.4 ppm, respectively.
The spectral data obtained for compounds 7j and 7j' indicate that the major diastereomer 7j has the (3R, 7aR, 7S) configuration, while the minor diastereoisomer 7j' has the (3R, 7aR, 7R) configuration [21]. In particular, the methyl group appears in the 1H NMR spectra as a doublet at 1.12 ppm for 7j and at 0.60 ppm for 7j', and in the 13C NMR spectra at 13.96 ppm for 7j and at 16.40 ppm for 7j'. Moreover, the methyl group induces a shift in C-7, which appears at 39.7 ppm for compound 7j and at 41.3 ppm for compound 7j'. The chemical shift of C-3 appears at higher field for diastereoisomer 7j' (54.8 ppm). The absolute configuration of diastereomers 7j and 7j' was further confirmed by X-ray crystallography (Figure 3).

Figure 3. X-ray crystallographic structures of compounds 8b, 7j, and 7j' (crystallographic information file (CIF) data can be found in the Supplementary Materials, Tables S1-S15).
Effect of Tryptophanol-Derived Oxazolopyrrolidone Lactams on Cell Viability and on Apoptosis
To perform a structure-activity relationship (SAR) study, a first series of tryptophanol-derived oxazolopyrrolidone lactams containing different substituents on the phenyl ring (R1) at position C-7a was synthesized (compounds 7a-g and 8a-g, Table 1). In the design of this new compound series, a diversity of substituents with electron-donating properties (-CH3 and -OCH3 groups) and electron-withdrawing properties (-F, -Cl, -Br, and -SO2CH3 groups) was chosen. Both series of enantiomers, the (S)- and (R)-tryptophanol derivatives, were synthesized to evaluate the impact of the compounds' stereochemistry on the antiproliferative response of AGS cells. The activity of the target compounds was assessed using the MTT reduction assay. In general, (R)-tryptophanol-derived oxazolopyrrolidone lactams were more active than the corresponding enantiomers, except for derivative 8b with a para-fluoro substituent (7a-g vs. 8a-g). In the first screening at 100 µM, analogues 7a (R1 = H), 7b (R1 = F), and 8e (R1 = CH3) showed moderate antiproliferative activity, while compounds 7g and 8g (R1 = SO2CH3) did not induce appreciable cytotoxicity. Remarkably, compounds 7c-e and 8c revealed an antiproliferative response higher than 85%. The presence of chlorine or bromine substituents at R1 had a positive impact on the antiproliferative activity for both enantiomers (compounds 7c-d and 8c-d). Derivative 7c (R1 = Cl) exhibited the highest activity and was selected for chemical derivatization to improve the antiproliferative activity of this scaffold in AGS cells.

Table 1. Screening of (R)- and (S)-tryptophanol-derived oxazolopyrrolidone lactams 7a-g and 8a-g in the AGS cell line.

Four sites were identified as suitable for structural modification in compound 7c: the meta-position of the C-7a phenyl ring (compounds 7h and 7i, Scheme 1), position C-7 of the pyrrolidone ring (compounds 7j and 7j', Scheme 1), alkylation of the N-indole (compounds 7k-m, Scheme 2) and C-C couplings on the C-7a phenyl ring (compounds 7n-u, Scheme 3).
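For context, the antiproliferative response percentages reported from the MTT assay correspond to a blank-corrected viability relative to untreated controls. A minimal sketch of that calculation (all absorbance values below are hypothetical, not data from this study):

```python
def percent_viability(a_treated, a_control, a_blank):
    """Blank-corrected MTT viability relative to untreated control (%)."""
    return 100.0 * (a_treated - a_blank) / (a_control - a_blank)

# Hypothetical 570 nm absorbances for two compounds at 100 µM.
a_blank, a_control = 0.08, 1.25
for compound, a570 in {"active": 0.16, "inactive": 1.10}.items():
    v = percent_viability(a570, a_control, a_blank)
    print(f"{compound}: {v:.0f}% viable ({100 - v:.0f}% antiproliferative)")
```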
The (R)-tryptophanol-derived oxazolopyrrolidones 7h and 7r showed antiproliferative activity similar to 7c, while 7j, 7o, and 7s were more active than the hit compound 7c. The presence of a pyridine (compound 7t) or a dioxane ring (compound 7u) led to a decrease in the antiproliferative effect in AGS cells. Additionally, meta-fluoro and para-methoxy substituents on the phenyl ring (compound 7i) resulted in non-significant cell death. Compounds 7n (R1 = p-Cl-Ph), 7p (R1 = p-OH-Ph), and 7q (R1 = p-CH2OH-Ph), with bulky substituents, displayed moderate antiproliferative activity at 50 µM. The results also suggest that the presence of a meta-chloro substituent or electron-withdrawing groups is important for the activity (7r and 7s vs. 7n and 7o; 7r and 7s vs. 7p and 7q). Interestingly, the two diastereomers 7j and 7j' had different effects in AGS cells. Diastereomer 7j, with the (3R, 7R, 7aS) configuration, had a high antiproliferative effect, while diastereomer 7j' (3R, 7R, 7aR) had almost no effect, suggesting that the C-7a stereochemistry is also decisive for the antiproliferative activity of tryptophanol-derived oxazolopyrrolidone lactams in AGS cells.

Table 2. Screening of (R)-tryptophanol-derived oxazolopyrrolidone lactams 7c, 7h-u, and 7j' in the AGS cell line.
The substitution of the N-indole hydrogen (compound 7c) by ethyl (compound 7k), acetyl (compound 7l) or tert-butyloxycarbonyl (compound 7m) groups led to a decrease in activity, probably due to steric effects or because the establishment of a hydrogen bond might be important for the antiproliferative effect.
The IC50 values of the most promising derivatives (7j, 7o, and 7s), as well as of the hit compound 7c, were determined in the AGS cell line (Table 3). The trifluoromethyl derivative 7o (R1 = p-CF3-Ph) and the di-halogenated derivative 7s (R1 = 3,4-Cl-Ph) were the most active derivatives, with about 2.3 times the potency of hit compound 7c. We then tested compounds 7o and 7s in four cancer cell lines of other tumor types (Table 3): MDA-MB-231 (breast adenocarcinoma), A-549 (lung carcinoma), DU-145 (prostate cancer), and MG-63 (osteosarcoma). Both compounds were much less potent in lung carcinoma cells (IC50 higher than 60 µM) but presented moderate activity in the prostate cancer cell line DU-145 (Table 3). In osteosarcoma and breast cells, compound 7o was around two times more active than compound 7s. Compounds 7o and 7s were then evaluated in the HEK 293T normal cell line [22] and, except against A-549 cells, showed selectivity towards all cancer cell lines over the non-cancer-derived cell line (Table 3).
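IC50 values such as those in Table 3 are conventionally obtained by fitting a four-parameter logistic (Hill) curve to dose-response data. A minimal sketch with scipy; the concentrations and viabilities below are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical viability (%) across a two-fold dilution series (µM).
conc = np.array([3.125, 6.25, 12.5, 25.0, 50.0, 100.0])
viability = np.array([95.0, 88.0, 64.0, 32.0, 12.0, 5.0])

popt, _ = curve_fit(four_pl, conc, viability, p0=[0.0, 100.0, 20.0, 1.0])
print(f"IC50 ≈ {popt[2]:.1f} µM (Hill slope {popt[3]:.2f})")
```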
The ability of compounds 7o and 7s to induce apoptosis was also explored by measuring caspase 3/7 activity in AGS cells. The assays showed that, after 48 h of incubation with the compounds at 12.5 µM, there was a significant increase in caspase 3/7 activity, indicating that the antiproliferative activity is associated with apoptosis induction (Figure 4).
Stability Studies in PBS, Human Plasma, and Human Liver Microsomes and Identification of Metabolites
Preliminary stability studies can provide useful information about possible liabilities of new drug candidates. Understanding possible clearance mechanisms, and how to modulate metabolism to reduce the metabolic liability of a new bioactive chemical entity, is a fundamental step in drug development that gives access to a hit compound with desirable ADME attributes [23]. The in vitro stability of compound 7s in phosphate-buffered saline (PBS), human plasma, and human liver microsomes was evaluated. This compound showed chemical stability under PBS conditions and under plasmatic enzyme activity after 24 h of incubation at 37 °C (Figure 5A). The in vitro metabolic stability of compound 7s was determined upon incubation in human liver microsomes in the presence of the Phase I cofactor NADPH (Figure 5B). This compound proved to be moderately stable [24,25], with a half-life (t1/2) of 45 min (see Supplementary Materials Figure S1) and an intrinsic hepatic clearance (CLint) of 22.8 mL·min⁻¹·kg⁻¹. Three main Phase I metabolites, stemming from mono- and di-hydroxylation of the indole moiety, were identified by LC-HRMS/MS (liquid chromatography high-resolution tandem mass spectrometry) analysis. The protonated molecule of the parent compound 7s is observed in the HRMS-ESI(+) full-scan spectrum at m/z 477.1148 ± 3.6 ppm, with the characteristic dichlorine isotope cluster, and the base peak of the MS/MS spectrum is observed at m/z 304.0289 ± 0.3 ppm, which corresponds to the loss of the dichloro-biphenyl-dihydropyrrolone moiety from the protonated molecule (see Supplementary Materials Figure S2). A mass increment of 15.9944 u is observed for the protonated molecules of the two closely eluting (major) metabolites at m/z 493.1100 ± 4.0 ppm and m/z 493.1098 ± 3.7 ppm, which are, therefore, compatible with two isomeric mono-hydroxylated metabolites of compound 7s, indicated with the abbreviation mono-OH-7s (Figure 5C; see Supplementary Materials Figure S3). The structural similarity of these two Phase I metabolites was further confirmed by the similar fragmentation patterns observed in the tandem mass spectra (see Supplementary Materials Figure S4). A third, di-hydroxylated metabolite was also detected (Figure S4B). The observation of the fragment ion at m/z 162.0551 ± 1.2 ppm (the di-hydroxylated version of the mentioned diagnostic fragment ion for the mono-OH-7s metabolites) represents additional evidence that the main site of Phase I biotransformation is the indole ring. This constitutes an expected metabolic transformation [26], which is not linked to drug bioactivation processes [27] and, therefore, is not anticipated to raise a toxicity red flag. Nonetheless, taking into consideration the moderate metabolic stability of the parent compound, it might be relevant to assess the activity of the hydroxylated metabolites following further improvement of this scaffold.
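Half-life and intrinsic clearance in microsomal assays are conventionally derived from a log-linear fit of the fraction of parent compound remaining. A minimal sketch of the arithmetic; the time course, protein concentration and physiological scaling factors are hypothetical placeholders (scaling factors vary between sources), chosen only to reproduce numbers of the same order as those reported:

```python
import numpy as np

# Hypothetical % parent remaining over a 60 min microsomal incubation.
t_min = np.array([0.0, 15.0, 30.0, 45.0, 60.0])
remaining = np.array([100.0, 79.0, 63.0, 50.0, 40.0])

# First-order decay: the slope of ln(remaining) vs. time is -k.
k = -np.polyfit(t_min, np.log(remaining), 1)[0]   # min^-1
t_half = np.log(2) / k                            # min

# Assumed assay and scaling factors (not taken from the paper):
protein = 0.5          # mg microsomal protein per mL incubation
mg_per_g_liver = 45.0  # mg microsomal protein per g liver
g_liver_per_kg = 20.0  # g liver per kg body weight

cl_int = (k / protein) * mg_per_g_liver * g_liver_per_kg  # mL/min/kg
print(f"t1/2 = {t_half:.0f} min, CLint = {cl_int:.1f} mL/min/kg")
```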
Chemistry
General information: THF was dried using sodium wire with benzophenone as indicator. (R)-Tryptophanol was obtained by reduction of (R)-tryptophan with lithium aluminium hydride [28]. Other reagents were obtained from commercial suppliers (Sigma-Aldrich, Alfa Aesar and Fluorochem). General information concerning the equipment used for the elucidation of the products' chemical structures and for product characterization (NMR, melting point, optical rotations, MS and elemental analysis) is presented in our earlier publication [21]. Multiplicities in 1H NMR spectra are given as: s (singlet), d (doublet), dd (double doublet), ddd (doublet of doublets of doublets), t (triplet) and m (multiplet). Compounds 7h, 7j and 7j' showed purity ≥ 95% by LC-MS, performed on a LaChrom HPLC system consisting of a Merck Hitachi L-7100 pump, a Merck Hitachi L-7250 autosampler and a Merck Hitachi L-7400 UV detector. Analyses were performed with a LiChrospher® 100 RP-8 (5 µm) LiChroCART® 250-4 column at room temperature, using a mobile phase of 65% acetonitrile and 35% Milli-Q water. Peaks were detected at λ = 254 nm.
General procedure for the synthesis of compounds 7a-j, 7j' and 8a-g: To a suspension of enantiopure tryptophanol (0.53 mmol, 1.0 eq.) in toluene (5 mL) was added the appropriate oxocarboxylic acid (0.58 mmol, 1.1 eq.). The mixture was heated at reflux for 10-25 h in a Dean-Stark apparatus. The reaction mixture was concentrated in vacuo and the residue obtained was dissolved in EtOAc (10 mL). The organic phase was washed with saturated aqueous NaHCO3 solution (15 mL) and brine (15 mL), dried over Na2SO4, filtered and concentrated in vacuo. The residue was purified by silica gel flash chromatography using a mixture of EtOAc/n-hexane as eluent.

Following the general procedure, to a solution of (R)-tryptophanol (0.102 g, 0.536 mmol) in toluene (5 mL) was added 3-benzoylpropionic acid (0.105 g, 0.590 mmol). Reaction time: 19 h. The compound was purified by flash chromatography (EtOAc/n-hexane 1:1) and recrystallized from EtOAc/n-hexane to give a pale pink crystalline solid (0.166 g, 95%); [α]25D = −54.7° (c = 2.0, MeOH); the 1H NMR spectrum was found to be identical to the one reported [15] and obtained for compound 8a. Anal. Calcd. for C21.

General procedure for the synthesis of 7k-l: The (R)-tryptophanol-derived oxazolopyrrolidone lactam (0.129 mmol) was dissolved in dry DMF (5 mL) and the solution was cooled to 0 °C under a N2 atmosphere. Sodium hydride (NaH, 60% dispersion in mineral oil; 0.250 mmol, 2.0 eq.) was added portionwise and the mixture stirred for 15 min. The appropriate protecting reagent (0.320 mmol, 2.5 eq.) was added and the reaction mixture stirred at room temperature for 3-6 h. After reaction completion, water (10 mL) was added, followed by EtOAc (10 mL). The aqueous phase was washed with EtOAc (2 × 10 mL); the combined organic phases were washed with brine (10 mL), dried with MgSO4 and concentrated in vacuo. The residue was purified by silica gel flash chromatography using EtOAc/n-hexane as eluent.
Metabolite Identification
The 60 min aliquot was analyzed by LC-HRMS/MS, as previously described [36]. All spectra corresponding to metabolites were then manually checked. The mass deviation from the accurate mass of the identified metabolites remained below 5 ppm for the precursor and product ions. After their detection, structural characterization of the potential metabolites was based on tandem mass data (see Supplementary Materials Figures S2-S4).
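A quick way to sanity-check the metabolite assignments is to verify the mass increments arithmetically: adding one oxygen atom (monoisotopic mass 15.9949 u) to the protonated parent should land on the observed m/z of the mono-hydroxylated species within a few ppm. The Python sketch below performs this check using the m/z values quoted in the text.

OXYGEN = 15.9949              # monoisotopic mass of oxygen (u)
parent_mz = 477.1148          # [M+H]+ of 7s, from the full-scan spectrum
observed_mono_oh = 493.1100   # observed mono-OH-7s protonated molecule

expected = parent_mz + OXYGEN
ppm = (observed_mono_oh - expected) / expected * 1e6
print(f"expected m/z {expected:.4f}; deviation {ppm:+.1f} ppm")  # ~ +0.6 ppm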
Conclusions
A series of enantiopure tryptophanol-derived bicyclic lactams was prepared and its antiproliferative activity evaluated in AGS cells. From the first screening emerged compound 7c, an (R)-tryptophanol derivative with a para-chlorophenyl substituent, which was selected for further optimization. Introduction of an additional substituted aromatic ring into the structure of 7c led to two derivatives (7o and 7s) 2.3- to 2.7-fold more active in AGS cells. These compounds also showed moderate activity in prostate cancer cells, representing useful hit compounds for further optimization in this type of cancer. More importantly, additional assays with the two compounds showed that they are not toxic in normal HEK 293T cells and that their antiproliferative activity in AGS cells occurs through apoptosis. Stability studies with the most potent derivative, compound 7s, showed that the compound is stable in PBS and human plasma. Moreover, incubation assays in human liver microsomes, followed by LC-HRMS/MS analysis, showed that this compound is moderately metabolically stable and that the major metabolites stem from mono-hydroxylation of the indole ring, which is not anticipated to raise a toxicity red flag.
Supplementary Materials: The following are available online at https://www.mdpi.com/1424-8247/14/3/208/s1: crystallographic information for compounds 7j, 7j' and 8b; LC-HRMS/MS data for compound 7s and its metabolites; NMR spectra of compounds 7h, 7j, 7j', 7o and 7s.

Institutional Review Board Statement: Ethical review and approval were waived for this study. The plasma was obtained from the "Instituto Português do Sangue" (IPS), the Portuguese blood institute, from blood that was already out of date for use in medical procedures and would otherwise have been destroyed. The IPS makes agreements with institutions so that such blood can be used for research purposes.
Informed Consent Statement: Not applicable.
Data Availability Statement: CCDC 2050433-2050435 contain the supplementary crystallographic data for this paper. These data are provided free of charge by The Cambridge Crystallographic Data Centre.
GENDER DIFFERENCES IN LIMB AND JOINT STIFFNESS DURING THE FENCING LUNGE
The aim of the current investigation was to examine gender differences in limb and joint stiffness characteristics during the fencing lunge. Ten male and ten female fencers completed simulated lunge movements. Lower limb kinematics were collected using an eight-camera optoelectric motion capture system operating at 250 Hz. Limb stiffness was calculated as the ratio of peak vertical ground reaction force to limb compression, and joint stiffness as the ratio of joint moment to joint angular excursion. Gender differences in limb and joint stiffness parameters were examined statistically using independent samples t-tests. The results showed firstly that both limb stiffness (male = 64.22 ±19.12, female = 75.09 ±22.15 N/kg/m) and hip stiffness (male = 10.50 ±6.00, female = 25.89 ±15.01 Nm/kg/rad) were significantly greater in female fencers. In addition, knee moment (male = 1.64 ±0.23, female = 2.00 ±0.75 Nm/kg) was significantly larger in females. On the basis of these observations, the findings from the current investigation may provide further insight into the aetiology of the distinct injury patterns observed between genders in relation to fencing.
Introduction
Epee fencing is a recognised Olympic discipline during which the athletes are required to make contact with their opponent with their sword (Sinclair et al. 2010). Clinical research investigating the prevalence of injury in both elite and recreational fencers has demonstrated that injuries and pain connected specifically to training/competition were apparent in 92.8% of all fencers (Harmer 2008). Importantly, it was also shown that a high proportion of all injuries were experienced by the lower extremities (Harmer 2008). The repetitive high-impact dynamic motions associated with fencing training and competition are considered to expose the lower extremity musculoskeletal structures to high levels of strain (Sinclair et al. 2010; Greenhalgh et al. 2013; Sinclair and Bottoms 2014). The lunge movement in particular, which is the foundation of the majority of offensive fencing motions, repeatedly exposes fencers to potentially detrimental impact forces (Sinclair et al. 2010).
In the clinical biomechanics literature, the importance of lower limb stiffness is now becoming acknowledged (Butler et al. 2003), as researchers and clinicians attempt to gain more knowledge of how the musculoskeletal system responds to applied loads and additional insight into the aetiology of chronic lower limb injuries. Limb stiffness is calculated as a function of the vertical force that is applied to a body divided by the resultant deformation of the limb under the applied load (McMahon and Cheng 1990). During dynamic movements the contact limb is represented using a spring-mass system (Latash and Zatsiorsky 1993), in which the contact limb is modelled as a linear spring and the mass of the athlete's body as the overall point mass (McMahon and Cheng 1990). Clinically, higher limb stiffness has been linked to an increased risk of bone-related injuries, whereas insufficient limb stiffness has also been linked to soft tissue injury (McMahon et al. 2012).
Fencing is undertaken by both male and female athletes, and previous analyses have examined gender differences in the mechanics of the fencing lunge. Sinclair and Bottoms (2013) investigated the kinetics and lower body kinematics of the lunge movement as a function of gender, demonstrating that females exhibited significantly greater knee abduction and hip adduction of the lead limb. Sinclair and Bottoms (2015) examined gender differences in patellofemoral forces between male and female fencers performing the lunge movement; their results showed that female fencers were associated with significantly greater patellofemoral kinetics than males. Sinclair and Bottoms (2014) explored gender differences in the load experienced by the Achilles tendon, demonstrating that males were associated with significantly larger Achilles tendon loads than female fencers.
However, gender differences in limb and joint stiffness parameters during the fencing lunge have not yet been explored in the biomechanical literature. Therefore, the aim of the current investigation was to examine gender differences in limb and joint stiffness characteristics as a function of the lunge movement.
Participants
Ten male and ten female epee fencers volunteered to take part in the current investigation. All were injury free at the time of data collection and provided written informed consent in accordance with the Declaration of Helsinki. Participants were active competitive epee fencers who engaged in a minimum of 3 training sessions per week and were all right handed. The mean characteristics of the participants were: males, age 27.22 ±4.08 years, height 1.75 ±0.05 m and mass 74.31 ±6.05 kg; females, age 24.88 ±5.87 years, height 1.67 ±0.07 m and mass 63.52 ±4.66 kg. The procedure was approved by the University of Central Lancashire ethics committee.
Procedure
Participants completed 10 lunges, during which they were required to hit a dummy with their weapon and then return to a starting point determined by each fencer prior to the commencement of data capture. This allowed the lunge distance to be maintained. The fencers were also required to contact a force platform (Kistler Instruments Ltd., Alton, Hampshire), embedded into the floor (Altrosports 6 mm, Altro Ltd.) of the biomechanics laboratory, with their right (lead) foot. The force platform sampled at 1000 Hz.
Kinematic information was obtained using an eight-camera optoelectric motion capture system (Qualisys Medical AB, Gothenburg, Sweden) with a capture frequency of 250 Hz. The current investigation utilized the calibrated anatomical systems technique (CAST) to quantify kinematic information (Cappozzo et al. 1995). To define the anatomical frames of the pelvis, thigh, shank and foot, retroreflective markers were positioned unilaterally on the medial and lateral malleoli, the medial and lateral epicondyles of the femur and the greater trochanter. Rigid technical tracking clusters were positioned on the shank and thigh segments. The tracking clusters comprised four retroreflective markers mounted on a thin sheath of lightweight carbon fibre, with length-to-width ratios in accordance with Cappozzo et al. (1997). Static trials were obtained with participants in the anatomical position so that the positions of the anatomical markers could be referenced in relation to the tracking clusters, following which markers not required for tracking were removed.
Data processing
Retroreflective marker positions were identified using Qualisys Track Manager and then exported as C3D files to Visual 3D (C-Motion, Germantown, MD, USA) for further analysis. Ground reaction force and retroreflective marker trajectories were filtered at 50 and 12 Hz, respectively, using a low-pass Butterworth 4th-order zero-lag filter (Sinclair 2014). Hip, knee and ankle joint kinematics were calculated using an XYZ sequence of rotations (where X represents sagittal plane, Y coronal plane and Z transverse plane rotations) (Sinclair et al. 2013). Newton-Euler inverse dynamics were then used to calculate hip, knee and ankle joint moments. The kinetic/kinematic measures extracted from the hip, knee and ankle for statistical analysis were 1) joint angular excursion (the angular displacement from footstrike to peak angle) and 2) peak joint moment.
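As a concrete illustration of the filtering step described above, the Python sketch below applies a low-pass Butterworth filter with a 12 Hz cut-off to a synthetic 250 Hz marker trajectory; a 2nd-order filter run forward and backward with filtfilt is the usual reading of a "4th-order zero-lag" filter, though this interpretation, and the synthetic signal, are assumptions rather than details taken from the paper.

import numpy as np
from scipy.signal import butter, filtfilt

fs, cutoff = 250.0, 12.0                            # sampling and cut-off frequencies (Hz)
b, a = butter(2, cutoff / (fs / 2.0), btype="low")  # 2nd order; filtfilt doubles it

t = np.arange(0.0, 1.0, 1.0 / fs)
marker = np.sin(2 * np.pi * 3 * t) + 0.05 * np.random.randn(t.size)  # 3 Hz motion + noise
smoothed = filtfilt(b, a, marker)                   # forward-backward pass: zero phase lag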
Limb stiffness was quantified using a mathematical spring-mass model (Blickhan 1989) and calculated as the ratio of the peak vertical GRF to the compression of the limb spring. Limb compression was calculated as the change in limb length from footstrike to the minimum limb length during the stance phase (Farley and Morgenroth 1999). The torsional stiffness of the hip, knee and ankle joints was obtained as the ratio of the change in joint moment to the joint angular excursion (Farley and Morgenroth 1999).
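The stiffness definitions above reduce to two simple ratios, sketched in Python below with hypothetical, mass-normalised values (not the study's data): limb stiffness as peak vertical GRF over limb compression, and joint stiffness as the change in joint moment over joint angular excursion.

import math

peak_vgrf = 18.5          # peak vertical GRF, normalised to body mass (N/kg, hypothetical)
limb_compression = 0.26   # limb shortening during stance (m, hypothetical)
limb_stiffness = peak_vgrf / limb_compression
print(f"Limb stiffness = {limb_stiffness:.1f} N/kg/m")

delta_hip_moment = 1.9              # change in hip moment (Nm/kg, hypothetical)
hip_excursion = math.radians(11.0)  # hip angular excursion (rad)
hip_stiffness = delta_hip_moment / hip_excursion
print(f"Hip stiffness = {hip_stiffness:.1f} Nm/kg/rad")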
Statistical analyses
Means and standard deviations were calculated as a function of gender for each outcome measure. Gender differences in limb and joint stiffness parameters were examined using independent samples t-tests, with significance accepted at the p≤0.05 level. Effect sizes for all significant observations were calculated using partial eta squared (pη²). All statistical procedures were conducted using SPSS v22.0 (IBM SPSS, Inc., Chicago, IL, USA).
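A minimal sketch of this statistical comparison is given below in Python, using scipy for the independent samples t-test and converting the t statistic to partial eta squared via pη² = t²/(t² + df); the two groups of values are hypothetical, not the study's measurements.

import numpy as np
from scipy import stats

males = np.array([60.1, 58.3, 70.2, 45.9, 82.4, 66.0, 71.5, 49.8, 63.2, 74.8])
females = np.array([74.5, 80.1, 96.3, 55.2, 88.0, 61.9, 79.4, 70.3, 92.1, 53.1])

t, p = stats.ttest_ind(males, females)
df = len(males) + len(females) - 2
partial_eta_sq = t**2 / (t**2 + df)
print(f"t({df}) = {t:.2f}, p = {p:.3f}, pη² = {partial_eta_sq:.2f}")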
Results
Table 1 presents the gender differences in limb and joint stiffness. The results indicate that limb and hip joint stiffness parameters were significantly influenced as a function of gender.
Discussion
The current investigation aimed to determine whether there are gender differences in limb and joint stiffness when performing the lunge movement in fencing. To the authors' knowledge, this represents the first study to examine gender differences in limb stiffness parameters in fencing.
The first key observation from the current investigation is that female fencers were associated with increased limb stiffness in relation to males. This finding does not agree with those of Granata et al. (2001), who showed that females were associated with reduced limb stiffness in relation to males when performing a hopping task. It is proposed that this difference relates to the different functional demands of hopping tasks in comparison to the fencing lunge (Sinclair and Bottoms 2013). This finding relates principally to the significant increases in limb compression that were observed in females, as the peak vertical GRF magnitude did not differ significantly between genders. This observation may have clinical relevance, as increased levels of limb stiffness, such as those observed in female runners, have been linked to the aetiology of bony injuries (McMahon et al. 2012). In addition, the findings from this study also confirmed that hip joint stiffness was significantly larger in female fencers. This observation relates to the significant reduction in hip excursion and the corresponding increase in hip moment noted in females. Given the similar joint stiffness parameters observed between genders at the knee and ankle joints, it appears that hip stiffness is the key contributor to the differences in limb stiffness.
A number of studies have examined the relationship between limb stiffness and athletic performance (Bret et al. 2002; Farley and Gonzalez 1996; Heise and Martin 1998; Hobara et al. 2010). The extent of limb stiffness has been shown to be related to the utilization of the stretch-shortening cycle in dynamic movements (Brughelli and Cronin 2008). During the eccentric phase, enhanced limb stiffness, as a function of a reduction in limb compliance, allows for maximum return of the stored energy during the concentric phase (Latash and Zatsiorsky 1993). This suggests that female fencers may be able to make more effective use of the stretch-shortening cycle than males, indicating that they may be able to recover their position and progress onto the next attack more efficiently. Therefore, whilst the increased limb stiffness observed in females may place them at increased risk of lower limb injury, it may also promote a corresponding increase in movement efficiency around the piste.
The increase in peak sagittal knee moment may also provide insight into the distinct injury patterns in females. Female athletes are at much greater risk of developing patellofemoral pain than age-matched males (Wilson 2007). This finding concurs with the observations of Sinclair and Bottoms (2015), who demonstrated that both knee moment and patellofemoral loads were greater in female fencers. The knee joint moment profiles from the current study indicate that the load at the knee is larger in female fencers. Therefore, this finding reinforces the conclusions of Sinclair and Bottoms (2015), as the consensus regarding the aetiology of patellofemoral pain is that symptoms are a function of excessive knee joint loading (Fulkerson and Arendt 2000).
In conclusion, although gender differences in the mechanics of the fencing lunge have been examined extensively, the current knowledge regarding the effects of gender on limb and joint stiffness parameters is limited. The present investigation therefore adds to the current knowledge by providing a comprehensive comparative evaluation of the limb and joint stiffness characteristics of male and female fencers. On the basis that hip/limb stiffness and knee moment were shown to be significantly greater in female fencers, the findings from the current investigation may provide further insight into the aetiology of the distinct injury patterns that have been noted between male and female athletes. Clinically, the outcomes from the current investigation indicate that female fencers may be more susceptible to overuse injuries than males.
Table 1. Limb and joint stiffness parameters as a function of gender (* significant difference).
Study of the ecological-environmental perception of children and young people from schools of the Obligado District, Department of Itapúa
Proper management and care of the environment by a citizenry requires education from an early age. From its beginnings, when only general concepts about the environment were discussed, until today, under the influence of sustainable development, environmental education has aimed to foster an accurate perception of the environment, which requires awareness as well as the transmission to children and young people of good practices that shape their daily lives and constitute a way of life. The present study aims to reflect their perspective on knowledge, management, sensitivity and behavior in relation to environmental issues. Using a mixed-type survey, the sample under study was questioned about their level of knowledge about the environment, which proved to be regular, with water and air emerging as the environmental components of greatest interest. The students are aware that environmental problems exist at different scales; at the regional level specifically, environmental pollution is ranked first in importance. Information about the environment is received mostly through the media, and above all through television.
Introduction
One of the challenges facing environmental education is to convey the importance of discernment towards the environment and the ecosystem, thereby contributing to the formation and training of children and young people, managers, planners and decision-makers, facilitating understanding and orienting their values and behaviors towards a harmonious relationship with nature. Another challenge, in the social sphere, is to radically transform the structures of management and redistribution of natural resources. Both issues constitute true reference axes when talking about sustainable development.
Environmental education is undergoing changes in its terminology, now focused on the dimension of sustainable development and, more specifically, on the problems involved in development. It concerns not only formal education but also non-formal education outside the school environment, and involves a deeper study of the relationships among environmental quality, ecology, socio-economic factors and political trends, through a holistic view of problems (CARLSSON; MKANDLA, 1999; TILBURY, 1995), instead of merely achieving behavior modification, the purpose of earlier versions of environmental education terminology. Breiting (1997), Calvo and Franquesa (1998) and Curiel (2001) establish as the objective of environmental education the development in students of the capacity for action, that is, the use of democracy as a frame of reference, as well as dialogue, negotiation and consensus to resolve conflicts, emphasizing the participation of individuals in these procedures as an essential part of their training.
Behaviors focused on changing the paradigm of environmental education are considered a way of rethinking our relations with the biosphere and, at the same time, an instrument of social transformation (NOVO, 2009). The global scope and depth of the sustainability challenge require the participation of everyone, in particular of the people who will make decisions in the future. Entrepreneurs, scientists, engineers, lawyers and pedagogues are needed who can provide solutions to sustainability problems in their jobs and within their spheres of competence (MARTÍNEZ; AZNAR; ULL; PIÑERO, 2003). Since the Belgrade Conference (1975), it has been established that environmental education involves a permanent process in which individuals and the community become aware of their environment and acquire the knowledge, values, skills, experience and determination that allow them to act, individually and collectively, in the resolution of present problems and of those that will come in the future.
Caride Gómez (2017) mentions that research in environmental education is, by its very nature, necessary and inexcusable, built on the scenarios that pedagogical knowledge enables in its convergence with knowledge in the social and environmental fields, and should be reflected in conceptual, epistemological, theoretical, methodological and academic frameworks. Environmental education reveals the predominant relationships of human beings with the environment, the causes of environmental problems and their possible consequences (FLORES, 2012). Andrade Frich and Ortiz Espejel (2008) suggest that, through research in environmental education, models of environmental development and management can be established. Thus, at present, there are changes in the terminology of this discipline, so that it includes in greater depth the dimension of sustainable development.
The union of education with sustainable development was compiled for the first time in the report of the World Commission on Environment and Development (1987), called Our Common Future. However, as the changes in terminology show, in many cases they do not help but rather generate new confusion, shifting attention away from what is really important in the new currents of thought. One should also take into account the internal environment of the person, such as the psychobiological factors of each individual, and the external environment, which covers the natural environment, the social environment (the organization of social groups) and the artificial or technological environment, which designates all the things invented by the human species (CURIEL, 2001), moving the terminology from an ecological perspective towards an integral vision of society and nature. Flores (2012) recognizes that environmental behaviors are not explained in themselves but within the socio-cultural context in which they occur, making it possible to identify the opportunities offered by interaction and by school or non-school work, as well as the types of restrictions imposed by their classification and social rank. Some of its characteristics are:
• the articulation of environmental aspects with educational aspects;
• a complex object of study, with its concept of an integral environment (natural, social and built);
• the questioning of the practices that give rise to environmental problems;
• the search for comprehensive and holistic answers.
On the other hand, Ramos (1992) states that work derived from educational research determines the pedagogical conditions, the modalities of teacher intervention, and the most effective procedures for the assimilation of knowledge and the modification of the public's concepts, values and attitudes; since environmental education is the objective, it is constituted by the relationships between environmental and educational aspects. For some researchers its borders are not so clear, because of the different biases addressed (for example, the ecological or the anthropogenic), causing confusion once again, such as focusing attention on conservation practices which, although corresponding to one perspective within environmental education, do not by themselves constitute this type of education (FLORES, 2012).
According to Morin (1999), education is impossible without a reform of thought that leads to a true process of apprehension of the human being as a complex subject who thinks, feels, knows, values, acts and communicates. This principle is also valid for environmental education, in which information and communication technology strategies have also been used.
It is a priority that contemporary societies facilitate the feedback between environmental education and the needs of the communities in which the school is inserted. This also represents an opportunity for environmental education to reach the communities' inhabitants, collaborating in the understanding of their problems and in the search for answers as alternatives to environmental problems; in the community, educational actions can be developed in which people tend to participate actively and to self-organize to manage possible solutions. Salgado-Carmona and Sato (2012) mention that education linked to the community promotes respect for biological and cultural diversity, so that societies strengthen themselves and resist a capitalist model that devastates the relationships of human beings with each other and with their environment.
Through the link between school and community, the aim is to give value to the cultures of the communities by promoting recognition of their identity and their relationship with others and with the natural environment. Rodríguez and Hernández (2012) also demonstrated in their research that environmental problems of the school and the community can be mitigated through environmental programs made up of concrete and viable actions, designed and executed by the students themselves: at school, environmental actions are carried out in a fun and pleasant way; in the community, greater responsibilities arise from the commitment assumed with the population and the authorities, generating values of responsibility, maturity, discipline and companionship in caring for the community's environment. Agroecology projects are also presented, such as community gardens, which promote critical awareness of consumption as well as healthy eating. This commits to an integrative environmental education that contemplates the union between disciplines and knowledge; theory and practice; epistemology, politics and ethics; actors and sectors; the local, regional and global; as well as past, present and future. Flores (2012) thus concluded that environmental education, and research in environmental education, help to see environmental problems differently: not solving them directly, but generating information that promotes knowledge for transforming the relationship of human beings with their environment.
Perception is not necessarily supported by a neutral way of contemplating the world: certain problems are privileged over others, with varying degrees of importance influenced by interests and power relations. Because observation alone does not render reality free of distortion, the study of perception is very important (FLORES; REYES, 2010).
In their research, Montaño and Conde (2012) studied the perception of environmental problems in the city of Arauca, comparing the assessments of a youth group and an adult group. Both groups received information from different sources on environmental issues and recognized the existence of environmental problems, but lacked tools to help them value the environment; hence the need for a citizenry with a basic education more strongly oriented towards the environment, for educational campaigns and training, and for the encouragement of environmental management by governments, fines and sanctions, and more natural areas. Respondents mentioned that the main problems in the region are related to the oil industry (such industries are located in that region), garbage pollution, the burning of garbage and the pollution of rivers, these being the four most cited, and also noted the influence of the media, which can help generate education and awareness, with studies and TV highlighted by those surveyed as means of influence. Articles of the same nature state that the educational instance is not having an effect on the attitudes and beliefs of the students (MONTAÑO; CONDE, 2012).
Based on the above, the objective of this research was to evaluate the ecological-environmental perception of children and young people from schools in the Obligado District, Department of Itapúa, in order to determine their vision of their environment and their perspective on its future. To achieve this objective, the study was designed as non-experimental and descriptive.
Materials and methods
The investigation was carried out in the Obligado District, Department of Itapúa, Paraguay, in schools offering the third cycle of basic schooling, which corresponds to a total of eight schools. Four of them were selected for the survey according to their location and distribution: three urban and one rural, one of the urban schools being private-subsidized and the others public institutions. In the country there is no instance for the study and approval of this type of analysis; the research does not belong to the biomedical field and does not imply any risk. Therefore the work, and consequently the questionnaire applied, being non-invasive, did not require review by an ethics committee or similar body. The instrument was nevertheless developed strictly on the basis of the objectives set for the investigation, and was made known to, considered and consented to by the educational authorities of each institution under analysis.
The population under study, to which the survey was applied, comprised third-cycle students aged 12 to 15 (7th to 9th grade), 40 students in total. The general data of the institutions surveyed are detailed in Table 1. The methodology adopted was non-experimental and descriptive, based on a mixed survey (closed and open questions) with general questions on perception in the environmental field, which helped to evaluate the valuing of environmental care and its importance in ecology. The survey consists of five sections and their respective questions:
» Environmental knowledge:
• Knowledge of the word environment.
• The degree of knowledge.
» Environmental perception:
• Selection of ecosystem components and their level of importance.
• Importance of environmental balance.
• Existence of environmental problems in the country.
• Environmental situation in the country.
• Environmental situation in the region.
• World environmental situation.
• Level of impact of environmental problems.
» Environmental management:
• Access to information about the environment and its problems.
• What are the main environmental risks at the country level?
• Whom does the treatment of environmental problems correspond to?
• Worldwide impact of environmental problems.
» Environmental sensitivity:
• Compliance with global warming forecasts.
• Human activities impact the environment.
• It is time to enhance positive changes.
• Level of impact on the environment of the aforementioned activities.
• Incidence of the mentioned activities in raising awareness.
» Environmental behavior:
• Description of the environment in which one lives.
• Draw how you see the environment in the future.
The first four sections consisted of multiple-choice questions and the last one of open-ended questions, among which the students were asked to describe how they see the future of the environment, emphasizing their own perspective. Quantitative and qualitative approaches were taken for data collection and analysis, and the data are presented in graphs, weighting the results obtained.
For the multiple-choice questions, the following response options were used:
0 = Never / Bad / No
1 = Fair (enough)
2 = Fairly
3 = Very much / Excellent
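As a minimal sketch of how responses on this 0-3 scale can be tallied into the percentage figures reported below, the following Python snippet counts hypothetical answers to a single item (these responses are illustrative, not the survey data).

from collections import Counter

SCALE = {0: "Never/Bad/No", 1: "Fair", 2: "Fairly", 3: "Very much/Excellent"}
responses = [2, 2, 3, 1, 2, 2, 3, 2, 1, 2]   # one item, ten hypothetical students

counts = Counter(responses)
for score, label in SCALE.items():
    pct = 100.0 * counts.get(score, 0) / len(responses)
    print(f"{label}: {pct:.1f}%")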
Results and discussion
The survey was conducted on 40 students in order to obtain a general database on their environmental knowledge, perception, management, sensitivity and behavior. New knowledge on environmental issues is constantly being generated, and it is very important to emphasize the valuing and care of the environment from adolescence, thus facilitating the adoption of good practices related to environmental care. When asked if they knew the word "environment", 72.5% answered "quite a lot", 15% "regularly" and 12.5% "a lot", which should be taken into account. In general, these students consider themselves only moderately informed; however, in this age group a basic grounding makes it easier to learn new concepts and to become active in caring for the environment, and it can be inferred that the country includes environmental topics in its study program for the grades surveyed.
Regarding their degree of understanding of environmental issues, 67.5% of the students considered themselves quite informed, 27.5% regularly informed and 5% very well informed. These data were considered sufficient to indicate how the students perceive their own understanding of environmental issues, and they are aware that they need to keep learning, since they are at the beginning of their educational path.
At the beginning of every person's learning process, schools instil basic skills and first impressions of the environment are formed; in these early stages it is especially important to promote an accurate perception, particularly of issues related to the natural sciences.
Perceptions are the first awareness of something through the impressions that the senses communicate, and they are the focus of the second part of the survey; as Vázquez and Elejalde (2013) mention, a very important part of environmental education and awareness is a person's perception when understanding and making judgments about environmental care.
When asked about the level of importance of some components of an ecosystem, water (62.5%) and air (55%) took first and second place (a ranking possibly influenced by the environmental campaigns that always address these components), followed by human beings in third place with 50%, as can be verified in Figure 1. Regarding whether environmental balance is important, about 70% of the students considered it very important, in accordance with what Carlsson and Mkandla (1999) and Tilbury (1995) mention on the growth of sustainable development, which is increasingly applied in environmental education in schools and is entering the Paraguayan basic education program.
More than half of the students stressed that there are environmental problems in the country, and many of them; the situation of their region was considered regular when compared with the rest of the country, which was also seen as regular, while the global environmental situation was perceived as bad. The students thus recognize the existence of environmental problems at the world, country and regional levels and are aware of them, which is of utmost importance since, as mentioned by some authors, the capacity for action is part of the objectives of environmental education (FIGURE 2). Next, respondents were asked to rate the importance, in the region they inhabit, of some environmental problems commonly found worldwide. Environmental pollution was given the greatest importance (45%), followed by the high poverty rate (25%), the lack of decent housing (22%) and the low level of education (22%), with social problems considered of lesser importance (17.5%) compared to the other items; in general, all the environmental problems were rated (FIGURE 3). Another question asked whether they considered that the environmental problems mentioned could be solved: 75% saw enough or many possibilities of reaching a solution, and this optimism is important, as it is part of knowing that a solution can be reached if they get involved.
When asked where or how they access information about the environment and its problems, the students responded that the main sources were television and the educational institutions, with specific campaigns on environmental issues being the least valued (FIGURE 4). These data agree with the findings of Montaño and Conde (2012), in whose study institutions and television were also highlighted, hence their influence on general education. Regarding some frequently heard environmental problems, the students were asked to assess their importance at the country level: the greatest importance was given to water pollution, waste and the disappearance of species and, to a lesser extent, acid rain, although this type of rain has not yet been registered in the country; these results are reflected in Figure 5. This is in line with what Montaño and Conde (2012) observed among their interviewees regarding water pollution and waste management, issues that can be related to the various campaigns focused on them. Regarding environmental sensitivity, when asked whether they believed global warming forecasts would be fulfilled, 73% responded "quite a lot".
Likewise, when asked whether human activities produce environmental impacts, they answered yes and that they are aware of it, but they also said there is still time to make changes to improve quality of life, showing optimism about positive change.
As Figure 6 shows, when asked about the activities that most destroy the environment, deforestation, non-recyclable waste and agrochemical waste were given the highest scores, while industry was valued lower, possibly because of the environment in which these students live. On awareness-raising, among the activities mentioned, penalties for infractions were seen as influencing people to the greatest extent, and incentives to a lesser extent. Among the most popular topics mentioned, shown in Figure 7, the most valued were the use and contamination of water, recycling and deforestation, and among the least mentioned, the melting of the polar ice caps. The influence of environmental education on awareness of natural resource management was positively valued. It is very important that information reaches the whole community, since this is the most effective way; secondly, politicians and administrative authorities should be aware of their importance and influence in community decision-making.
When asked about their interest in the subject and their openness to discussing it, 75% of respondents said they would be inclined to initiate conversations about it.
Regarding the respondents' behavior towards their environment, the general analysis of the responses showed that 60% are not satisfied with their environment, citing pollution, excessive garbage and a lack of conscious and collaborative people; 30% said they were satisfied, but with things to improve. They were also asked to make a drawing showing how they visualize the future of the environment: 70% of the respondents depicted the future from a negative perspective, with destruction of biodiversity, cleanliness and garbage being the most cited issues.
Conclusion
The students' perception of the environment reveals the need for a basic education that cannot be delivered in isolation: society must participate for results to be achieved, with attention to the use of technology and to the fact that we are all part of the environment, so each person's attitude matters. Informal education exerts an influence, mainly through television channels, which reach widely and influence children and young people regarding the prevention and care of the environment, although with a tendency to present negative aspects rather than the positive advances achieved. When the survey results were analysed, a certain contradiction was perceived: on the one hand, the students expressed hopes of change in the face of the current environmental problems; on the other, in their drawings the future is presented negatively, indicating uncertainty or fear, yet also a longing for a better future. Naturalist campaigns focused on waste management and reforestation positively influence the sensitivity of children and youth. In a second stage of the investigation, a series of activities usual in awareness campaigns is planned, aiming to establish the best approach to be considered in formal education.
Development of Novel Rifampicin-Derived P-Glycoprotein Activators/Inducers. Synthesis, In Silico Analysis and Application in the RBE4 Cell Model, Using Paraquat as Substrate
P-glycoprotein (P-gp) is a 170 kDa transmembrane protein involved in the outward transport of many structurally unrelated substrates. P-gp activation/induction may function as an antidotal pathway to prevent the cytotoxicity of these substrates. In the present study we aimed at testing rifampicin (Rif) and three newly synthesized Rif derivatives (a mono-methoxylated derivative, MeORif, a peracetylated derivative, PerAcRif, and a reduced derivative, RedRif) to establish their ability to modulate P-gp expression and activity in a cellular model of the rat's blood-brain barrier, the RBE4 cell line. P-gp expression was assessed by western blot using the C219 anti-P-gp antibody. P-gp function was evaluated by flow cytometry, measuring the accumulation of rhodamine 123. Whenever P-gp activation/induction ability was detected in a tested compound, its antidotal effect was further tested using paraquat as a cytotoxicity model. Interactions between Rif or its derivatives and P-gp were also investigated by computational analysis. Rif led to a significant increase in P-gp expression at 72 h, and RedRif significantly increased both P-gp expression and activity. No significant differences were observed for the other derivatives. Pre- or simultaneous treatment with RedRif protected cells against paraquat-induced cytotoxicity, an effect reverted by GF120918, a P-gp inhibitor, corroborating the observed P-gp activation ability. Interaction of RedRif with the P-gp drug-binding pocket was consistent with an activation mechanism of action, which was confirmed by docking studies. Therefore, RedRif protection against paraquat-induced cytotoxicity in RBE4 cells, through P-gp activation/induction, suggests that it may be useful as an antidote for cytotoxic substrates of P-gp.
Introduction
P-glycoprotein (P-gp) is a 170 kDa ATP-dependent transmembrane protein, belonging to the ATP binding cassette (ABC) superfamily, which promotes the outward transport of a wide spectrum of structurally unrelated compounds from various cell types [1]. It was first isolated from colchicine-resistant Chinese hamster ovary cells, where it modulated drug permeability [2], hence its name, where P stands for "permeability". P-gp was initially associated with a multidrug resistance phenotype due to its overexpression in many cell types [3][4][5][6][7][8]. In fact, inhibition of its transport activity has long been seen as a strategy to overcome such resistance [9][10][11][12]. However, further studies suggested a protective role for P-gp (in alliance with metabolizing enzymes) due to its widespread constitutive expression in various blood-tissue barriers [13]. P-gp has been found physiologically expressed in enterocytes, hepatocytes and proximal tubule cells in the kidneys [14], in the placenta and the testis [15], and also in the endothelial cells that compose the blood-brain barrier (BBB) [16]. The presence of P-gp at the BBB suggests an important role in protecting the brain against the noxious effects of P-gp substrates [8,17,18].
Given the importance of P-gp transport activity in the protection of sensitive tissues, such as the brain, P-gp activation/induction has previously been proposed as an antidotal way to prevent toxicity mediated by P-gp substrates such as paraquat (PQ) [19][20][21]. While a P-gp inducer promotes an increase in the transporter's expression, from which is expected an increase in its activity, an activator is a compound that binds to P-gp and induces a conformational alteration that stimulates the transport of a substrate on another binding site. For example, Hoechst-33342 and Rhodamine-123 (Rho 123) act by this cooperative mode of action [22]. This functional model of P-gp suggested that the efflux pump contained at least two positively cooperative sites (H site and R site, for Hoechst-33342 and Rho 123, respectively) for drug binding and transport [22]. Therefore, this approach has the advantage of promoting P-gp transport function, without interfering with protein expression levels, which makes it a more rapid and clean process than P-gp induction. While some drug-drug interactions are still expected between P-gp activators/inducers and clinically used drugs that are substrates for P-gp (as occurs with P-gp inhibitors), these are expected to be attenuated, or even prevented, due to the short therapeutic period regularly required in an antidotal scheme.
Rifampicin (Rif, Figure 1) has been described to induce P-gp expression and activity in lymphocytes, intestinal cells and renal cells, both in vivo and in vitro [23][24][25][26], via the pregnane X receptor (PXR) pathway. Although Rif's ability to induce P-gp has been reported to be species-specific (due to ligand-binding cavity differences between human and rat PXR), some authors have recently reported Rif-induced P-gp overexpression in vivo in rat, and in rat cell lines and primary cultures [27,28]. In the present study we synthesized three Rif derivatives (a mono-methoxylated derivative, MeORif, and a peracetylated derivative, PerAcRif, compounds that have never been described before, and a reduced derivative, RedRif, first described in 2012 [29]; Figure 1) in order to evaluate their ability to modulate P-gp expression and activity, and also to determine their potential to protect against PQ-induced cytotoxicity in an in vitro model of the BBB, the immortalized rat brain endothelial cell line RBE4. This cell line expresses high levels of functional P-gp and is generally accepted as a suitable in vitro model for the study of the transport functions of the BBB [30].
All other reagents used were of analytical grade or of the highest available grade.
Synthesis of rifampicin derivatives
Thin-layer chromatography (TLC) was conducted on Merck Kieselgel 60 F254 silica gel plates, 0.2, 0.5 and 1 mm thick. Infrared (IR) spectra were recorded on a PerkinElmer Spectrum 1000 as potassium bromide (KBr) pellets. Proton and carbon nuclear magnetic resonance spectra (1H and 13C NMR) were recorded on a Bruker ARX 400 spectrometer at 400 and 100.62 MHz, respectively. Chemical shifts are expressed in ppm, downfield from tetramethylsilane (δ = 0) as an internal standard; J values are given in Hz. The exact attribution of NMR signals was performed using two-dimensional NMR experiments. Fast atom bombardment (FAB) mass spectra were recorded at the University of Santiago de Compostela, Spain (Unidade de Espectrometría de Masas).
Diazomethane was prepared by hydrolysis of an ethereal solution of N-methyl-N-nitroso-p-toluenesulfonamide (Diazald), according to a well-established method [31].
The synthesized compounds were pure when analyzed by NMR. Elemental analysis of the new Rif analogues RedRif and PerAcRif was carried out on a Thermo Finnigan Flash EA1112 (Bremen, Germany). HPLC analysis of MeORif was conducted on a Merck Hitachi system consisting of an L-7100 pump, a Rheodyne-type injector, a D-7000 interface and an L-7450A diode array spectrometric detector, using a LiChrospher 100 RP-18 column, with water/methanol (at pH 2.5) as mobile phase.
Peracetylated rifampicin (PerAcRif)
A solution of Rif (0.1 g, 0.12 mmol) in acetic anhydride (1 mL) was added dropwise to a solution of pyridine (70 µL, 0.9 mmol) in acetic anhydride (1.5 mL) at 0 °C. The solution was stirred at 0 °C until total consumption of Rif. The mixture was poured over ice/water and extracted with methylene chloride. The organic phase was dried with anhydrous Na2SO4 and evaporated to dryness.

Figure 2. Rif and RedRif's effect on P-glycoprotein expression. Cells were exposed to 10 µM Rif or RedRif, and P-gp expression was evaluated by western blot after 24, 48 and 72 h of exposure, using the C219 anti-P-gp antibody. Rif significantly increased P-gp expression after 72 h, while RedRif induced a significant increase in P-gp expression from 48 h on. Results refer to mean ± SD of 3 or 4 independent experiments. Differences between treated and untreated cells were estimated using two-way ANOVA followed by Bonferroni's multiple comparison post hoc test. ***p<0.001 and ****p<0.0001 vs. control.
Cytotoxicity assays
Cells were seeded in 96-well plates at a density of 10 000 cells per well. Three days after seeding, cells were treated with 0.1 to 50 µM of Rif, MeORif, PerAcRif or RedRif, and cytotoxicity was evaluated after 24, 48 and 72 h by the NR uptake assay and by the MTT reduction assay. The PQ cytotoxicity profile in this cell line has been previously established [33]. Each experiment was performed in triplicate and independently repeated at least 3 times.

Figure 3. RedRif's effect on P-glycoprotein activity - rhodamine 123 efflux ratio. P-gp activity is proportional to the ratio between Rho 123 intracellular fluorescence from inhibited (+CyA) and non-inhibited (-CyA) cells. A significant increase in P-gp activity was found in RedRif-treated cells after 24 h (P-gp activation effect) and 72 h (P-gp induction effect) of exposure. Rif did not significantly change P-gp activity. Results refer to mean ± SD of at least 3 independent experiments performed in triplicate. Differences between treated and untreated cells were estimated using two-way ANOVA followed by Bonferroni's multiple comparison post hoc test. ***p<0.001 vs. control. doi: 10.1371/journal.pone.0074425.g003
Neutral Red Uptake Assay
At the end of each predefined time point, the cells were incubated with neutral red (50 µg/mL in cell culture medium, 90 min at 37 °C). The dye absorbed by viable cells was extracted (absolute ethyl alcohol/distilled water (1:1) with 5% acetic acid) and the absorbance was measured at 540 nm using a microtiter plate reader (PowerWaveX; Bio-Tek Instruments).
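The percent-of-control readout used here (and in the MTT assay below) is a simple normalisation of blank-corrected absorbances; the Python sketch below illustrates it with hypothetical readings.

import numpy as np

blank = 0.06                                    # background absorbance (no cells)
control = np.array([0.82, 0.79, 0.85]) - blank  # untreated wells
treated = np.array([0.44, 0.41, 0.47]) - blank  # compound-treated wells

viability_pct = 100.0 * treated.mean() / control.mean()
print(f"Viability = {viability_pct:.1f}% of control")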
MTT reduction assay
At the end of the incubation periods, 150 µL of 0.5 mg/mL MTT solution was added to each well, followed by incubation of the plates for 30 min at 37 °C. The reaction was terminated by removal of the media and addition of 150 µL of dimethyl sulfoxide. Levels of reduced MTT were determined by measuring the absorbance at 550 nm using a microtiter plate reader (PowerWaveX; Bio-Tek Instruments).

Figure 4. The NR uptake assay was performed to assess RedRif's effect on PQ cytotoxicity in: a) simultaneous exposure to RedRif and PQ for 48 h (study of the P-gp activation effect); and b) 24 h, c) 48 h and d) 72 h of exposure to RedRif before exposure to PQ (study of the P-gp induction effect). RedRif's protective effect against PQ-induced cytotoxicity was more significant in the simultaneous exposure assay. Two-way ANOVA was performed to estimate the differences between control (black bars) and RedRif-treated (grey bars) cells for each PQ concentration. Concentration-response curves shown as inserts were fitted using least squares as the fitting method, and the comparisons between the curves obtained in the presence and the absence of RedRif (bottom, top and EC50) were made using the extra sum-of-squares F test. At least 3 independent experiments were performed in triplicate. **p<0.01, ***p<0.001 and ****p<0.0001 for differences between control and RedRif-treated cells for each PQ concentration and for differences between the curves vs. control; $p<0.05 and $$p<0.01 for differences in EC50 vs. control. doi: 10.1371/journal.pone.0074425.g004
Western Blot analysis for P-gp expression assessment
Cells were seeded in 6-well plates at a density of 300 000 cells per well. Three days after seeding, cells were treated with 10 µM Rif, RedRif or PerAcRif, or 5 µM MeORif. After 24, 48 or 72 h of incubation, cells were washed twice with HBSS (+/+) and lysed in a lysis buffer containing 1% Triton X-100, 5 mM ethylene glycol tetraacetic acid and 150 mM NaCl in 50 mM Tris-HCl, pH 7.5, for 30 min at 4 °C. Dithiothreitol (1 mM), phenylmethanesulfonyl fluoride (0.25 mM) and 1% protease inhibitor cocktail (Sigma-Aldrich, Inc., St. Louis, MO, USA) were added to the buffer immediately before use. The lysates were centrifuged at 10 000 g for 10 min at 4 °C and the supernatants were stored at −80 °C until use. The protein content of each sample was determined according to Lowry's method [34] using the DC protein kit. The same amount of protein (35 µg) extracted from RBE4 cells was then separated by electrophoresis on a 7.5% SDS-polyacrylamide gel and electrophoretically transferred to a nitrocellulose membrane. The membrane was washed with Tris-buffered saline solution (TBS: 20 mM Tris-HCl, 300 mM NaCl, pH 8.0) and blocked in blocking buffer [TBS with 0.05% Tween-20 (TBS-T) and 5% dried skim milk] overnight at 4 °C. The membrane was then incubated with the primary monoclonal antibody against P-gp, C219, diluted 1:400 in blocking buffer, overnight at 4 °C or, in parallel, with an anti-α-tubulin antibody (1:5000) to ascertain equal protein loading. After washing with TBS-T, the membranes were incubated with the secondary antibody (anti-mouse IgG-horseradish peroxidase, 1:1000 or 1:2000, respectively) at room temperature for 3 h. Detection of protein bands was performed using ECL Plus chemiluminescence reagents (Amersham Pharmacia Biotech), according to the supplier's instructions, and developed on high-performance chemiluminescence films (Amersham Pharmacia Biotech) with Kodak film developer and Kodak fixer (Sigma-Aldrich). Bands in the films were quantified using ImageJ software (National Institutes of Health). Optical density results were expressed as percentage of control.
P-gp activity assessment -Effects on Rho 123 accumulation
Cells were seeded in 12-well plates at a density of 200 000 cells per well. Three days after seeding, cells were exposed to the compounds for 24, 48 and 72 h. At the end of each timepoint, cells were washed with HBSS (-/-), dissociated with 0.25% trypsin-EDTA and suspended in cell culture medium. Each collected well was divided into two aliquots. Half the aliquots were incubated with 2 µM Rho 123 in HBSS (+/+) supplemented with 10% FBS for 30 min, in the dark, in a shaking water bath, at 37°C. The other half was incubated in the same conditions but with 2 µM Rho 123 plus 10 µM CyA in HBSS (+/+) supplemented with 10% FBS. After this incubation period, cells were washed twice with ice-cold HBSS (+/+) and kept on ice until flow cytometry analysis. This assay was also performed replacing CyA with 20 µM MK571, to assess any influence of the MRP1 transporter on the efflux of Rho 123.

Figure 5. Reversal of RedRif-induced P-glycoprotein protective effect against paraquat cytotoxicity. a) Effect of P-gp blockade by the potent P-gp inhibitor GF120918 in cells simultaneously exposed for 48 h to RedRif and PQ, with (black bars; dashed line) or without (grey bars; filled line) GF120918. b) All of RedRif's protective effect was mediated by P-gp, as no differences were observed between PQ-only and RedRif+PQ+GF120918-treated cells. Two-way ANOVA was performed to estimate the differences between RedRif+PQ or PQ (black bars) and RedRif+PQ+GF120918 treatment (grey bars) for each PQ concentration. Concentration-response curves were fitted using least squares as the fitting method, and the comparisons between the curves obtained in the presence and the absence of GF120918 (bottom, top and EC50) were made using the extra sum-of-squares F test. At least 3 independent experiments were performed in triplicate. Significant differences were observed in the presence of GF120918. *p<0.05, **p<0.01 and ***p<0.001 for differences related to the presence of GF120918 and for differences between the curves; $p<0.05 for differences in EC50 vs. control.
Flow cytometry
Median fluorescence intensity values for each sample were assessed using a Becton Dickinson FACSCalibur™ flow cytometer (Becton Dickinson, Inc., Mountain View, CA, USA) equipped with a 488 nm argon-ion laser. Flow cytometry conditions were set as previously described [35]. Analysis was gated to exclude dead cells on the basis of their forward and side light scatters and the propidium iodide (5 µg/mL) incorporation, based on the acquisition of data for at least 10 000 cells. Obtained data were analysed using the BDIS CellQuest Pro software (Becton Dickinson, New Jersey, USA).
The green fluorescence due to Rho 123 was followed in channel 1 (FL1) and plotted as a histogram of FL1 staining. P-gp activity was expressed as a percentage of control of the Rho 123 efflux ratio, which was obtained as the ratio between the median fluorescence intensity values of intracellular Rho 123 in the presence and in the absence of CyA.
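This efflux-ratio arithmetic reduces to two divisions, made explicit in the short sketch below; the function names and MFI values are illustrative, not the study's data.

```python
# A minimal sketch of the Rho 123 efflux-ratio arithmetic described above;
# all names and example values are hypothetical.

def efflux_ratio(mfi_without_cya: float, mfi_with_cya: float) -> float:
    """Rho 123 efflux ratio: intracellular Rho 123 accumulates when P-gp is
    blocked by CyA, so the ratio (with CyA / without CyA) grows with efflux."""
    return mfi_with_cya / mfi_without_cya

def percent_of_control(treated: float, control: float) -> float:
    """Express a treated sample's efflux ratio as a percentage of control."""
    return 100.0 * treated / control

# Hypothetical median fluorescence intensities (arbitrary units)
control_ratio = efflux_ratio(mfi_without_cya=120.0, mfi_with_cya=300.0)
redrif_ratio = efflux_ratio(mfi_without_cya=80.0, mfi_with_cya=310.0)

print(f"Control efflux ratio: {control_ratio:.2f}")
print(f"RedRif efflux ratio: {redrif_ratio:.2f} "
      f"({percent_of_control(redrif_ratio, control_ratio):.0f}% of control)")
```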
Effects on PQ-induced cytotoxicity
The effect of RedRif on the PQ cytotoxicity profile was assessed both in pre-exposure to RedRif for 24, 48 or 72 h before PQ exposure for 48 h (P-gp induction effect), and in simultaneous exposure to RedRif and PQ for 48 h (P-gp activation effect). Briefly, three days after seeding in 96-well plates (10 000 cells per well), cells were exposed to 10 µM RedRif alone (pre-exposure) or simultaneously with increasing concentrations of PQ (0.5-50 mM). For the pre-exposure assay, 24, 48 or 72 h after pre-exposure, RedRif was removed and replaced with increasing concentrations of PQ. The NR assay was performed as described above to assess PQ cytotoxicity 48 h after any exposure to PQ. A control for PQ cytotoxicity alone was performed in parallel in all procedures.
P-gp's role in RedRif's protective effects against PQ-induced cytotoxicity
Cells were simultaneously exposed to 10 µM RedRif and increasing concentrations of PQ (0.5-50 mM), in the presence or absence of 10 µM GF120918, for 48 h. The NR uptake assay was then performed as described above.
Docking on P-gp model
Docking simulations were done considering only the drug-binding pocket formed by the transmembrane domain interfaces of P-gp. Docking simulations were performed between the P-gp model [previously described in [36]] and Rif, MeORif, PerAcRif, RedRif and 18 known P-gp activators [36][37][38].
Statistical analysis
All data are expressed as means ± standard deviation (SD). One-way analysis of variance (ANOVA) was used to determine the statistical significance of differences in cytotoxicity between control and each compound concentration. If the analysis was significant, the differences were estimated using Dunn's Multiple Comparison post hoc test. Two-way ANOVA followed by Bonferroni's Multiple Comparison post hoc test was used to assess differences in P-gp expression and activity between control and treated cells over time, and to estimate RedRif's effects on PQ cytotoxicity. The best-fit non-linear regression model was applied to evaluate differences between concentration-response curves for PQ-induced toxicity. The 0.05 level of probability was used as the criterion of significance. All analyses were performed using GraphPad Prism software v 5.01 (GraphPad Software, San Diego, CA).
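The curve-fitting and curve-comparison workflow above can be sketched outside GraphPad as follows. This is a simplified illustration assuming a four-parameter logistic model and comparing one pooled curve against two separate curves; the data are invented, and GraphPad's exact parameter-sharing scheme (bottom, top and EC50) is not reproduced.

```python
# A simplified sketch of the concentration-response analysis: least-squares
# fit of a four-parameter logistic model and an extra sum-of-squares F test.
# All data are hypothetical.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import f as f_dist

def logistic4(x, bottom, top, ec50, hill):
    """Four-parameter logistic: 'top' at low dose, 'bottom' at high dose."""
    return bottom + (top - bottom) / (1.0 + (x / ec50) ** hill)

def fit_sse(x, y, p0):
    """Fit the model and return (parameters, sum of squared residuals)."""
    popt, _ = curve_fit(logistic4, x, y, p0=p0, maxfev=10000)
    return popt, float(np.sum((y - logistic4(x, *popt)) ** 2))

# Hypothetical viability data (% of control) vs. PQ concentration (mM)
pq = np.array([0.5, 1.0, 5.0, 10.0, 15.0, 50.0])
ctrl = np.array([95.0, 85.0, 55.0, 30.0, 18.0, 5.0])
redrif = np.array([98.0, 95.0, 75.0, 55.0, 30.0, 8.0])

p0 = [5.0, 100.0, 5.0, 1.0]
_, sse_ctrl = fit_sse(pq, ctrl, p0)
popt_red, sse_red = fit_sse(pq, redrif, p0)
sse_separate = sse_ctrl + sse_red                 # full model: 8 parameters

# Null model: a single curve fitted to the pooled data (4 parameters)
_, sse_pooled = fit_sse(np.tile(pq, 2), np.concatenate([ctrl, redrif]), p0)

df_full = 2 * len(pq) - 8
df_null = 2 * len(pq) - 4
F = ((sse_pooled - sse_separate) / (df_null - df_full)) / (sse_separate / df_full)
p_value = f_dist.sf(F, df_null - df_full, df_full)

print(f"RedRif EC50 estimate: {popt_red[2]:.2f} mM")
print(f"Extra sum-of-squares F test: F = {F:.2f}, p = {p_value:.4f}")
```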
Synthesis of RedRif, MeORif and PerAcRif
The synthesis of the three described compounds followed standard methods for the reduction of imides, the methylation of acidic hydroxyl groups and the acetylation of hydroxyl groups. The structures of the compounds were confirmed by two-dimensional NMR techniques (which allowed attributing the position of the new substituents on the Rif backbone) and mass spectrometry (which allowed confirming the number of new substituents). After our synthesis of RedRif, other authors published the synthesis of the same compound by a similar method with identical results [29]. The RedRif, MeORif and PerAcRif molecular structures and synthesis conditions are represented in Figure 1.
Cytotoxicity profiles of Rif, RedRif, MeORif and PerAcRif
A concentration range of each compound (0.1-50 µM) was tested in RBE4 cells for incubation periods of 24, 48 and 72 h. In all cases, a viability rate of more than 85% was required to proceed with the study. Cytotoxicity profiles for Rif, RedRif, PerAcRif and MeORif are available as supplementary information (Figures S1 and S2). Significant decreases in cell viability were observed at 50 µM Rif, RedRif and PerAcRif and at 10 µM MeORif. Therefore, Rif, RedRif and PerAcRif were further tested in P-gp modulation studies at 10 µM, and MeORif at 5 µM.
Rif and RedRif increased P-gp expression in RBE4 cells
P-gp expression assessment was performed by western blot using C219 anti-P-gp antibody. A significant increase in P-gp expression was found in Rif-treated cells after 72 h (p<0.001) and in RedRif-treated cells after 48 and 72 h of exposure (p<0.001), as shown in Figure 2. The remaining derivatives, MeORif and PerAcRif, did not significantly change P-gp expression in this cell line (data not shown).
RedRif increased P-gp activity in RBE4 cells
P-gp function was evaluated by flow cytometry using a fluorescent P-gp substrate, Rho 123, in the presence and in the absence of the P-gp inhibitor CyA, and represented as the ratio of Rho 123 transported out of cells. This is a widely used methodology to evaluate P-gp functionality in many cell types [20,35,39,40]. Although both Rho 123 and CyA have been reported to interact with multidrug resistance protein 1 (MRP-1) [41,42], and MRP-1 is known to be expressed in RBE4 cells [43], the contribution of this efflux pump to Rho 123 transport was negligible, as MK571 (an MRP-1 inhibitor) did not enhance the accumulation of Rho 123 in the cells (data not shown). This result indicated that Rho 123 and CyA are suitable tools to evaluate P-gp activity in this cell model, as previously suggested [44].
RedRif induced a significant increase in Rho 123 efflux ratio after 24 and 72 h (p<0.001), as shown in Figure 3. The model compound, Rif (Figure 3), and the other derivatives (data not shown) did not significantly alter P-gp functionality in RBE4 cells.
RedRif protects against PQ-induced cytotoxicity through P-gp activation
The effect of the observed P-gp activation/induction by RedRif on the cytotoxicity profile of PQ in RBE4 cells was then evaluated. Simultaneous exposure to RedRif and PQ for 48 h significantly increased cell viability at the 1 (p<0.001), 5 (p<0.01) and 10 (p<0.001) mM PQ concentrations, resulted in significantly different curves (p<0.0001) and in a significant increase in PQ EC50, as represented in Figure 4a. Pre-exposing cells to RedRif for 24 h led to a significant protection from the PQ-induced toxic effect at 15 mM PQ (p<0.001), resulting in significantly different curves (p=0.0043) and an increased PQ EC50 (p=0.0216), as represented in Figure 4b. This effect was also observed 72 h after exposure to RedRif (p<0.01 for 15 mM PQ; p=0.0003 for differences between the obtained curves; p=0.0006 for differences between EC50 values; Figure 4d).
A similar assay was performed including the P-gp inhibitor GF120918, to confirm the involvement of P-gp activation in RedRif's protection against PQ cytotoxicity. The results are shown in Figure 5. Simultaneous exposure to RedRif and PQ with GF120918 resulted in a significant increase in PQ cytotoxicity (p<0.001 for 1 mM PQ; p<0.01 for 10 mM PQ and p<0.05 for 15 mM PQ), which led to significant differences between the obtained curves (p=0.0004) and to a significant decrease in PQ EC50 from 4.1 to 2.7 mM (p=0.0267). When comparing PQ-only to (RedRif+PQ+GF120918)-treated cells, no significant differences were observed at any studied PQ concentration. In fact, completely overlapping curves were obtained for both test conditions (Figure 5).

Table 1. RedRif, MeORif, PerAcRif, and Rif conformation rank and binding affinity (docking on the P-gp model). Columns: Ligand; Conformation rank; Binding affinity (kJ·mol⁻¹). Top-ranked RedRif conformation: rank 1, binding affinity -9.9 kJ·mol⁻¹.
RedRif fits in P-gp drug-binding pocket
As P-gp activators bind in the drug-binding pocket formed by the transmembrane domain interface, a docking simulation of RedRif against P-gp was performed using a model built based on Sav1866, an ABC transporter from S. aureus [36]. Docking scores of nine RedRif conformations are described in Table 1, with the binding affinity value of the top-rank conformation being -9.9 kJ·mol⁻¹. Scores of known P-gp activators, used as controls, are available in Table S1. A visual inspection of the activator RedRif on the transmembrane domain was performed (Figure 6). Docking studies indicated that RedRif forms a more stable complex with P-gp (lower free energy) than the other compounds and the known P-gp activators, as shown in Tables 1 and S1, respectively. Furthermore, RedRif has shape, size and stereoelectronic complementarity to the P-gp drug-binding pocket (Figure 6), establishing hydrogen interactions with Serine-349 and Glutamine-990.
Discussion and Conclusions
Rifampicin (Rif) is a bactericidal antibiotic that is mainly used to treat infections caused by Mycobacterium strains, such as tuberculosis [45] and Hansen's disease [46]. Prolonged therapy with Rif generally leads to therapeutic resistance due to activation of PXR, a master transcriptional regulator, which leads to induction of P-gp and CYP3A4 expression [47]. This P-gp-inducing ability of Rif has already been extensively reported [23][24][25][26]. Although Rif has been reported not to activate rat PXR, recent works have shown Rif-induced increases in rat P-gp, both at the mRNA and protein levels [27,28]. On the basis of these facts, and in the course of our search for P-gp-modulating agents, our synthesis group developed three new Rif derivatives: a mono-methoxylated derivative, MeORif; a peracetylated derivative, PerAcRif; and a reduced derivative, RedRif (Figure 1). These compounds are expected to follow the same metabolic pathway of deacetylation as the parent compound. A wide range of concentrations of these compounds was tested (0.1 to 50 µM) in RBE4 cells in order to establish the concentration to be used in subsequent P-gp modulation studies. Their cytotoxicity profiles in RBE4 cells were established on the basis of the MTT reduction and NR uptake assays. Rif, PerAcRif and RedRif were further tested at a concentration of 10 µM and MeORif at a concentration of 5 µM, as viability rates remained above 85% and no significant effects on cell viability were observed (Figures S1 and S2).
Western blot analysis revealed a significant increase in P-gp expression in Rif-treated cells 72 h after exposure without increasing P-gp functionality, which is in accordance with previous findings [28]. This may be explained by the fact that western blot was performed with whole cell lysates (membranes plus cytosol) and the C219 anti-P-gp antibody recognizes an intracellular domain of P-gp. Therefore, total cell P-gp content was quantified, meaning that recognition of the protein does not necessarily indicate that it is integrated in the membrane or in its active form. Another explanation for this observation is the dual effect Rif is known to exert on P-gp, acting as an inducer in long-term exposures and also being a substrate for P-gp [48]. Therefore, despite inducing P-gp expression in our cell model, Rif may compete with Rho 123 to be transported through the efflux pump, thus diminishing the rate of transported Rho 123. Among the tested Rif derivatives, only RedRif led to a significant increase in P-gp expression after 48 and 72 h of contact with cells. Furthermore, RedRif significantly enhanced the Rho 123 efflux ratio at 24 h, corresponding to an increase in P-gp activity. Since no P-gp expression increase was detected at this time-point, these results suggest that RedRif may act as an activator of P-gp efflux activity. On the other hand, at the 48 h time-point, no changes in P-gp activity were observed despite the significant increases in P-gp expression. As already suggested for Rif, this may be due to the quantification of total P-gp content in RBE4 cells, suggesting that the protein is being actively synthesized but may not yet be integrated in the membrane. After 72 h of exposure, the significant increase in P-gp activity observed in RedRif-treated cells was possibly due to the observed increase in P-gp expression at the same time-point. On the basis of these results, RedRif was the only compound to be tested against PQ-induced cytotoxicity.
Because PQ is a known P-gp substrate [19] with extensively documented toxic effects, an increase in the activity of this efflux transporter would decrease intracellular levels of PQ, consequently diminishing PQ-mediated cytotoxicity. RedRif's ability to increase P-gp activity (and expression) was expected to produce such an effect. To study P-gp induction effects, cells were exposed to RedRif for 24, 48 or 72 h before PQ exposure. A simultaneous exposure to RedRif and PQ for 48 h was also performed to evaluate P-gp activation effects. We recently reported that RBE4 cells are highly resistant to PQ toxicity, which implies that all PQ exposures need to last 48 h, the time necessary for PQ to induce a significant toxic effect on RBE4 cells [33]. RedRif significantly protected RBE4 cells against PQ-induced cytotoxicity. This effect was much more significant when simultaneous exposure was performed than in the pre-exposure assays, suggesting that P-gp activation by RedRif may be a more efficient way to prevent the toxicity of P-gp substrates.
In order to confirm P-gp's involvement in the RedRif-induced protective effect, the same assay was performed using RedRif and PQ in the presence and in the absence of the P-gp inhibitor GF120918 in a simultaneous exposure. GF120918 significantly inhibited RedRif's protective effect at the intermediate PQ concentrations (1, 10 and 15 mM). This effect resulted in a significant decrease in PQ EC50 from 4.1 to 2.7 mM, implicating P-gp in RedRif's protective role against PQ cytotoxicity. In fact, the observed effect was totally mediated by P-gp, as no differences were found between a control curve (PQ for 48 h) and the curve obtained in the simultaneous presence of RedRif, PQ and GF120918. Although Shapiro reported long ago the existence of at least two positively cooperative sites for drug binding and transport in P-gp [22,49], a four-binding-site model of P-gp was more recently proposed, supporting the presence of three transport sites and one regulatory site. This last site allosterically alters the conformation of the transport binding sites from low to high affinity, increasing the rate of translocation of substrates [50]. Adaptation and survival mechanisms of living beings have allowed the binding of several xenobiotics to P-gp at the same time [51,52], increasing the transport of each other, not competing but activating the transport cycle [53]. Therefore, the hypothesis of an activation mechanism of action for RedRif was further supported by a docking study. RedRif was docked on the cleft formed by the transmembrane alpha-helices of a P-gp model based on the homologous S. aureus ABC transporter, Sav1866 [36]. A more stable complex was formed between RedRif and the P-gp model than with its analogues and the known P-gp activators (lower free energy), which suggests that RedRif may have a higher affinity for the P-gp binding site than these compounds (Tables 1 and S1). Also, RedRif's shape, size and stereoelectronic complementarity to the P-gp binding pocket allow the establishment of hydrogen interactions with Serine-349 and Glutamine-990. This last residue has already been described as being part of the translocation pathway and being involved in the transport cycle [54]. These results indicate that RedRif has a high probability of interacting with the translocation channel of P-gp, which supports the experimental data. Regarding Rif and the other derivatives, a structure-activity relationship study revealed that peracetylation of Rif increases the steric hindrance and changes the orientation of PerAcRif in the P-gp binding pocket. The C=N double bond next to the piperazine ring (see Figure 1) rigidifies the molecule and sets torsion angles that do not favour the establishment of interactions with the transporter. The binding affinity of the top-rank conformation of PerAcRif to the efflux pump was higher than that of the RedRif-P-gp complex (Δ=-2.7 kJ·mol⁻¹, Table 1). The complex formed between MeORif and the P-gp model also has a higher free energy than the RedRif-macromolecule complex (Δ=-1.5 kJ·mol⁻¹, Table 1). Noteworthy, the pattern of polar interactions is also different, involving distinct residues (compare Figure S3 (middle) and Figure 6, respectively). Although Rif has a slightly higher binding affinity towards P-gp than RedRif (Table 1), the possibility of this compound being a substrate cannot be excluded, as suggested by the authors and by others [48].
On the other hand, the introduction of hydrophobic substituents to the positively charged drug is expected to furnish chemosensitizers, as described elsewhere [55].
In conclusion, RedRif is a new Rif derivative that protects RBE4 cells against PQ-induced cytotoxicity by increasing P-gp expression and activity, consequently leading to an enhancement of PQ efflux. RedRif's activator effect on P-gp activity was confirmed both in silico and experimentally, and seems to be more effective than its induction ability. Therefore, RedRif should be further tested in other cell lines and in vivo to establish its use to efficiently prevent the toxicity of P-gp substrates.

Supporting Information

Figure S1. Rif and RedRif's cytotoxicity profiles assessed by the MTT reduction and the Neutral Red uptake assays. Cytotoxic effect was evaluated 24, 48 and 72 h after exposure to the compound in a concentration range between 0.1 and 50 µM. The compounds were non-cytotoxic up to 10 µM. Results refer to mean ± SD of at least 3 independent experiments. Differences between concentrations were estimated using the Kruskal-Wallis test (one-way ANOVA on ranks) followed by Dunn's multiple comparison post hoc test. **p<0.01; ***p<0.001; ****p<0.0001 vs. control. (TIFF)

Figure S2. PerAcRif and MeORif's cytotoxicity profiles assessed by the MTT reduction and the Neutral Red uptake assays. Cytotoxic effect was evaluated 24, 48 and 72 h after exposure to the compound in a concentration range between 0.1 and 50 µM. PerAcRif remained non-cytotoxic up to 10 µM, while MeORif started significantly diminishing cell viability at 5 µM. Results refer to mean ± SD of at least 3 independent experiments. Differences between concentrations were estimated using the Kruskal-Wallis test (one-way ANOVA on ranks) followed by Dunn's multiple comparison post hoc test. *p<0.05; **p<0.01; ***p<0.001; ****p<0.0001 vs. control.
Terrible triad injury of the elbow: a spectrum of theories
For more than one century, understanding the injury mechanism leading to the terrible triad of the elbow (TTE) was a significant challenge for surgeons. We aimed to summarize: (1) the history of the treatment of TTE and (2) the increasing scientific knowledge that supported its evolution. Five electronic databases were searched between 1920 and 2022. Results were reported as a comprehensive review of the relevant literature. Between 1940 and 1980, surgical exploration allowed observation of complex elbow instability involving radial head, coronoid process, and ligament injuries. In 1966, Osborne introduced the concept of posterolateral rotatory instability as the first injury mechanism proposed to explain complex elbow instability. From 1980 to 1995, a biomechanical revolution led by American pioneers critically improved our understanding of elbow instability. After 1992, a few unifying theories and surgical protocols were provided, but these have divided the surgical community. The formalization of TTE treatment made it possible to avoid terrible short-term outcomes. However, post-traumatic osteoarthritis (PTOA) at long-term follow-up is still an issue. No consensus surgical protocol for the treatment of TTE has been widely accepted. While the outcomes of TTE have improved, the rate of PTOA at long-term follow-up remains high regardless of the treatment. The terrible triad has given way to the subtle triad, with persistent microinstability of the elbow. The next challenge for elbow surgeons is to diagnose and fix this persistent subclinical instability after surgery in order to prevent the onset of PTOA.
Based on his clinical experience, Hotchkiss 10 first defined in 1996 an injury pattern involving an elbow dislocation associated with a radial head (RH) fracture and a coronoid process (CP) fracture. He named it "The Terrible Triad injury of the elbow" because of the poor prognosis, including mainly stiffness and recurrent instability (Fig. 1).
Several theories on the terrible triad of the elbow (TTE) were then published by other surgeons after a better understanding of the biomechanics of elbow joint stability was reached. Based on these theories, they then proposed surgical protocol guidelines. 21 However, there was no widely accepted approach for the treatment of TTE.
We aimed to summarize: (1) the history of the treatment of TTE and (2) the increasing scientific knowledge that supported its evolution.
Methods
An electronic literature search was carried out using PubMed, Scopus, Medline, EMBASE, and the Medical Subject Headings. The search was limited to English-language literature. The terms «arthroplasty», «elbow dislocation», «prosthesis» and «radial head» were used in various combinations with "AND" and "OR" to assist in the review. The reference list of each article was also searched in order to identify additional articles pertinent to our research criteria. References from the existing literature were also queried because of the limited historical time frame inherent in these search engines. Results were reported as a comprehensive review of the relevant literature from 1920 to 2022.
Surgical exploration (1940-1980)
In the early 20th century, closed reduction with or without RH resection in cases of unreconstructible fractures was the main treatment of TTE. 2 According to the most popular theories, ligament laxity and/or incongruity between the trochlear notch (ie, depth) and coronoid (ie, height) were the 2 main causes of persistent elbow instability after dislocation. 23 The goal of the initial surgical treatment was to prevent the ulna from disengaging from the trochlea by increasing the stability of the CP (eg, transfer of the biceps tendon to the CP, CP bone block augmentation, and direct anterior capsular repair). 23 These surgical procedures, derived from the Bankart repair for shoulder instability, led to an incidence of 38%-63% of post-traumatic osteoarthritis (PTOA). 14 Heterotopic ossifications of the elbow and/or wrist pain were also frequently observed after elbow dislocation regardless of these treatments. In order to prevent these complications, Speed 28 proposed in 1941 the first RH implant, consisting of a Vitallium cap placed over the radial neck. Ten years later, Essex-Lopresti et al 4 described a series of cases of forearm instability after RH resection. The same year, Essex-Lopresti as well as Carr and Howard 1 showed that RH replacement by an implant was required in cases of RH fracture and radio-ulnar joint injury to maintain elbow stability and prevent painful wrist instability.
The medial collateral ligament (MCL), along with its anterior, posterior, and transverse bands, was first described by Gutierrez et al 6 in 1964. In this article, the authors showed that the anterior medial collateral ligament (aMCL) limited the angular opening of the humero-ulnar joint. They, therefore, speculated that the aMCL could be the primary contributor to elbow stability in valgus.
In 1966, an injury pattern involving elbow dislocation, lateral collateral ligament complex (LCLC) injury, and fracture of the RH was described. 23 Based on a literature review, Osborne and Cotterill provided the first unifying theory to explain simple and complex elbow dislocations. They speculated that the force of a fall on an outstretched hand and incompletely extended elbow might induce an impaction of the coronoid against the trochlea and potentially a fracture of the CP. This vertical thrust was also converted into lateral rotation and valgus strains of the ulna by the laterally sloping surface of the medial part of the trochlea. A posterolateral disengagement of the CP and the RH could therefore be observed due to the induced posterolateral (PL) rotation movement of the ulna. The dislocation of the RH behind the capitulum induced stripping of the LCLC and a tear of the PL capsule. Osborne and Cotterill 23 and then Hassman et al 7 in 1975 showed that the postero-lateral rotatory instability (PLRI) of the elbow after TTE could be treated by repairing the LCLC. Similarly, repair of the aMCL was recommended only in cases of persistent valgus instability after repair of the LCLC. 23

A biomechanical revolution from America (1980-1995)

The increase in biomechanical data discovered from 1980 to 1995 critically improved our understanding of elbow instability.
In 1981, Tullos et al 29 set out the 3 primary static constraints of the elbow to valgus stress: the aMCL, RH, and CP. In 1989, Morrey and Regan 25 developed a new classification describing the extent to which the height of the coronoid contributed to elbow stability. The classification was based on 3 fracture sizes of the CP: tip avulsion (type I), <50% (type II), and >50% (type III). For type-III CP fractures, the authors showed an increased risk of recurrent elbow dislocation and always recommended reducing and fixing them regardless of the associated ligamentous and bony injuries.
In 1987, Josefsson et al 15 published a series of 31 simple elbow dislocations (n = 31) with 100% MCL rupture. Between 1987 and 1991, Hotchkiss 11 and Morrey et al 20 published additional data suggesting that the aMCL was the most critical stabilizer of the elbow throughout the flexion-extension arc. Morrey et al 19 also showed that its contribution to elbow stability increased in cases where the RH was resected. Conversely, in the case of a competent MCL, the comminuted RH could be excised without a significantly increased risk of altered elbow biomechanics. 15 Therefore, the authors supported that an RH arthroplasty was indicated to prevent gross valgus instability in cases of TTE involving both an unreconstructible RH fracture and aMCL deficiency. Based on this new biomechanical knowledge, the authors reclassified the RH as a secondary constraint and considered the LCLC, the aMCL, and the humero-ulnar joint as primary stabilizers of the elbow under valgus strain. However, all biomechanical studies analyzed the stability of the elbow under combined rotatory strains and motions, which did not allow identification of the contributor to the PLRI.
In 1992, O'Driscoll et al 21 showed that a rupture of the lateral collateral ulnar ligament (LCUL) could induce an elbow dislocation regardless of the status of the aMCL. They 21 provided biomechanical results confirming the mechanism speculated by Osborne and Cotterill. 23 The authors described a transient rotatory subluxation of the ulno-humeral joint after a rupture of the LCUL under a combination of an axial load, a valgus (15°) stress, a moderate internal rotation or a hyper-supination (40°) of the forearm, and a slight flexion of the elbow. 21

Unifying theories and splitting surgeons' population (1992-2020)

Based on his clinical experience, Hotchkiss 10 first defined in 1996 the TTE as an injury pattern involving an elbow dislocation associated with an RH fracture and a CP fracture. The term "terrible" referred to poor treatment outcomes, including stiffness, recurrent instability, and critical arthritis (Fig. 1).
Spectrum of unifying theories
As a result of the inability to explain the pathoanatomic features of the TTE, some authors speculated about sequential injury to unify clinical findings with the posterolateral rotatory instability mechanism observed in biomechanical studies. In 1992, O'Driscoll et al 21 described the Horii circle, illustrating the sequential soft tissue and bony injuries along a spectrum of elbow instability. This circle starts from the lateral structures of the elbow and progresses in 3 stages to the medial structures. Stage 1 corresponds to the PLRI caused by rupture of the LCUL. Stage 2 corresponds to the perched elbow, related to rupture of the LCL complex associated with injury of the elbow capsule (anterior and/or posterior). Stage 3 concerns the dislocated elbow, caused by disruption of the previous structures and the posterior (a) and then anterior (b) band of the MCL. The Horii circle described by O'Driscoll et al stated that TTE could occur regardless of the status of the aMCL (stage 3a). The authors speculated that the RH and coronoid fractures dissipated the energy of impact and stopped the progression of the circle before the elbow dislocation. Therefore, they supported that the stability of the elbow could be restored with stabilization of the radiocapitellar joint (ie, LCL, RH, and CP repair). In 1998, Ring and Jupiter 26 used a 4-column linkage theory (anterior, posterior, lateral, and medial columns) that has been compared to a ring to illustrate the restraints of the elbow. The stability of the elbow is guaranteed by the integrity of the different elements of the ring. Similarly to pelvic ring injuries, stability is restored if both injured columns are repaired.
Despite this growing biomechanical data and the related theories, a high incidence of TTE with MCL injury was reported in clinical studies. 27 In 2012, a new unifying theory proposed a new circle of sequential injury starting from the medial side, corresponding to a "reverse Horii circle". Rhyou et al 26 speculated that axial compression and valgus stress were the primary loads, while hypersupination of the forearm was a secondary load. The primary load could cause the rupture of the MCL, which could induce, in combination with a hypersupination moment, a lateral translation strain on the ulna under the trochlea until the PL dislocation of the elbow, with potentially associated fractures of the RH and CP. The authors speculated that the LCUL was stripped when the RH abutted against the posterior aspect of the capitulum. Thus, the initial lesions started medially and ended laterally.
In 2018, Luokkala et al 17 added a second circle to the reverse Horii circle, equivalent to a spiral around the elbow. The authors considered the tendons to be less stiff than the ligaments and speculated that the medial and then lateral ligament complexes would fail before the common flexor and then extensor mass origins.
Surgery: protocols and disagreements
In 2005, McKee et al 18 published the first surgical protocol for TTE. They recommended systematic repair of the CP, RH, and LCL injuries via a lateral approach. Based on the sequential injury pattern of the Horii circle of O'Driscoll, this protocol reduced the rate of persistent instability and led to satisfactory outcomes at short- to mid-term follow-ups. 24 However, recent biomechanical studies have changed the indications for fixation of CP fractures in TTE. 8 A new classification system for CP fractures described 3 types of fractures based on both their amount and anatomic location: tip (I), anteromedial (II), and base (III). 22 In 2012, Adams et al added the mid-transverse (50% of the CP height) and anterolateral fractures of the CP. According to Doornberg et al, 3 the tip and mid-transverse fractures of the coronoid (<50% of CP height) represented 97% of TTE. In 2012, Jeon et al 12 showed that fixation of all CP fractures was not necessary if the fracture involved less than 50% of the coronoid and if the LCL and RH were fixed or intact. In the same years, Hartzler et al 8 confirmed the lesser impact of fixation of mid-transverse fractures on valgus and external rotation laxity. However, the authors showed a significant increase in stability in varus and internal rotation after CP fixation, regardless of the status of the RH. Therefore, the authors recommended fixing CP fractures according to the varus stress test and/or the height and location of the CP fractures on the computed tomography scan.
However, the prevalence of PTOA is still elevated at mid- (11.2% at 3 years) and long-term (66% at 9 years) follow-ups regardless of the surgical treatment of TTE. 9 The high rate of PTOA may be due to the initial cartilage lesions following the trauma. 9 However, for Jung et al, 16 the rate of PTOA after TTE was significantly higher if the MCL was not repaired. Eygendaal et al 5 showed an increased medial space opening under valgus stress in elbows with a rupture of the MCL positioned at 90° of flexion. This instability cannot be detected by the usual stability tests of the elbow and could be equivalent to subclinical instability or micro-instability. 30 In 2010, Jeong et al 13 showed that MCL repair in TTE prevented the onset of moderate or severe PTOA in 13 patients. However, the literature does not provide any data about the long-term radiographic outcomes of TTE according to MCL status. Further studies are necessary to assess the risk factors for PTOA after TTE in the long term.
Lesson learned
Since McKee's protocol, the consensus has been to first stabilize the radiocapitellar joint by addressing the RH and LCUL injuries. Fixation of the coronoid fracture then depends on the residual stability after the lateral complex fixation.
The formalization of TTE treatment prevents gross instability and avoids terrible short- to mid-term outcomes. However, PTOA at long-term follow-up is still an issue despite the improvement of our biomechanical knowledge and surgical protocols. Recent data suggest that a deficient aMCL and CP fracture (ie, mid-transverse and tip) could induce a subclinical valgus or varus instability. Thus, it seems that the aMCL and the coronoid should be addressed to prevent long-term complications in younger or athletic patient populations (Fig. 2).
Conclusions
In conclusion, no consensus surgical protocol for the treatment of TTE has been provided in the literature. While the outcomes of TTE have improved, the rate of PTOA at long-term follow-up is still high regardless of the treatment. The terrible triad has given way to the subtle triad, with persistent microinstability of the elbow. Based on this historical review, we argue that the next challenge for elbow surgeons is to diagnose and fix this persistent subclinical instability after surgery for TTE in order to prevent the onset of PTOA.
Figure 1. Terrible triad injury of the elbow: an injury pattern. Since 1996, the TTE has been defined by RH and CP fractures associated with elbow dislocation (light blue circle). The growing knowledge of the biomechanics of elbow stability showed the involvement of the LCUL and aMCL, and/or the common flexor-pronator and extensor tendon origins, in TTE (blue circle). TTE, terrible triad of the elbow; LCUL, lateral collateral ulnar ligament; RH, radial head; CP, coronoid process; aMCL, anterior medial collateral ligament.
Figure 2. Proposed treatment algorithm for terrible triad injuries of the elbow. RH, radial head; LCL, lateral collateral ligament.
Use of Software Tools for Real-time Monitoring of Learning Processes : Application to Compilers subject
The effective implementation of the European Higher Education Area has meant a change regarding the focus of the learning process, with the student now at its very center. This shift of focus requires strong involvement and fluent communication between teachers and students to succeed. Considering the difficulties associated with motivating students to take a more active role in the learning process, we explore how the use of a software tool can help both actors to improve the learning experience. We present a tool that helps students obtain instantaneous feedback with respect to their progress in the subject, as well as providing teachers with useful information about the evolution of knowledge acquisition with respect to each of the subject areas. We compare the performance achieved by students in two academic years: results show an improvement in overall performance which, after observing the graphs provided by our tool, can be associated with an increase in students' interest in the subject.
Introduction and Motivation
During the last decade, and thanks to the implementation of the European Higher Education Area, there has been a shift in the focus of the learning process, placing the student at its center. As part of this change, some actions have been undertaken, such as decreasing the number of students in each class to ease student-teacher interaction, as well as changing the balance between theoretical and practical sessions. As a result of these changes, teachers have also had to adapt the way students are evaluated towards a more continuous evaluation.
The continuous proposal and evaluation of learning activities is a highly time-consuming task, which also requires strong student motivation to take part in the different proposed activities. As students have to divide their efforts among the different subjects, it is difficult to persuade them to work continuously on each of them, as they tend to focus on the most immediate assignment deadline. This, along with the low attendance at classroom activities, Bukoye (2017), makes it difficult for teachers to have continuous information about the evolution of the learning process. This only allows teachers to correct potential knowledge gaps at specific moments in the semester, mainly as a result of evaluation activities.
To overcome this, some alternatives have been proposed, such as rewarding students for their attendance at classroom activities Bukoye (2017), involving students in the evaluation Valero (2010) Conde (2017) Harland (2017) or, more recently, including gamification in the learning process Kapp (2012) Su (2015) Mauricio (2017).
As a use-case, we show how we have adapted the subject we teach to the new learning process focus Valero (2010) García-Peñalvo (2014). Our subject is part of a Computer Science degree and requires students to learn the basics of compiler construction. One big part of the subject requires students to build their own compiler; this task is supported by explanations during theoretical and seminar activities. As a result of our experience over the years, we have observed the following problems associated with the practical part of the subject: 1) low student attendance and performance, and 2) a big performance gap between the practicum exam and practicum assignments. Students work in pairs and are evaluated individually del Canto (2015) at the end of the semester to verify that each of them has actually taken part in the practicum assignment. We associate differences between assignment and exam marks with individual students taking charge of a group assignment, excessive help among students, and practicum copying, as all students had the same assignment. These issues might come as a result of low student motivation in the subject, which can be caused by the appearance of difficulties in the learning process that the student is not able to solve and which, as they are not known by the teacher, are difficult to address.
We study in this paper the role that a software tool can have in supporting the students' learning process. The proposed tool incorporates evaluation and monitoring capabilities so the teacher can know in real time, at a glance, the level of assessment of the different concepts without requiring additional information from students. We study the benefits associated with the use of the tool by comparing students' performance over two consecutive academic years.
Learning Process Monitoring Tool
We present in this section our learning process support tool. To ease readers' understanding, we use as an example a real assignment from our subject. The task students have to undertake is to add new functionalities to a basic compiler.
Assignment Preparation
At the beginning of the semester, the teachers define the different additional functionalities that will be incorporated into the compiler. For each functionality, several variations are explored, aiming to cover all the different scenarios that the compiler might face (an example is shown in Fig. 1). Each variation is given a difficulty score by the teachers, as a result of both personal experience and student observation during the previous academic year. To assign the functionality variations each student has to work with, we use the assignment preparation tool. This tool works under the following rules: 1) all assignments have to be different, 2) all assignments should have a similar difficulty, and 3) each assignment should have one variation from each functionality. This is achieved by the use of a backtracking algorithm Priestley (1994), as sketched below. The teacher can incorporate additional constraints, such as imposing a compulsory functionality variation to tackle. The use of this tool naturally prevents students from copying, as none of them has the same set of functionality variations to add.
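A minimal sketch of such a backtracking generator follows. The functionality catalog, difficulty scores, and tolerance are invented for illustration; the real tool also supports the extra teacher-imposed constraints mentioned above.

```python
# A backtracking sketch: every assignment is distinct, takes one variation
# per functionality, and keeps its total difficulty close to a target.
# CATALOG and the scores are hypothetical.
from typing import Dict, List, Optional, Tuple

# functionality -> {variation name: difficulty score}
CATALOG: Dict[str, Dict[str, int]] = {
    "Parameters":     {"by-value": 1, "by-reference": 2, "defaults": 3},
    "Operators":      {"power": 2, "modulo": 1, "comparison": 2},
    "Initialization": {"scalars": 2, "arrays": 3, "objects": 4},
}
TARGET, TOLERANCE = 7, 1  # total difficulty must fall within [6, 8]

Assignment = Tuple[str, ...]  # one variation per functionality, in order

def generate(n_groups: int) -> Optional[List[Assignment]]:
    funcs = list(CATALOG)
    assigned: List[Assignment] = []

    def build(partial: List[str], total: int) -> Optional[Assignment]:
        if len(partial) == len(funcs):
            cand = tuple(partial)
            # rule 1: all assignments different; rule 2: similar difficulty
            # (both checked at the leaf for simplicity)
            if cand not in assigned and abs(total - TARGET) <= TOLERANCE:
                return cand
            return None
        func = funcs[len(partial)]
        for variation, score in CATALOG[func].items():
            found = build(partial + [variation], total + score)
            if found is not None:
                return found
        return None  # dead end: backtrack to the previous functionality

    for _ in range(n_groups):
        assignment = build([], 0)
        if assignment is None:
            return None  # no feasible distinct assignment remains
        assigned.append(assignment)
    return assigned

for i, a in enumerate(generate(5) or [], start=1):
    print(f"Group {i}: {a}")
```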
Assignment Evaluation
In order to assess that these functionalities have been correctly incorporated by students, two different types of tests are designed. Public tests aim to assess whether the student has acquired the basic knowledge of the subject, whereas private ones explore whether students have gone beyond the minimum requirements in order to build a more robust solution. Private tests do not require additional theoretical explanations, but a careful thought about the solution that is being prepared. As an example, a public test will check whether the power operation between integers provides the expected results, whereas a private one will explore whether the combination of some variable types is allowed (i.e., the compiler should not allow the power between an integer and a character). The content of the public tests is known by the students in advance, and they should all be overcome in order to pass the subject. Private tests are not known by the students, and they are used to modulate the mark between 5 and 10.
Students can upload their solution to the assignment using a dedicated website. Every time a new delivery is uploaded, the assignment evaluation tool checks whether the tests associated with each of the student-specific functionality variations are overcome. This tool provides instantaneous feedback to the student by generating a report summarizing the level of assessment of the different proposed tasks. This is an evolution over what was done in previous years, when students had to ask the teacher to test their solution, which could delay feedback and limited the information available about the progress of the learning activities.
For the case of private tests, we only inform students about the percentage of private tests that have been overcome, as a way to encourage them to try harder in order to achieve the maximum mark, inspired by gamification theories. By doing this, we aim to transform knowledge acquisition into a discovery experience that can motivate students to gain interest in the subject, as they are 'battling' against the unknown.
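The feedback policy just described (public tests reported one by one, private tests only as an aggregate percentage) could be sketched as follows; the test-runner interface and all names are hypothetical.

```python
# A sketch of the delivery report: callables take a path to the delivery
# and return pass/fail. Everything here is illustrative.
from typing import Callable, Dict

def evaluate(delivery_path: str,
             public_tests: Dict[str, Callable[[str], bool]],
             private_tests: Dict[str, Callable[[str], bool]]) -> str:
    lines = ["=== Delivery report ==="]
    passed_public = 0
    for name, test in sorted(public_tests.items()):
        ok = test(delivery_path)
        passed_public += ok
        lines.append(f"[public] {name}: {'PASS' if ok else 'FAIL'}")
    lines.append(f"Public tests passed: {passed_public}/{len(public_tests)}")
    # Private tests: only the aggregate percentage is revealed, so students
    # keep 'battling against the unknown' to reach the maximum mark.
    passed_private = sum(t(delivery_path) for t in private_tests.values())
    pct = 100.0 * passed_private / max(len(private_tests), 1)
    lines.append(f"[private] {pct:.0f}% of hidden tests overcome")
    return "\n".join(lines)

# Illustrative usage with stub tests
public = {"int_power_result": lambda path: True}
private = {"int_power_char_rejected": lambda path: False}
print(evaluate("group07/compiler.zip", public, private))
```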
Performance monitoring
As students upload the solutions to their assignment, the performance monitoring tool also generates a text file that, conveniently processed by common software tools, can provide teachers with useful information about the evolution of the learning process. For instance, we can easily obtain: 1) the number of deliveries and its evolution over time per group or class, and 2) the percentage of functionalities, variations and tests that have been overcome by each group or class, and its time evolution. With this information, teachers can gauge the level of interest of a group (number of deliveries) or the difficulties associated with specific functionalities (a low number of tests overcome with respect to the number of deliveries). As the system allows teachers to have this information in real time, learning actions can be implemented to solve knowledge gaps during theoretical and seminar activities and, by this, improve the level of students' assessment of the different key concepts.
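As an illustration of how such a text file might be aggregated into the statistics listed above, the following sketch assumes a simple CSV layout (date, group, functionality, tests_passed, tests_total); the tool's actual file format is not specified here, so the column names are assumptions.

```python
# A sketch of processing the generated log into per-day delivery counts and
# per-functionality pass rates. The CSV column layout is hypothetical.
import csv
from collections import defaultdict

def summarize(log_path: str):
    deliveries_per_day = defaultdict(int)   # date -> number of deliveries
    effort = defaultdict(lambda: [0, 0])    # functionality -> [passed, total]
    with open(log_path, newline="") as fh:
        for row in csv.DictReader(fh):
            deliveries_per_day[row["date"]] += 1
            counters = effort[row["functionality"]]
            counters[0] += int(row["tests_passed"])
            counters[1] += int(row["tests_total"])
    for func, (passed, total) in sorted(effort.items()):
        pct = 100.0 * passed / max(total, 1)
        print(f"{func}: {pct:.0f}% of tests overcome")
    return deliveries_per_day, effort
```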
Results
We show in Table 1 a comparison of students' performance in two consecutive years: 2015-2016 and 2016-2017. During the latter, the learning process monitoring tool was used. The difficulty level of the practicum was equivalent. The main result of this study is a general increase in student performance, which is especially observed with respect to the practicum exam, where the percentage of students that surpass the minimum mark almost doubled. This improvement in the practicum marks is also reflected in the ratio of students that pass the subject, doubled from the previous year. We associate the improvements in students' performance with them being more engaged with the subject. In order to check the validity of this conclusion, we present next some graphs extracted by the use of our performance monitoring tool.
Fig. 2 shows the evolution in the mean number of deliveries per group and the mean mark during the period the assignment is active. We can observe how students interact continuously with the tool (though peaks can be observed coinciding with practicum classroom activities). We can also see how the majority of the students achieve the assigned task before the delivery date and how, as a result of their interest in the subject, they keep improving their solution, which results in an increase in the final mean mark. Our tool also allows teachers to observe which of the tasks presented more difficulties to students. Fig. 3 shows the dependence between the functionality and the number of times the students have tried to overcome the different tests associated with it. We can observe how some functionalities (Parameters, Operators) needed less effort than others, especially Initialization. This information can be used to reinforce the theoretical explanation of some concepts to reduce the effort needed. Our tool also allows us to observe in detail the performance related to each functionality. Fig. 4 shows how a very reduced number of students overcome Object recursion, indicating an area in which to apply a learning action. Finally, Fig. 5 shows the global results obtained by the students in all the different functionality variations that were studied. This graph allows us to determine which of them were easier for the students (higher percentage of overall success and higher percentage of hidden tests overcome) as well as to observe which of the sub-functionalities required more effort by the students to obtain the minimum required mark. This information can be used to prepare the assignments for a new academic year, as it gives teachers powerful data to better balance the assignments.
Conclusions and future work
Keeping a high level of student interest in a given subject is key to a positive result of the learning process. In this paper we have proposed a software tool to observe the students' learning process. Our tool incorporates assignment preparation and evaluation as well as monitoring capabilities.
Our tool allows students to have immediate feedback on their performance and also allows teachers to have real-time information about students' progress. This information can be used to correct knowledge gaps during the present course or to plan improvements in the learning activities of the subject for a later year.
A comparison study between two academic years shows promising results associated with the use of the monitoring tool, which suggests that the improvement in overall performance can be associated with an increase in students' interest.
Future work should consist of incorporating a control panel which allows the teacher to have direct access to student-specific graphs. We also plan to generate student-specific reports indicating areas in which they have to improve, as well as suggesting supporting material to study. Finally, we would like to study the use of mobile applications, either by adapting the ones we propose or using already existing ones such as Kahoot Wang (2016) or Plickers Wood (2017), to obtain real-time information on students' learning assessment in order to extend our proposal to lecture classroom activities.
Figure 1. Examples of how variations are defined from a single functionality.
Figure 2. Evolution of the number of deliveries and mean mark over the assignment period.
Figure 3. Effort associated with each of the functionalities proposed to the students.
Figure 4. Effort associated with each of the private tests associated with the Initialization functionality.
Figure 5. Effort associated with each of the functionality variations proposed to students.
Correlation between spasticity and corticospinal/corticoreticular tract status in stroke patients after early stage
We investigated the correlation between spasticity and the states of the corticospinal tract (CST) and corticoreticular tract (CRT) in stroke patients after the early stage. Thirty-eight stroke patients and 26 healthy control subjects were recruited. The modified Ashworth scale (MAS) after the early stage (more than 1 month after onset) was used to determine the spasticity state of the stroke patients. Fractional anisotropy (FA), apparent diffusion coefficient (ADC), fiber number (FN), and ipsilesional/contra-lesional ratios for diffusion tensor tractography (DTT) parameters of the CST and CRT after the early stage were measured in both the ipsi- and contra-lesional hemispheres. This study was conducted retrospectively. The FA and FN CST-ratios in the patient group were significantly lower than those of the control group (P < .05), whereas the ADC CST-ratio was not (P > .05). Regarding the DTT parameters of the CRT-ratio, the patient group FN value was significantly lower than that of the control group (P < .05), whereas the FA and ADC CRT-ratios did not show significant differences between the patient and control groups (P > .05). MAS scores showed a strong positive correlation with the ADC CRT-ratio (P < .05) and a moderate negative correlation with the FN CRT-ratio (P < .05). We observed that the injury severities of the CST and CRT were related to spasticity severity in chronic stroke patients; moreover, compared to the CST, CRT status was more closely related to spasticity severity.
Introduction
Spasticity is defined as a velocity-dependent increase in muscle tone characterized by a hyperactive stretch reflex following central nervous system injury. [1,2] Spasticity occurs in up to 65% of stroke patients and is closely related to poor motor function, including loss of dexterity, muscle weakness, contracture, and pain. [3][4][5][6][7][8][9] Several mechanisms involved in the pathophysiology of spasticity have been suggested, including abnormal sensory processing (changes in the balance of excitatory and inhibitory inputs) in the intraspinal network, decreased post-activation depression (a phenomenon that controls the excitability of the stretch reflex), muscle immobilization at short lengths (which leads to muscle contracture and contributes to hypertonia), and abnormal supraspinal influences due to injury of neural tracts such as the corticospinal tract (CST), corticoreticulospinal tract, and vestibulospinal tract (VST). [10][11][12][13][14] However, to date, the pathophysiologic mechanism of spasticity has not been fully elucidated.
Among various descending motor pathways, the CST, the corticoreticulospinal tract comprising the corticoreticular tract (CRT) and the medial and lateral reticulospinal tracts (RSTs), and the VST have been suggested to be associated with spasticity. [10][11][12][13][14][15] Among these neural tracts, the CST and the CRT with the lateral RST are reported to provide inhibitory inputs to the intraspinal network, acting as a supraspinal inhibitory system. [10][11][12][13][14][15] In contrast, the medial RST and VST provide excitatory inputs to the intraspinal network, functioning as a supraspinal excitatory system. [10][11][12][13][14][15] Previous studies have detected correlations between spasticity and the neural tracts originating from the brainstem, such as the RST and VST, by using neurophysiological methods. [14,[16][17][18][19] However, little is known about the correlation between spasticity and other neural tracts for motor function, such as the CST and CRT. [14] The introduction of diffusion tensor tractography (DTT), which is derived from diffusion tensor imaging (DTI), has enabled 3-dimensional reconstruction and estimation of various neural tracts, including the CST, CRT, and VST. [20][21][22][23] Consequently, several studies have reported on CST, CRT, and VST injuries in stroke patients with impaired motor function. [24][25][26][27][28][29][30][31] However, no DTT-based study on the correlation between spasticity and the above neural tracts in stroke patients has been reported. Among the above-mentioned neural tracts, previous studies have reported that the VST has a minor role in spasticity. [10][11][12][13] In this study, we hypothesized that spasticity in stroke patients could be related to CST and CRT injuries.
In the current study, by using DTT, we investigated the correlation between spasticity and the states of the CST and CRT in stroke patients after the early stage.
Subjects
In this study, the stroke patients were recruited according to the following inclusion criteria: first-ever stroke; age at the onset of stroke of 20 to 69 years; spontaneous intracerebral hemorrhage or cerebral infarction confined to a unilateral hemisphere, as confirmed by a neuroradiologist; DTI scan performed after the early stage (more than 1 month after onset); spasticity in the contra-lesional extremity (modified Ashworth scale [MAS] score after the early stage ≥ 1 [32,33]); and no history of neurologic/psychiatric disease or head trauma. MAS scores, used to determine the spasticity state of the stroke patients, were obtained at the time of DTI scanning. [32,33] This study was conducted retrospectively, and all patients and control subjects provided written informed consent. The study protocol was approved by the institutional review board of a university hospital (IRB number: YUMC 2021-03-014).
DTI and tractography
The DTI data were acquired at an average of 10.53 ± 10.90 months after stroke onset using a 1.5 T Philips Gyroscan Intera system (Philips, Ltd, Best, Netherlands) equipped with a Synergy-L Sensitivity Encoding (SENSE) head coil and using a single-shot, spin-echo planar imaging pulse sequence. For each of 32 non-collinear diffusion-sensitizing gradients, 60 contiguous slices were acquired parallel to the anterior commissure-posterior commissure line. Imaging parameters were as follows: acquisition matrix = 96 × 96, reconstructed matrix = 192 × 192, field of view = 240 mm × 240 mm, TR = 10,398 ms, TE = 72 ms, parallel imaging reduction factor (SENSE factor) = 2, EPI factor = 59, b = 1000 s/mm², NEX = 1, slice thickness = 2.5 mm. Fiber tracking was performed by applying the fiber assignment continuous tracking algorithm implemented within DTI task card software (Philips Extended MR WorkSpace 2.6.3). Each DTI replication was intra-registered to the baseline "b0" images to correct for residual eddy-current image distortions and head motion effects by using a diffusion registration package (Philips Medical Systems). The CST was reconstructed using fibers passing through 2 regions of interest (ROIs) on the DTI color map. The seed ROI was placed at the upper pons, and the target ROI was placed at the mid pons. [23] For analysis of the CRT, the seed ROI was placed on the reticular formation of the medulla, and the target ROI was placed on the midbrain tegmentum. [21] Termination criteria used for fiber tracking were fractional anisotropy (FA) < 0.15 and angle < 27°. [34] The FA, apparent diffusion coefficient (ADC), and fiber number (FN) values for the CST and CRT after the early stage were measured in both the ipsilesional and contra-lesional hemispheres. Subsequently, ipsilesional/contra-lesional ratios of the CST and CRT for each of the DTT parameters (FA, ADC, and FN) after the early stage were calculated and are presented as CST- and CRT-ratios, respectively (Fig. 1).
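The ipsilesional/contra-lesional ratio computation described above is simple to express in code. The following is a minimal sketch, assuming per-hemisphere FA, ADC, and FN values have already been exported from the tractography software; the variable names and values are illustrative, not study data.

```python
# Minimal sketch of the ipsilesional/contra-lesional ratio computation
# described above, for the FA, ADC, and FN parameters of one tract.

def dtt_ratios(ipsi, contra):
    """Return ipsi/contra ratios for each DTT parameter."""
    return {param: ipsi[param] / contra[param] for param in ("FA", "ADC", "FN")}

# Illustrative values for one patient's CRT (not study data):
crt_ipsi = {"FA": 0.42, "ADC": 0.95e-3, "FN": 310}
crt_contra = {"FA": 0.48, "ADC": 0.88e-3, "FN": 520}
print(dtt_ratios(crt_ipsi, crt_contra))
# Ratios < 1 for FA/FN and > 1 for ADC suggest greater ipsilesional injury.
```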
Statistical analysis
SPSS software (version 21.0, SPSS Inc., Chicago, IL) was used for data analysis. The chi-squared test was used to assess a sex-based difference between the patient and control groups. Independent t tests were used to examine the age distribution difference and the differences in the DTT parameter CST- and CRT-ratios between the patient and control groups. The level of statistical significance was set at P < .05. The MAS scale category 1+ was recoded as a score of 1.5 for statistical analysis purposes. Spearman rank correlation coefficients were used to examine correlations between MAS score and the CST- and CRT-ratios for each of the DTT parameters. A correlation coefficient (R-value) was interpreted as strong when >0.50, as moderate when between 0.30 and 0.49, and as weak when between 0.10 and 0.29. [35]
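As a minimal sketch of this correlation analysis (with SciPy standing in for SPSS; the MAS scores and ratio values below are invented for illustration):

```python
# Sketch of the Spearman correlation analysis described above.
from scipy.stats import spearmanr

def mas_to_numeric(score):
    # The MAS category "1+" is recoded as 1.5 for statistical purposes.
    return 1.5 if score == "1+" else float(score)

mas = [mas_to_numeric(s) for s in ["1", "1+", "2", "3", "1", "2"]]
adc_crt_ratio = [1.02, 1.08, 1.15, 1.22, 1.05, 1.18]  # illustrative values

rho, p = spearmanr(mas, adc_crt_ratio)

# Interpretation thresholds used in this paper:
magnitude = abs(rho)
strength = ("strong" if magnitude > 0.50
            else "moderate" if magnitude >= 0.30
            else "weak")
print(f"rho = {rho:.3f} ({strength}), P = {p:.3f}")
```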
Results
Thirty-eight stroke patients (23 males, 15 females; mean age 52.61 ± 12.18 years; age range 24-69 years; 19 intracerebral hemorrhage and 19 cerebral infarction) and 26 age- and sex-matched healthy control subjects (15 males, 11 females; mean age 49.00 ± 13.00 years; age range 26-77 years) with no history of neurologic/psychiatric disease or head trauma were recruited. Demographic data for the patient and control groups are summarized in Table 1. No significant differences in age or sex distributions were observed between the patient and control groups (P > .05).
The CST- and CRT-ratios for each DTT parameter in the patient and control groups are summarized in Table 2. The FA and FN CST-ratios of the patient group were significantly lower than those of the control group (P < .05). However, there was no significant difference between the 2 groups in the ADC CST-ratio (P > .05). Regarding the CRT, the FN CRT-ratio of the patient group was significantly lower than that of the control group (P < .05), whereas the FA and ADC CRT-ratios were not significantly different between the patient and control groups (P > .05).
Correlations between MAS score and the CST- and CRT-ratios for the DTT parameters in the patient group are summarized in Table 3. The MAS score showed a strong positive correlation with the ADC CRT-ratio (R = 0.574, P < .05) and a moderate negative correlation with the FN CRT-ratio (R = −0.356, P < .05). [35]
Discussion
In the present study, by examining the correlation of MAS score with the DTT parameter ratios for the CST and CRT, we investigated the correlation between spasticity and the CST and CRT states in stroke patients after the early stage. Our results are summarized as follows. First, our CST-ratio examination showed that the patient group had lower FA and FN values than those of the control group. Second, the CRT-ratio assessment showed that the FN value in the patient group was lower than that in the control group. Third, the MAS score had a strong positive correlation with the ADC CRT-ratio and a moderate negative correlation with the FN CRT-ratio.
Among the various DTT parameters that can be examined, FA, ADC, and FN values are most commonly used when evaluating the state of neural tracts in patients with brain injury. [20,36,37] The FA parameter, which indicates the degree of directionality of water diffusion, is used to assess the degree of tract directionality, whereas the ADC parameter indicates the magnitude of water diffusion. [20,36,37] Therefore, the FA and ADC parameters may be used to indicate the integrity status of white matter microstructures, such as axons, myelin, and microtubules. [20,36,37] In contrast, the FN value indicates the number of voxels in a neural tract, suggestive of the total number of fibers in the tract. Therefore, decrements in the FA and FN values and an increment in the ADC value of a neural tract indicate neural tract injury. [20,36,37] In addition, DTT parameters presented as ipsi-/contra-lesional ratios of a neural tract reflect the degree of asymmetry between the ipsi- and contra-lesional tracts. Therefore, relatively large decrements in FA and FN values and an increment in ADC value in the bilateral ratios for a neural tract indicate greater injury in the ipsilesional neural tract than in the contra-lesional tract. Consequently, the greater decrements in the FA and/or FN CST- and CRT-ratios in the patient group than in the control group indicate that the stroke patients exhibited more severe injuries in the ipsilesional CST and CRT than in the contra-lesional tracts.
The relationship between MAS scores and the DTT parameter CST- and CRT-ratios in the patient group showed that the DTT parameter CST-ratios had no correlation with the MAS score, whereas the ADC CRT-ratio was strongly positively correlated, and the FN CRT-ratio moderately negatively correlated, with the MAS score. Compared with the contra-lesional CRT, the injury severity of the ipsilesional CRT was significantly related to the severity of spasticity in stroke patients after the early stage. Specifically, compared with the ipsilesional CST, DTT results for the ipsilesional CRT were more closely associated with spasticity severity.
Among the various pathophysiologic mechanisms of spasticity, abnormal supraspinal factors due to injury of neural tracts are reported to be major causes. [10][11][12][13]15] In detail, muscle tone is normally maintained by controlled balancing of the stretch reflex via the inhibitory influences of the CST and the CRT with the lateral RST and the excitatory influences of the medial RST and VST; however, the VST is reported to have a minor effect on spasticity. [10][11][12][13]15] Therefore, in brain injuries that include the CST and CRT, the inhibitory influences on the medullary brainstem and intraspinal network can be lost, leading to an unopposed excitatory influence by the medial RST; thus, hyperexcitability of the medial RST can occur. [10][11][12][13]15] Hyperexcitability of the RST has been related to spasticity, abnormal motor synergy, and disordered motor control. [11,18] In addition, previous studies have reported that if a CST injury due to extensive cortical damage occurs, recovery can rely on an alternative motor pathway, such as the RST, for compensation, leading to abnormal motor synergy, dexterity loss, and spasticity. [18,[38][39][40][41] Therefore, although various neural tracts are involved in spasticity, given the above results and the difficulty in identifying the individual actions of brainstem nuclei when using DTT, we focused on CST and CRT injuries in stroke patients with spasticity after the early stage. [12] Consequently, based on previous and current studies, we suggest that abnormal supraspinal influences on the intraspinal network due to CRT injury rather than CST injury could be responsible for spasticity in stroke patients after the early stage. A few studies have reported on the correlation between spasticity and CST and/or CRT injuries in stroke patients. [42][43][44] In 2016, Barlow demonstrated injuries of gray (insula, basal ganglia, and thalamus) and white (pontine crossing tract, CST, internal capsule, corona radiata, external capsule, and superior fronto-occipital fasciculus) matter using lesion density plots and voxel-based lesion-symptom mapping in 20 acute patients with spasticity following ischemic stroke. [42] During the same year, Lee et al, [43] using brain magnetic resonance imaging and positron emission tomography, reported on a patient who showed spasticity in the left leg due to injuries of the CST and CRT following infarction in the right cerebral peduncle. In 2019, Plantin et al demonstrated CST injury due to intracerebral hemorrhage or cerebral infarction by examining weighted CST lesion loads and voxel-based lesion-symptom mapping in 61 stroke patients with hand spasticity evaluated by the NeuroFlexor method, which accounts for velocity dependence in the tonic stretch reflex. [44] Taken together, our results support previous studies showing that injuries of the CST and/or CRT are associated with spasticity. To the best of our knowledge, this is the first DTI-based study to report a correlation between spasticity and CST and CRT injuries.
Several limitations of this study should be considered. First, the fiber tracking technique applied during DTT reconstruction is operator-dependent. Second, this study recruited a relatively small number of subjects. Third, among the descending motor pathways related to spasticity, we reconstructed only the CST and CRT because the descending motor pathways originating from the brainstem are challenging to reconstruct using DTI. Fourth, only the MAS scale was used to evaluate spasticity in this study.
By contrast, the Modified MAS (MMAS) scale, an advanced version of the MAS reported to be more reliable and valid because it excludes the MAS score category 1+, could be better suited for this kind of study. [45][46][47] However, the reliability and validity of Ashworth scale methods for measuring the severity of muscle spasticity have been controversial. [45,48] Therefore, further prospective studies that include larger numbers of patients, more advanced spasticity evaluation tools, and enhanced DTI techniques to reconstruct the motor pathways originating from the brainstem should be encouraged.
In conclusion, we have demonstrated that CST and CRT injury severities were related to spasticity severity in stroke patients after the early stage. In particular, of the 2 tracts assessed, CRT injury status was the more closely related to spasticity severity. Our results suggest that recovery of CST and CRT injuries through neuro-rehabilitation during the stroke recovery phase is important in the prevention of spasticity after the early stage. Thus, DTT-based reconstruction and assessment of the CST and CRT could be helpful when predicting the occurrence of spasticity and planning neuro-rehabilitation treatments that could improve recovery prognosis.
Table 2
Comparison of ipsilesional/contra-lesional ratios of the corticospinal and corticoreticular tracts for diffusion tensor tractography parameters between the patient and control groups.
Table 3
Correlation between modified Ashworth scale scores and ipsilesional/contra-lesional ratios of the corticospinal and corticoreticular tracts for diffusion tensor tractography parameters.
|
v3-fos-license
|
2021-02-21T06:16:04.350Z
|
2021-02-19T00:00:00.000
|
231969101
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1097/md.0000000000024790",
"pdf_hash": "2d2c6a826ab08041839849d8e645328ed8172a14",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41816",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "75d4b677a692be2478c303b6d6b3f677779ca60a",
"year": 2021
}
|
pes2o/s2orc
|
Choroidal neovascularization secondary to half-dose photodynamic therapy for chronic central serous chorioretinopathy
Abstract Rationale: Half-dose or reduced-fluence photodynamic therapy (PDT) with verteporfin has been well acknowledged to be the most effective and durable treatment, with very low rates of complications. However, we report a case of chronic central serous chorioretinopathy (CSC) in which choroidal neovascularization (CNV) developed secondary to half-dose PDT within only 3 weeks. Such an occurrence after so short a course of treatment has not been reported previously. Patient concerns: A 46-year-old Chinese man who had been diagnosed with acute CSC more than 1 year earlier revisited our department recently and complained of blurred vision again in his left eye. Diagnoses: Fluorescein fundus angiography (FFA) and indocyanine green angiography (ICGA) revealed patchy hyperfluorescent dots, and optical coherence tomography (OCT) indicated an irregular flat pigment epithelium detachment (PED) in the central macula. The patient was diagnosed with chronic CSC. Interventions: The patient was treated with half-dose PDT with verteporfin. Three weeks later, the patient complained of sudden blurred vision, and fundus examination showed macular hemorrhages with a best-corrected visual acuity (BCVA) of 20/250. OCT angiography (OCTA) showed a distinct area of flower-like CNV located within the deep retinal slab. Secondary CNV had developed after a very short course of half-dose PDT treatment. Subsequently, the patient received 2 intravitreal injections of aflibercept (2 mg). Outcomes: Two months after the second intravitreal injection, the macular hemorrhages and secondary CNV were completely resolved, and the BCVA improved to 20/25. Lessons: Patients with chronic CSC and irregular PED who undergo PDT should be warned of possible secondary CNV within a short interval after treatment. If it occurs, it should be treated with intravitreal injections of anti-vascular endothelial growth factor agents as soon as possible.
Introduction
Central serous chorioretinopathy (CSC) is a common vision-threatening chorioretinal disease that causes idiopathic serous detachment of the retina, which primarily affects males aged 20 to 60 years. [1] Pathogenesis of CSC is incompletely understood due to its multifactorial etiology and wide systemic associations. However, choroidal hyper-perfusion and hyperpermeability are known to play a major role. [2] Photodynamic therapy (PDT) induces choroidal vascular remodeling and decreases choroidal permeability, and is advocated for the treatment of CSC. [3] A widely reported complication of standard PDT is secondary choroidal neovascularization (CNV). [4] Its mechanism is attributed to the pro-inflammatory effect, choriocapillaris occlusion, and significant reduction in chorioretinal perfusion caused by PDT. Subsequently, half-dose or reduced-fluence PDT with verteporfin has been well acknowledged to be the most effective and durable treatment, with very low rates of complications. [5,6] Despite this, some rare but severe complications still occur.
Recently, a study reported high rates of CNV, detected using optical coherence tomography angiography (OCTA), associated with chronic CSC after half-dose PDT with a mean period of 39.5 months. [7] However, development of CNV secondary to half-dose PDT for chronic CSC after a short course of treatment is rare. Here, we report a patient with chronic CSC and serous pigment epithelium detachment (PED) whose visual acuity decreased abruptly due to CNV and macular hemorrhage within 3 weeks of half-dose PDT. Fortunately, this case was treated successfully with 2 intravitreal injections of aflibercept (2 mg).
Case presentation
More than 1 year ago, a 46-year-old Chinese man presented to our department with blurred vision in his left eye. At the time, he was diagnosed with acute central serous chorioretinopathy. Oral medications were prescribed; however, he was lost to follow-up. He had excellent uncorrected distance acuity (20/20 in both eyes). Records of optical coherence tomography (OCT) B-scan showed neurosensory detachment at the central macula.
He revisited our department recently and complained of blurred vision again, of 1 month's duration. At this visit, he underwent a complete ophthalmic examination, including slit-lamp biomicroscopy, best-corrected visual acuity (BCVA), non-contact tonometry, detailed fundus examination, fluorescein fundus angiography (FFA), indocyanine green angiography (ICGA), OCT, and OCTA.
The BCVA was 20/20 OD and 20/25 OS. Intraocular pressure was within normal limits, and anterior segment examination was unremarkable in both eyes. Fundus examination of the right eye was normal, but the left eye showed a shallow sensory detachment in the macula (Fig. 1A). On FFA, the lesion revealed patchy hyperfluorescent dots temporal to the fovea in the early and late phases (Fig. 1B). ICGA also revealed patchy hyperfluorescent dots in the middle and late phases (Fig. 1C). OCT B-scan showed a dome-shaped serous detachment of the neurosensory retina and a flat serous PED at the first visit (Fig. 1D) and the subsequent visit (Fig. 1E). OCTA, reported to be useful for identifying hidden CNVs that cannot be found via FFA and ICGA, did not show a distinct CNV (Fig. 1F).
The patient was diagnosed with chronic CSC. The treatment employed was half-dose PDT with verteporfin. Fifteen minutes after the start of the intravenous infusion of verteporfin (3 mg/m²), a 689 nm laser was delivered (600 mW/cm²; 83 s). The irradiated area covered the hyperfluorescent area corresponding to the serous subfoveal PED in the middle or late phase of the ICGA. Three weeks later, the patient complained of sudden blurred vision and revisited our department. Fundus examination showed macular hemorrhages (Fig. 2A) with a BCVA of 20/250. OCTA showed a distinct area of flower-like CNV located within the deep retinal slab (Fig. 2B). OCT B-scan revealed subretinal hyperreflective material corresponding to the CNV complex located above the retinal pigment epithelium (RPE) (Fig. 2C). Unfortunately, secondary CNV had developed after half-dose PDT in this case.
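For reference, the light dose implied by these laser settings can be checked arithmetically (a sketch; the computation is ours, not part of the original report):

```python
# Light dose (fluence) implied by the PDT settings reported above:
# fluence (J/cm^2) = irradiance (W/cm^2) x exposure time (s).
irradiance_w_per_cm2 = 0.600   # 600 mW/cm^2
exposure_s = 83                # 83 seconds
fluence_j_per_cm2 = irradiance_w_per_cm2 * exposure_s
print(f"Delivered fluence: {fluence_j_per_cm2:.1f} J/cm^2")  # ~49.8 J/cm^2
```

The result, roughly 50 J/cm², corresponds to the standard verteporfin PDT fluence, consistent with "half-dose" here referring to the verteporfin dose (3 mg/m² instead of the full 6 mg/m²) rather than a reduced light dose.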
The next day, an intravitreal injection of aflibercept (2 mg) was administered. One month later, the patient's symptoms were relieved, and his BCVA improved to 20/100. Fundus examination revealed almost complete resolution of the macular hemorrhages. OCTA showed that the CNV had completely disappeared. To consolidate treatment, a second intravitreal injection of aflibercept (2 mg) was administered. Two months later, the BCVA improved to 20/25. The macular hemorrhages were completely resolved in the fundus (Fig. 2D). The CNV did not recur in the OCTA image (Fig. 2E), but a small PED was present on the OCT B-scan (Fig. 2F).
Discussion
Although PDT with verteporfin was originally developed for treating CNV secondary to age-related macular degeneration, it was soon used as an important treatment modality for chronic CSC. [6,8] Full-dose PDT with verteporfin (6 mg/m² intravenously) used in CSC may sometimes cause severe complications that are unacceptable for a disease with a relatively favorable prognosis. Verteporfin dye might accumulate selectively around the choroidal hyper-permeable area owing to slow blood flow and vascular congestion, which may lead to irreversible occlusion of the choroidal vessels. [9] Additionally, PDT may cause RPE alterations and induce the release of vascular endothelial growth factor (VEGF), contributing to the development of CNV. [10] Many studies have recommended half-dose or reduced-fluence PDT for treating CSC. [5] More recently, Wu et al [7] reported high rates of CNV associated with chronic CSC after half-dose PDT with a mean period of 39.5 months (range: 4-138 months) from treatment to OCTA examination. However, the development of CNV secondary to half-dose PDT within 1 month of treatment is rare. Hwang et al [11] reported 1 such case in which chronic CSC developed secondary CNV and subretinal hemorrhages 1 month after reduced-fluence PDT (30 J/cm²). In their case, the PED was small and dome-shaped under the fovea before treatment. Fortunately, their patient recovered his vision after 2 intravitreal bevacizumab injections.
In our case, we employed half-dose PDT (3 mg/m²), reported to result in favorable outcomes with less risk of complications. [5] Unfortunately, secondary CNV developed 3 weeks post-treatment. To the best of our knowledge, such an occurrence after so short a course of treatment has not been reported previously. Although the pathophysiology of CSC is poorly understood, it is reasonable to apply optimized PDT to the area of choroidal congestion and RPE leakage, thus preventing the disease from progressing. Choriocapillary thinning secondary to the underlying choroidal congestion, long-standing serous PED, and pre-existing defects in Bruch's membrane due to chronic RPE changes may be risk factors for the development of CNV. Demircan et al [12] reported that choriocapillaris perfusion seemed to decrease in the very early period following half-fluence PDT and returned to normal 1 month after therapy. Therefore, half-dose PDT may further exacerbate the already compromised choriocapillaris and increase the incidence of CNV. Undoubtedly, the risk does not outweigh the benefits of optimized PDT for the treatment of CSC. However, patients undergoing PDT with relatively good vision should be warned of this rare complication. Another rare complication secondary to PDT, which needs to be distinguished from secondary CNV, is PDT-induced acute exudative maculopathy (PAEM). [13] PAEM is defined as a massive subretinal serofibrinous exudation with or without acute severe vision impairment. [14] It occurs within days after PDT but has a self-resolving course and a favorable prognosis. PAEM has rarely been reported after treatment of chronic CSC, with only 3 cases in the literature. The pathogenesis includes breakdown of the blood-retinal barrier, RPE pump dysfunction, and an inflammatory surge of VEGF occurring after PDT. OCTA is a useful tool to distinguish PAEM from CNV.
To summarize, CNV secondary to half-dose PDT for chronic CSC after a very short course of treatment is rare. Considering the otherwise favorable prognosis of CSC treatment, patients with chronic CSC who undergo PDT should be warned of this rare complication. Fortunately, it can be successfully treated by intravitreal injection of anti-VEGF agents.
Disclosure
The Institutional Review Board of the Affiliated Wuxi No.2 People's Hospital of Nanjing Medical University approved the protocol, and our study was performed in accordance with the tenets of the Declaration of Helsinki. Written informed consent was obtained from the patient for publication of this case report and all accompanying images. There is no conflict of interest.
|
v3-fos-license
|
2019-03-17T13:10:19.544Z
|
2017-12-01T00:00:00.000
|
79540359
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://eu-jr.eu/health/article/download/466/469",
"pdf_hash": "fbbfac43ee75f5f4ddcb10812789f3557c140b79",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41817",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "fbbfac43ee75f5f4ddcb10812789f3557c140b79",
"year": 2017
}
|
pes2o/s2orc
|
MICROBIOLOGICAL SPECTRUM OF TERTIARY PERITONITIS AS A COMPONENT OF ITS DIAGNOSTICS AND TREATMENT
The aim of the research was to investigate the microbial spectrum of tertiary peritonitis (TP) and its antibiotic resistance as a way to improve the diagnostics and treatment of TP. Materials and methods. A prospective study enrolled 109 patients with secondary peritonitis. Tertiary peritonitis developed in 18.3% of cases. Samples of peritoneal exudate were drawn at the index operation, at relaparotomy and on the day of diagnosis of TP. Blood sampling was performed in patients with persistent fever, impaired consciousness, prolonged (>4 days) discharge from drainage tubes and on the 1st day of diagnosis of TP. Antibacterial susceptibility was evaluated using Mueller-Hinton media. Results and discussion. Patients were divided into 2 groups: with secondary peritonitis (89) and with TP (20). In the TP group, cultivation of 76.2% of primary specimens resulted in replantable and identifiable growth, presenting a shift towards Gram-negative flora and a higher incidence of Candida albicans. Following the development of TP, hemocultures were positive in 15.1%, represented mainly by Proteus spp. and non-albicans Candida spp. Group 2 carbapenems, tigecycline and piperacillin-tazobactam showed the highest activity against the pathogens of TP. Caspofungin proved to be the most potent antifungal agent, especially towards non-albicans Candida spp. Antibiotic resistance in the TP group was noted in 63.8%. Conclusions. Tertiary peritonitis is one of the most severe forms of abdominal sepsis, with the highest mortality. The causative pathogenic flora of TP is mainly Gram-negative and coccal, with high rates of antibiotic resistance both in vitro and in vivo. Fungi, represented predominantly by Candida non-albicans substrains, show an increasing presence in peritoneal exudate and a major effect on mortality in TP. In TP, a significant percentage of peritoneal specimens do not yield any culture growth despite strict adherence to sampling, transportation and cultivation rules. Antimicrobial therapy of TP can never be standardized and should always be based upon regular and proper peritoneal and blood sampling.
Introduction
Abdominal sepsis remains the leading problem of modern emergency surgery despite the global progress of surgical and pharmaceutical technologies [1, 2]. Tertiary peritonitis (TP) is one of the most severe forms of abdominal sepsis with dismal results of treatment, namely difficult verification of causal factors, mostly ineffective antibacterial treatment and, as a result, high rates of mortality [3, 4]. Alongside substantially impaired homeostasis, a major impact on the severity of a septic patient's condition is made by nosocomial microflora [5]. For decades, the latter has presented high and rapidly increasing resistance to a vast array of antimicrobial agents, including potent and recently introduced ones [6, 7]. Clinicians frequently face the failure to cultivate and even to identify the microflora, even though proper techniques and media are used [8, 9]. Current studies show a diversity of the microbial spectrum both of secondary peritonitis (SP) and TP, depending on region, type of department, nosology, antibiotic treatment regimen, etc. [10]. Recently, an increasing role in the course and prognosis of treatment of peritonitis has been attributed to fungal infection [11]. All of the above proves the topicality of the problem of TP, its causative flora and antibiotic susceptibility, and the need for their further in-depth investigation.
Aim of the research
The aim of the research was to investigate the microbial spectrum of tertiary peritonitis (TP) and its antibiotic resistance as a way to improve the diagnostics and treatment of TP.
Materials and methods
We prospectively examined 109 patients with SP operated on in the Clinic of Surgery and Endoscopy of Lviv Danylo Halytsky National Medical University in 2010-2015.
Tertiary peritonitis was diagnosed on the 3rd-12th day (median, 5) in 20 (18.3%) patients. The criteria for diagnosis of TP were: persistence of peritoneal symptoms despite adequate surgical elimination of the infectious source, presence of nosocomial microflora in the peritoneal exudate, multi-organ failure and a stay in the intensive care unit of >3 days [12]. Postoperative mortality in the whole cohort was 30.2%. Tertiary peritonitis had a lethal outcome in 90% of cases, with sepsis being the main cause.
Samples of peritoneal exudate were drawn at the index operation for SP from at least 4 distant areas in cases of diffuse peritonitis and from 2 in cases of local peritonitis, each using a separate swab and container with protective medium. All specimens were transported to the microbiological laboratory within 15 min and were cultivated using different nutritional media in an appropriate thermal environment. In cases of programmed relaparotomy (PRL) / on-demand relaparotomy (ODRL) and on the day of diagnosis of TP, we additionally performed sampling of peritoneal exudate directly from the peritoneal cavity and/or drainage tubes from at least 2 remote locations.
Blood sampling was performed from both cubital veins in all patients with persistent (>48 h) fever, impaired consciousness (according to Glasgow scale values), prolonged (>4 days) discharge from drainage tubes and on the 1st day of TP diagnosis. If blood samples grew skin saprophytes, sampling was repeated from both cubital veins.
Actively growing colonies were identified by microscopy and/or enzyme techniques. Gram reaction and primary phenotyping of the flora were evaluated by cultural features and biochemical identification systems after 24 h. Identified cultures were replanted onto Mueller-Hinton media to evaluate susceptibility to 19 antibiotics using the Kirby-Bauer method. Susceptibility of pathogenic fungi to fluconazole and voriconazole was evaluated on glucose-enriched agar using a semi-quantitative method. Candida and its subspecies were identified using the mannan and galactomannan serological method.
Results
Patients were divided into 2 groups: with SP (n=89) and with TP (n=20). In the SP group, cultivation of 85.7% of primary specimens resulted in replantable and identifiable growth; in the TP group, 76.2%. As seen in Table 1, aerobic flora prevailed quantitatively in the primary specimens of SP. In those patients who later developed TP, a shift towards Gram-negative flora was noted, alongside a higher incidence of Candida albicans. A detailed analysis of the primary microbiograms showed that the infection was presented by cultural associations. In the SP group, 3 different microorganisms formed an association in 24.5% of cases and 2 in 70.1%, and in only 5.4% was the infection monocultural. Microbial associations in the TP group were similar to those in the SP group, but with a 23.1% fraction of Candida spp.
Given the large volume of results, antibacterial susceptibility to the chosen drugs in the SP group is given as Expected clinical efficacy (ECE) (>66% of cultures showing growth-inhibition zones >20 mm) in Fig. 1.
Expected clinical efficacy in the TP group is shown in Fig. 2.
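The ECE rule above reduces to a simple threshold computation; the following is a minimal sketch in Python, assuming Kirby-Bauer zone diameters have been recorded for each drug-pathogen pair (the zone values are invented for illustration).

```python
# Sketch of the Expected Clinical Efficacy (ECE) rule defined above:
# a drug is counted as clinically promising against a pathogen when
# more than 66% of cultures show an inhibition zone larger than 20 mm
# in the Kirby-Bauer disc-diffusion assay.

def expected_clinical_efficacy(zone_diameters_mm):
    """Return True if >66% of cultures have inhibition zones >20 mm."""
    sensitive = sum(1 for z in zone_diameters_mm if z > 20)
    return sensitive / len(zone_diameters_mm) > 0.66

# Illustrative zone diameters for one drug against one pathogen:
zones = [24, 27, 19, 30, 26, 22, 25]
print(expected_clinical_efficacy(zones))  # True (6 of 7 zones exceed 20 mm)
```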
We observed a substantial shift of Candida towards non-albicans subspecies, which in 68.2% showed good susceptibility to caspofungin (26.1%) and voriconazole (18.3%) and rather poor susceptibility to fluconazole (9.5%). Half of the TP group received 4 antimicrobials at once, including 1 systemic antifungal agent.
The results obtained formed the basis for changes in antimicrobial treatment (Table 4). It is worth mentioning that several substrains of Staphylococci (TP group) demonstrated medium (33-66%) susceptibility to fosfomycin. We also observed >66% in vitro efficacy of co-trimoxazole towards certain strains of Acinetobacter spp. and Citrobacter spp. Similar in vitro data were obtained for the activity of colistin towards P. aeruginosa and E. coli. However, none of these was chosen for in vivo use owing to known significant side effects, particularly toxicity and the risk of further promotion of antibiotic resistance.
Discussion
Tertiary peritonitis was once called an "uncontrolled peritonitis", manifesting itself as sepsis against the background of a sterile peritoneum [13]. Hospital-acquired infections, concomitant diseases and immunosuppression are among the most potent risk factors for mortality [14]. Failure of antibacterial therapy is considered to be one of the indications for hemoculturing, as Candida spp. are often suspected [9]. Severity of condition, causative pathogenic flora and prognosis differ substantially between septic patients of the surgical department and the ICU [15].
According to a recent study by O. van Ruler et al. [16], 70% of peritoneal cultures of SP were polymicrobial, 19% were monomicrobial and 11% showed no microbial growth. Another group of authors observed 72% polymicrobial cultures, 8-28% monomicrobial cultures and up to 20% unsuccessful microbial sampling [17, 18]. These data partially coincide with ours, though the percentages in the TP group differ essentially.
Recent studies of the microbial spectrum of SP prove that it has remained mostly stable over the years: E. coli, 50-100%; Streptococcus spp., 10-44%; P. aeruginosa, 24.9%; S. aureus, 16% [19]. Severity of the patient's condition and difficulties of adequate "source control" at the index emergency operation often form the indication for relaparotomy, a known risk factor for further colonization of the abdominal cavity by nosocomial pathogens [20]. This could explain why the causative flora of TP is similar to that of SP, though with obvious qualitative and quantitative differences. The majority of modern works indicate a lower incidence of successful peritoneal sampling, a move towards Gram-positive flora and an increasing content of fungi [21]. One study observed E. coli in 52% of TP cultivates, Klebsiella spp. in 10%, Enterobacter spp. in 19%, P. aeruginosa in 13%, Enterococcus faecalis in 33% and Enterococcus faecium in 8% [22]. These findings also have similarities with our research, though study duration could have led to changes both in spectrum and in antibiotic susceptibility over time. Diversities and sometimes discrepancies in the microbial spectra of peritonitis could be explained by differences in regional protocols/standards of antibiotic treatment and the structure of morbidity of a given population.
Yeast strains comprise 22-41% of cultures in the exudate of SP and 17% of all nosocomial isolates in the ICU [23]. Isolation of Candida from intra-abdominal cultures is successful in 57% of cases and is associated with increased mortality [24]. In the structure of fungal cultures obtained during SP, Candida albicans was seen in 74% and Candida glabrata in 17%; the remaining 9% included Candida inconspicua, Candida parapsilosis, Candida tropicalis, Candida zeylanoides and Geotrichum candidum [23]. In our research, the SP group showed a lower percentage of Candida albicans and a very moderate percentage of Candida non-albicans in comparison with the TP group.
Prevalence of Candida, especially of its non-albicans strains, in the peritoneal exudate at the index operation hints at compromised anti-infective defense and latent immunodeficiency as potent risk factors for the development of TP. In cases of Candida-associated TP, mortality reaches 70% [11]. Peritoneal sampling during TP has shown positive culturing of albicans subspecies in 12% of cases and of non-albicans subspecies in 3% [23]. In the TP group, we saw a remarkable shift of the "albicans/non-albicans" subspecies ratio in favour of the latter.
International research data report a wide range of Candida spp. hemocultivation rates, 4-32% [27]. Regardless of the primary disease, Hung-Wei C. et al. [28] reported a shift from Candida albicans to Candida non-albicans subspecies in hemocultures, with the non-albicans fraction reaching 63%. Our results of hemocultivation in the TP group showed 90% non-albicans substrains.
Clinicians still deal with an unclear previous history of antibiotic use and fungal status, which directly affects the results of antibacterial treatment. Normally, antibacterial therapy should start as pre-emptive, later transforming into definitive therapy. Nowadays, there are probably no borders for the migration of resistant microflora and the interchange of its defense mechanisms between species. Even if "source control" and "damage control" principles were maintained at the index procedure, inadequate antimicrobial therapy is a risk factor for a poor outcome [21].
Antimicrobial susceptibility data vary widely owing to a large set of independent factors, including sampling technique, quality of nutritional media, subjectivity of interpretation and current antimicrobial protocols. In our study, group 2 carbapenems showed the highest in vitro efficacy in both study groups, corresponding to the data of other colleagues [29]. Other clinicians call for diminishing or abandoning the use of cephalosporins of any generation owing to the serious drop in their efficacy over time and their strong promotion of antibacterial resistance [26].
On the one hand, antifungal therapy should be prescribed either to every critical patient or in case of a positive fungal culture [8]; on the other hand, empiric coverage of yeast in abdominal sepsis is not supported by present data because of the high resistance of Candida spp. to fluconazole [27].
Finally, the "collateral damage" effect, alongside natural process of antibiotic resistance, became the main reason of emergence of MDR pathogens.Antibiotic resistance to >2 antibiotics occurred in 64,9 % of cases of SP, presented mainly by extended spectrum b-lactamase-producing Enterobacteriaceae spp.and P. aeruginosa in 11,1 % and 11,9 % respectively [26].In other works, fraction of nosocomial MDR reached 84,8 %, but just 14 % in case of TP [23].Our findings of MDR pathogens make up for 35,3 % in SP group, and 28,7 % of strains in TP group.
All of the above emphasizes the complexity of the problem of TP and the necessity of further in-depth investigation of its microbial spectrum and antibiotic resistance in order to improve the approach to diagnostics and treatment results.
Conclusions
1. Tertiary peritonitis remains one of the most severe forms of abdominal sepsis, with the highest mortality.
2. The causative pathogenic flora of tertiary peritonitis is mainly Gram-negative and coccal, with high rates of antibiotic resistance both in vitro and in vivo.
3. Fungi, represented predominantly by Candida non-albicans substrains, show an increasing presence in peritoneal exudate and a major effect on mortality in tertiary peritonitis.
4. In tertiary peritonitis, a significant percentage of peritoneal specimens do not yield any culture growth despite strict adherence to sampling, transportation and cultivation rules.
5. Antimicrobial therapy of tertiary peritonitis can never be standardized and should always be based upon regular and proper peritoneal and blood sampling.
Table 1
Microbial spectrum of SP and TP at the index operation
Table 2
Microbial spectrum of TP
Table 3
ECE depending on Gram-polarity (% of sensitive cultures)
|
v3-fos-license
|
2023-03-01T16:11:54.416Z
|
2023-02-27T00:00:00.000
|
257248790
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fendo.2023.1094379/pdf",
"pdf_hash": "db7ed8c8b04374dade40acf1c50d5a83d577780d",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41818",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "3a4bc07e16a10a2e88d74b49d94ada2e6dc31fe0",
"year": 2023
}
|
pes2o/s2orc
|
Preoperative ultrasound identification and localization of the inferior parathyroid glands in thyroid surgery
Introduction The parathyroid glands are important endocrine glands for maintaining calcium and phosphorus metabolism, and they are vulnerable to accidental injury during thyroid cancer surgery. The aim of this retrospective study was to investigate the application of high-frequency ultrasound imaging for preoperative anatomical localization of the parathyroid glands in patients with thyroid cancer and to analyze the protective effect of this technique on the parathyroid glands and its effect on reducing postoperative complications. Materials and methods A total of 165 patients who were operated on for thyroid cancer in our hospital were included. The patients were assigned to two groups according to the time period of surgery: control group, May 2018 to February 2021 (before the application of ultrasound localization of the parathyroid glands in our hospital); PUS group, March 2021 to May 2022. In the PUS group, preoperative ultrasound was used to determine the size and location of the bilateral inferior parathyroid glands to help surgeons identify and protect the parathyroid glands during the operation. We compared the preoperative ultrasound results with the intraoperative observations. Preoperative and first-day postoperative serum calcium and PTH were measured in both groups. Results Our preoperative parathyroid ultrasound identification technique had more than 90% accuracy (true positive rate) in confirming the location of the parathyroid glands compared with intraoperative observations. Postoperative biochemical results showed better Ca2+ [2.12(0.17) vs. 2.05(0.31), P=0.03] and PTH [27.48(14.88) vs. 23.27(16.58), P=0.005] levels on the first day post-operation in the PUS group compared with the control group. We also found a reduced risk of at least one manifestation of hypoparathyroidism after surgery in the PUS group compared with the control group: 26 cases (31.0%) vs. 41 cases (50.6%), p=0.016. Conclusion Ultrasound localization of the parathyroid glands can help in the localization, identification and in situ preservation of the parathyroid glands during thyroidectomy. It can effectively reduce the risk of hypoparathyroidism after thyroid surgery.
KEYWORDS thyroid cancer, parathyroid gland, hypoparathyroidism, ultrasound localization, clinical effect
Introduction
Thyroid cancer is one of the most rapidly increasing malignancies in the world in recent years, and more than 90% of cases are papillary carcinoma, for which surgical resection represents the most effective treatment (1). Hypoparathyroidism caused by surgical damage is one of the most common complications of thyroid surgery (2,3).
The parathyroid gland (PG) is an important endocrine gland. The chief cells of the PG secrete parathyroid hormone (PTH), an important hormone that maintains calcium and phosphorus homeostasis in the body (4,5). Due to different migration patterns during embryonic development, inferior parathyroid gland (IPG) ectopia is observed frequently, with a fragile blood supply and anatomical variability (6,7). Ectopic locations include paraoesophageal, mediastinal, intrathoracic, intrathyroidal and/or around the carotid sheath (8). Additionally, the number of PGs is also variable (9). Therefore, it is crucial to identify and protect the PGs in situ during thyroidectomy (10,11), because incidental resection of one or more PGs or disruption of the parathyroid blood supply may lead to hypocalcemia and hypoparathyroidism (12,13), causing symptoms such as numbness in the hands and feet and even paralysis of the laryngeal and respiratory muscles, which seriously affects the quality of life of patients. It is also a major cause of doctor-patient tension after thyroid surgery.
Ultrasound is a common diagnostic technique for thyroid and cervical lymph node lesions (14,15). Compared with other diagnostic techniques, ultrasound has several advantages: it is dynamic, radiation-free, reproducible, high-resolution and low-cost, and it is accepted by a large number of patients (16,17). However, studies on the characteristics of preoperative ultrasound images of normal parathyroid glands are scarce (18,19), and it is desirable to improve the clinical knowledge and application of such images.
The purpose of this work is to evaluate the protective effect of the application of preoperative high-frequency ultrasound imaging of parathyroid in patients with thyroid cancer and its effect on reducing postoperative complications.
Patients
A total of 165 patients who underwent thyroid cancer surgery in our hospital from May 2018 to May 2022 were included, with a male-to-female ratio of 38:127 (1:3.34), aged from 21 to 71 years, with a mean of 45.63 ± 11.29 years.
The patients were assigned to two groups according to the time period of surgery: control group, May 2018 to February 2021 (before the application of ultrasound localization of the parathyroid glands in our hospital); PUS group, March 2021 to May 2022 (after its application). Among them, 81 cases were in the control group and 84 cases in the preoperative ultrasound group (PUS group).
Inclusion criteria: (1) The subject received total thyroidectomy with bilateral central node dissection and was diagnosed with papillary thyroid cancer by postoperative pathology. (2) The subject was undergoing thyroid surgery for the first time. (3) Preoperative parathyroid hormone, serum calcium, liver function and kidney function values were within the normal range, with no history of suspected parathyroid pathology such as chronic kidney disease, urinary tract stones, increased or decreased bone density, or pathological fracture.
Exclusion criteria: (1) The subject received lymph node dissection in the lateral neck area. (2) The subject received intraoperative parathyroid auto-transplantation.
All subjects were informed, and informed consent forms were signed. The study was approved by the ethics committee of our hospital (NO.2022-301).
Identification of normal Inferior parathyroid glands using ultrasonography
We developed an ultrasonic identification method for the IPGs. The IPGs were scanned using an Aplio 500 Doppler ultrasound system (Toshiba) with a 5 to 14 MHz probe. During the examination, the patient was placed in a supine position with the head tilted back to fully expose the anterior neck, and with the head tilted to the side if necessary (Figure 1). The number, location, size and internal echogenicity of the IPGs were observed (Figure 2).
Scanning range of ultrasonography: the superior-inferior range extends from the middle-lower half of the dorsal thyroid to the level of the sternoclavicular joint, and the left-right range lies between the trachea and the lateral border of the sternocleidomastoid (Figure 1).
The ultrasound features of the IPGs that we established are: (1) The morphology is mostly oval, occasionally narrow. (2) The size is about (6-8 mm) x (4-5 mm) x (3-4 mm). (3) The overall echogenicity is higher than that of the surrounding muscle tissue and slightly higher than that of the thyroid tissue. The internal echogenicity is homogeneous, and a border with the surrounding tissue is visible (Figure 2). We have grouped the locations of the parathyroid glands into two categories: type I, closely opposed to the thyroid peritoneum; type II, free below the inferior pole of the thyroid gland or lateral to the thyroid gland, not closely opposed to the thyroid gland.
We focused on measuring the following three distances of the IPGs in cross-section and longitudinal section: (A) the distance of the IPG from the cricoid cartilage; (B) the distance of the IPG from the midline of the trachea; (C) the distance of the IPG from the anterior tracheal section. (It should be noted that this study did not involve the bilateral superior parathyroid glands (SPGs); please see the Conclusions and Discussion section for details.) The diameters of the three axes of the IPG were also measured (Figure 3). In addition, our IPG ultrasound images were reviewed independently by three senior ultrasound experts, and images were counted as confirmed IPGs for the PUS group only when the experts agreed.
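To make the localization scheme concrete, the following is an illustrative sketch of a per-gland record built from the three distances and three-axis size described above; the field names are ours, not part of the study protocol, and the example values echo the group means reported in the Results.

```python
# Illustrative record for one preoperatively localized IPG, using the
# three distances (A, B, C) and three-axis size described above.
# Field names are hypothetical; values echo the reported group means.
from dataclasses import dataclass

@dataclass
class IPGLocalization:
    side: str           # "left" or "right"
    a_mm: float         # (A) distance from the cricoid cartilage
    b_mm: float         # (B) distance from the tracheal midline
    c_mm: float         # (C) distance from the anterior tracheal section
    size_mm: tuple      # three-axis diameters
    position_type: int  # 1 = opposed to thyroid peritoneum, 2 = free

record = IPGLocalization("left", a_mm=36.7, b_mm=15.9, c_mm=5.3,
                         size_mm=(7.25, 3.88, 4.65), position_type=1)
print(record)
```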
Operation and measurement
Intraoperatively, after carefully searching for the IPGs according to the preoperative ultrasound localization, a sterile ruler was used to measure distances A and B without touching the PGs (Figure 4). We did not measure distance C (i.e., the depth) because of significant variation due to the pulling of muscles during anatomical exposure of the thyroid gland. We did not measure the size of the PGs, to avoid unnecessary manipulation.
The Ca2+ (SIEMENS, ADVIA2400) and PTH (ROCHE, Cobas e 601) levels were measured preoperatively and on postoperative day 1. Patients with any of the following conditions were considered to be at risk for hypoparathyroidism: (i) Ca2+ less than 2.0 mmol/L; (ii) PTH less than 15 pg/ml; (iii) numbness of the mouth and lips; (iv) tingling and numbness of the fingertips; (v) muscle aches and spasms; (vi) tetany; (vii) laryngeal spasm, diaphragm spasm, and other severe spasms of both skeletal and smooth muscles throughout the body.
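The biochemical part of these criteria reduces to two thresholds. As a minimal sketch (criteria (iii)-(vii) are clinical findings and are not coded here; the function name is illustrative):

```python
# Biochemical portion of the hypoparathyroidism risk criteria above:
# Ca2+ < 2.0 mmol/L or PTH < 15 pg/ml flags the patient as at risk.
def biochemical_hypopara_risk(ca_mmol_l, pth_pg_ml):
    return ca_mmol_l < 2.0 or pth_pg_ml < 15.0

print(biochemical_hypopara_risk(ca_mmol_l=2.05, pth_pg_ml=23.3))  # False
print(biochemical_hypopara_risk(ca_mmol_l=1.95, pth_pg_ml=18.0))  # True
```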
Statistical analysis
SPSS 25.0 and GraphPad Prism 8.0.2 software were used to process the data. The Mann-Whitney U test was used for group comparisons. Data are expressed as mean ± standard deviation (X ± S) or median (quartiles). Spearman correlation was used for bivariate correlation analysis, and the Pearson χ² test was used for categorical variables. p<0.05 was considered statistically significant.
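As a sketch of the categorical comparison, the Pearson χ² test can be reproduced from the at-risk counts reported in this paper (26 of 84 in the PUS group vs. 41 of 81 in the control group); SciPy stands in for SPSS here.

```python
# Pearson chi-squared test on the hypoparathyroidism risk counts
# reported in this paper: 26/84 (PUS group) vs. 41/81 (control group).
from scipy.stats import chi2_contingency

table = [[26, 84 - 26],   # PUS group: at risk vs. not at risk
         [41, 81 - 41]]   # control group
chi2, p, dof, expected = chi2_contingency(table)  # Yates correction by default
print(f"chi2 = {chi2:.2f}, P = {p:.3f}")  # P comes out near the reported 0.016
```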
Patient characteristics
A total of 165 eligible patients were included in this study. There were 84 in the preoperative ultrasound group and 81 in the control group. There were no significant differences between the preoperative ultrasound and control groups in parameters such as age, sex, prevalence of Hashimoto's thyroiditis, tumor size, and lymph node metastasis in the central region (Table 1).
Preoperative ultrasound localization of IPGs
Our data showed that the left IPG was found by ultrasonography in 51 patients (60.7%) and the right IPG in 48 (57.1%). IPGs were found bilaterally in 31 patients (36.9%) and on at least one side in 68 (81.0%). This indicates that we had a high detection rate of the IPGs on at least one side (Table 2). The average size of the IPGs observed on ultrasonography was (7.25 ± 1.38) x (3.88 ± 0.92) x (4.65 ± 1.04) mm, while the three-dimensional spatial coordinates A, B, and C were (36.70 ± 5.75) mm, (15.85 ± 3.82) mm, and (5.29 ± 2.93) mm, respectively. This is an important reference for finding the IPGs during surgery (Table 2; Figure 5).
We classified the position of the IPGs into two categories (Figure 6): type I, where the IPGs were in close proximity to the thyroid gland and adhered to the thyroid peritoneum, at the inferior thyroid position or at the dorsal lateral thyroid peritoneum; and type II, where the IPGs were not adhered to the thyroid peritoneum and were separated from the thyroid gland by a certain distance.
Intraoperative comparison for IPG
After carefully searching for the IPGs according to the preoperative ultrasound position, the coordinate parameters of the parathyroid glands (A and B) were measured with a sterile scale and were (33.70 ± 12.75) mm and (14.79 ± 8.82) mm, respectively. We found a good correlation (r = 0.8945; r = 0.9113) with the preoperatively measured spatial positions (Figure 5). This indicates that our preoperative ultrasound-measured three-dimensional spatial coordinates were very helpful for identification during surgery.
Bilateral IPGs could be explored intraoperatively in 62 cases (73.8%), and at least one IPG was found in 74 cases (88.1%) in the PUS group (Table 2). The percentage of IPGs that could be found on preoperative ultrasonography but could not be found intraoperatively was less than 10% on both the left and right sides (6.0% and 1.2%, respectively). This indicates that our preoperative ultrasonic identification technique has more than 90% (94%, 98.8%) accuracy (true positive rate) in identifying the tissue as a PG. IPGs that were not identified by preoperative ultrasound evaluation but were found intraoperatively amounted to 17 cases (20.2%) on the right side and 23 cases (27.4%) on the left side (false-negative rate).
We can conclude that ultrasonic localization of the PGs can help in the search for, identification of, and in situ protection of the PGs during thyroidectomy. In addition, only 6 patients in the preoperative ultrasound group complained of slight numbness in both hands on postoperative day 1, and only 1 complained of numbness and tingling with spasm in the fingertips; none of them complained of the above-mentioned numbness on postoperative day 3, when they left the hospital. In contrast, 15 patients in the control group complained of slight numbness in both hands on the first day after surgery, and 5 patients complained of numbness and tingling in the fingertips with spasm; 2 patients complained of numbness in both hands and around the mouth again on the third day after surgery. There were no critical symptoms, such as hand-foot convulsions or laryngeal and diaphragm spasms, in either the study or the control group after surgery.
Discussion
In the present study, we evaluated the effect of the application of preoperative ultrasound localization techniques on reducing the risk of postoperative hypoparathyroidism. We found that our preoperative ultrasonic identification technique has more than 90% (94%, 98.8%) accuracy (true positive rate) in identifying the tissue as a PG. Postoperative biochemical results showed better Ca2+ and PTH levels on the first day post-operation in the PUS group compared with the control group.
Interestingly, as shown in Table 5, we found a higher detection rate for type II IPGs than for type I IPGs. We suppose the reasons are as follows: first, type II IPGs are not closely adhered to the thyroid, which makes them easier to identify; second, most type II IPGs were enveloped by a "fat capsule" (Figure 7). The reasons for the high false-negative rate are as follows: (1) A small percentage of suspicious IPGs on ultrasound lack clear borders and echogenicity and resemble the surrounding fat, lymph nodes, and connective tissue; the three senior ultrasound specialists on our team did not include such ambiguous tissues in the positive criteria. (2) A small percentage of IPGs are not clearly visible on ultrasound and may be obscured by other tissues; for example, they may lie extremely close to the true capsule of the thyroid gland or far from the thyroid gland, or be encapsulated in lymph nodes of the central compartment and therefore not clearly detected.
At present, the SPGs are very challenging to visualize on ultrasonography, for the following reasons: (1) We attempted to obtain satisfactory ultrasound images of the SPGs from experienced ultrasonographers, but very few SPGs were clearly visualized; this may be due to the complex anatomy surrounding the SPGs, such as the thyroid cartilage, which impairs ultrasound visualization. (2) The location of the SPGs is relatively fixed, without surrounding fat or lymph nodes, so the SPGs are easy to find during surgery. (3) The location of the IPGs is extremely variable, with a complex blood supply and surrounding fat or lymph nodes, making them difficult to find during the operation and increasing the risk of accidental removal or damage to their blood supply. (4) In our prior experience, the probability of accidental removal of the IPGs is much higher than that of the SPGs. Therefore, our study focused on the bilateral inferior parathyroid glands and excluded the bilateral superior parathyroid glands.
How does anticipating the location of the inferior parathyroid glands help to reduce damage to the glands and their blood supply? Because we anticipate their location, we can protect the parathyroid glands and their blood supply before they are fully exposed. Our technique shortens the window period needed to discover the parathyroid glands. More importantly, it avoids the unnecessary manipulations involved in searching for and exposing the parathyroid glands (the main cause of accidental damage, even by experienced surgeons), which effectively reduces the risk of direct injury to the glands; otherwise, by the time a gland is found during surgery, it or its blood supply may already have been damaged.
For parathyroid glands closely attached to the thyroid (type I IPGs), we can attempt to preserve their blood supply in advance, before thyroidectomy. For parathyroid glands positioned at a certain distance from the thyroid gland (type II IPGs), which closely resemble the surrounding lymph nodes and fat granules, the surgeon would otherwise have to expose the suspicious structures one by one to identify the parathyroid. Our preoperative localization technique allows us to anticipate the gland's location and avoid exposing every suspicious structure, which reduces the risk of accidental removal or damage to the blood supply.
Additionally, some parathyroid glands are closely attached to the thyroid gland and lack an independent blood supply. Such glands cannot be preserved in situ, and the only solution is autotransplantation. We excluded these cases from the present work because survival rates differ after autotransplantation, and serum Ca2+ and PTH levels may be affected accordingly.
In our study, in 10 cases (11.9%) we did not find the IPG either by preoperative ultrasound localization or intraoperatively. The IPG and SPG have distinct embryonic origins: the IPG originates from the third pharyngeal pouch together with the thymus, whereas the SPG originates from the fourth pharyngeal pouch (20, 21), which explains the variation in the location of the IPGs. In the 10 cases in which we did not find the IPGs, the glands may have been located around the vascular sheaths of the neck, in the thymus, in the mediastinum, or within the thyroid parenchyma. This category may be called "type III" PG.
Currently, 99mTc-MIBI is used only to localize parathyroid adenomas and is not recommended for the identification of normal PGs during surgery (22, 23). Although exogenous dyes such as indocyanine green have been used intraoperatively to localize PGs, they may cause toxicity, pain, or allergic reactions (24). In addition, there is the parathyroid near-infrared autofluorescence technique; however, it has difficulty identifying type II PGs, which are covered by fat, connective tissue, or the thyroid gland. In other words, type II PGs cannot be identified with autofluorescence unless the glands are exposed (25, 26). More importantly, these techniques require additional manipulations and increase the overall operative time.
Conclusion
In summary, our ultrasonographic identification and localization technique helps to anticipate the location of the parathyroid glands before surgery. It reduces intraoperative damage to the parathyroid glands by reducing unnecessary manipulations during the search for the glands, thereby lowering the incidence of postoperative hypoparathyroidism; we confirmed this effect by postoperative biochemical analysis. In addition, the technique reduces the surgeon's intraoperative psychological stress arising from the unknown location of the parathyroid glands.
Data availability statement
The original contributions presented in the study are included in the article/supplementary material. Further inquiries can be directed to the corresponding author.
Ethics statement
The studies involving human participants were reviewed and approved by the ethics committee of Shandong Provincial Hospital Affiliated to Shandong First Medical University. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.
Deposition Flux, Stocks of C, N, P, S, and Their Ecological Stoichiometry in Coastal Wetlands With Three Plant Covers
The depositional flux of coastal wetlands and the deposition rates of biogenic elements greatly affect carbon sink storage. Ecological stoichiometry is an important ecological indicator that can simply and intuitively reflect the regional biogeochemical cycle. This study investigated the soil deposition flux, stocks, and ecological stoichiometric ratios of C, N, P, and S under different water and salt conditions based on 137Cs dating technology in the Yellow River Delta (YRD) of China. The results showed that the deposition fluxes were 0.38 cm/yr for PV wetlands, 1.08 cm/yr for PA wetlands, and 1.06 cm/yr for SS wetlands. PA wetlands showed higher deposition fluxes of C, N, and S than SS and PV wetlands, and had higher stocks of C (5.86 kg/m²), N (0.36 kg/m²), and S (0.36 kg/m²) in the top 1-m soil layer than PV and SS wetlands. However, the highest deposition rate of P (9.82 g/yr/m²) among the three wetlands was observed in SS wetlands. Three accumulative hotspots of C, N, and S in the soil profiles of PA and SS wetlands were observed at soil depths of 0-10, 40-60, and 90-100 cm, whereas one accumulative hotspot of P occurred at the soil depth of 10-12 cm in SS wetlands and 80-82 cm in PA wetlands. PV wetlands showed higher accumulations of C, P, and S in the top 10 cm soil layer and of N at the soil depth of 90-100 cm. The higher topsoil concentration factors in these three wetlands indicated that the dominant input of plant residues was the main driver. The ratios of C/N and C/N/P at each sampling site were higher in the surface soils and decreased with depth. The ratios of C/P and N/P were larger in the surface layer (0-20 cm), the middle layer (40-60 cm), and the deep layer (90-100 cm). The relatively low N/P and C/N/P ratios indicated that the studied wetlands were N-limited ecosystems. The results implied that the coastal wetlands of the YRD have huge storage potential for biogenic elements as blue carbon ecosystems.
INTRODUCTION
Coastal wetlands, as sea-land ecotones and buffer zones, are important areas of intense exchange of materials and energy between the ocean and land. They play a vital role in regulating global environmental changes and serve as sources and sinks of carbon (C), nitrogen (N), phosphorus (P), and sulfur (S). C, N, P, and S are important constituent elements of the soil, and their contents and stocks can directly affect the soil quality, nutrient cycling, and ecological functions of coastal wetland ecosystems (Feng et al., 2017; Lu et al., 2018).
The distribution of biogenic elements along soil profiles in coastal wetlands is not straightforward because these wetlands are subject to the combined effects of erosion and sedimentation under the two-way regulation of tidal currents and runoff. Taking soil organic carbon (SOC) as an example, the SOC pool is composed of an active pool and an inert pool, which show different stability along the soil profile owing to high spatial heterogeneity. In addition, soil animals and microorganisms can decompose SOC and nitrogen into CO2, CH4, or N2O, which exacerbates the heterogeneity of soil carbon and nitrogen pools (Wu et al., 2013). Many studies have shown that SOC is mainly distributed in the upper part of the soil profile and tends to decrease gradually from the surface to the bottom (Wang et al., 2010). However, current research on SOC storage in the coastal wetlands of the Yellow River Delta (YRD) has mainly concentrated on the surface soil (0-30 cm), and there are few studies on the contribution of deep SOC storage to the carbon pool (Luo et al., 2020). Therefore, studying biogenic elements in surface soils alone is not enough to accurately reflect the soil carbon pool of coastal wetlands; deeper soil profiles should be selected for studying the distribution of soil biogenic elements and ecological stoichiometry.
The ecological stoichiometric ratio provides important insights for studying energy and material cycles in estuarine ecosystems (Coynel et al., 2016), and each ratio is of great significance to the biogeochemical cycling of C, N, P, and S in the soil. For example, the soil C/N ratio is a sensitive indicator of soil quality (Elser et al., 2003), while the C/P and C/S ratios are considered markers of the mineralization capacity of phosphorus and sulfur in the soil. Previous studies have focused on soil stoichiometric ratios and the effects of environmental factors, such as vegetation type and rainfall, on the ecological stoichiometric ratios at different soil depths in forests, karst areas, and grasslands (Yang et al., 2014; Fan et al., 2015; Wang et al., 2018). Some researchers have described the impacts of ecological stoichiometric ratios on various ecological functions and the interactions between microbes and these ratios in the abovementioned ecosystems. Although stoichiometric ratios in coastal wetlands are even more important because of historical sedimentation and the strong exchange of marine and terrestrial materials, only a few studies have examined the relations between sedimentation and ecological stoichiometry in coastal wetlands.
The Yellow River carries one of the highest sediment loads in the world, and the relationship between its sedimentation flux and the stocks of soil biogenic elements remains to be revealed. As the most fragile and sensitive young delta in the world, the YRD has extremely precious ecological value, and governance along the Yellow River has received great attention from the Chinese government in recent years. It is therefore necessary to study the soil deposition rate of the YRD to enhance the sequestration capacity of these elements and improve soil quality management in coastal wetlands. The primary objectives of this study were to: (1) analyze the depth distributions of the contents and stocks of C, N, P, and S along soil profiles with different water and salt conditions in the YRD; (2) investigate the deposition rates of C, N, P, and S in the soil using the 137Cs dating technique in coastal wetlands; and (3) identify the changes in the ecological stoichiometric ratios of C, N, P, and S with depth along the soil profiles.
Study Area
The study area is located in the YRD (37°35′-38°12′ N, 118°33′-119°20′ E), Dongying city, Shandong Province, China. It has a warm temperate semi-humid monsoon climate with four distinct seasons, simultaneous rain and heat, small regional climate differences, and a frost-free period of 196 days (Cui et al., 2008). The annual average sunshine duration is 2,590-2,830 h, and the annual average temperature is 12.1 °C. The annual precipitation is 551.6 mm, 70% of which is concentrated from June to August, while the annual evaporation is 1,962 mm; spring evaporation is strong, accounting for approximately 51.7% of the annual total, and the drought index is as high as 3.56. The main vegetation types are herbs, such as Pteris violata, Phragmites australis, Triarrhena sacchariflora, Suaeda salsa, Myriophyllum spicatum, and Limonium sinense; shrubs, such as Tamarix chinensis; and trees, such as Salix matsudana (Cheng et al., 2021). There are more than 40 families, 110 genera, and 160 species. Phragmites australis, Pteris violata, and Suaeda salsa are the dominant species in this area.
Three sampling sites were selected along one sampling belt perpendicular to the riverbed in the coastal wetlands on the north bank of the Yellow River (Figure 1), including Pteris violata wetlands (PV), Phragmites australis wetlands (PA), and Suaeda salsa wetlands (SS). Among them, PA wetlands can be affected by underground seawater and freshwater when the water and sediment regulation project of the YRD is implemented in July. Comparatively, SS wetlands are dominantly affected by tidal water.
Sample Collection and Analysis
The soil samples were collected by excavating soil profiles to a depth of 0-100 cm at 2-cm intervals. After removing visible plant residues and stones, the soil samples were air-dried for 2-3 weeks. One replicate was sampled at each point for analysis of the relevant indicators. One part of each air-dried sample was ground and passed through a 20-mesh sieve for the determination of 137Cs, and another part was further ground to pass through a 100-mesh sieve for determining soil properties. SOC was measured by the potassium dichromate dilution thermal-colorimetric method. Total nitrogen (N) was measured on an elemental analyzer (CHNOS elemental analyzer, Vario EL, Germany). Total phosphorus (P) and total sulfur (S) were determined by inductively coupled plasma atomic emission spectroscopy (ICP-AES) after digestion in an HClO4-HNO3-HF mixture in Teflon tubes. Quality assurance and quality control were assessed using duplicates, method blanks, and standard reference materials (GBW07401) from the Chinese Academy of Measurement Sciences with each batch of samples (1 blank and 1 standard for every 10 samples). The recoveries of samples spiked with standards ranged from 95% to 106%. The 137Cs activity in the soil was determined using a high-purity germanium gamma spectrometer in the laboratory of the Geography Department, Beijing Normal University.
Stocks and Topsoil Concentration Factors
The stocks of C, N, P, and S were calculated as follows:

$\mathrm{Stock} = \sum_{i=1}^{5} C_i \times \rho_i \times T_i / 100$

where the stock (CS, NS, PS, or SS) is the stock of C, N, P, or S per unit area (kg/m²); C_i, N_i, P_i, and S_i are the average contents (g/kg) of C, N, P, and S in soil layer i (i = 1, 2, 3, 4, and 5); ρ_i is the bulk density of layer i (g/cm³); and T_i is the thickness of soil layer i (cm). The topsoil concentration factor was calculated as the ratio of the stock in the top 10 cm to the stock in the top 100 cm.
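A minimal sketch of this calculation is given below, assuming the standard stock form with bulk density; the bulk densities and layer contents are assumed placeholders, since the excerpt does not list them.

```python
# Minimal sketch of the stock and topsoil-concentration-factor calculations.
# Contents and bulk densities are illustrative placeholders, not study data.
import numpy as np

content_g_per_kg = np.array([5.29, 3.10, 3.80, 2.90, 3.50])     # per-layer content (e.g., SOC)
bulk_density_g_cm3 = np.array([1.35, 1.40, 1.42, 1.45, 1.45])   # assumed bulk densities
thickness_cm = np.array([10, 30, 20, 30, 10])                   # layers summing to 100 cm

layer_stocks = content_g_per_kg * bulk_density_g_cm3 * thickness_cm / 100  # kg/m^2 per layer
total_stock = layer_stocks.sum()
topsoil_factor = layer_stocks[0] / total_stock  # stock in top 10 cm / stock in top 100 cm
print(f"1-m stock = {total_stock:.2f} kg/m^2, topsoil concentration factor = {topsoil_factor:.2f}")
```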
Statistical Analysis and Graphing
One-way ANOVA was used to identify significant differences in C, N, P, and S contents and stocks between the different wetlands; differences were considered significant at P < 0.05. Data processing was performed using the Excel 2019 software package, and graphs were created with the Origin 2017 software package.
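A minimal sketch of this test is shown below; the group values are illustrative placeholders, not the measured contents.

```python
# Minimal sketch of the one-way ANOVA comparing element contents
# between wetland types (values are illustrative placeholders).
from scipy.stats import f_oneway

soc_pv = [5.1, 4.8, 5.6]   # hypothetical SOC contents (g/kg) for PV wetlands
soc_pa = [3.9, 4.2, 3.7]   # hypothetical values for PA wetlands
soc_ss = [3.0, 3.4, 2.8]   # hypothetical values for SS wetlands

f_stat, p = f_oneway(soc_pv, soc_pa, soc_ss)
print(f"F = {f_stat:.2f}, p = {p:.4f}  (significant if p < 0.05)")
```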
Contents and Stocks of C, N, P and S
The highest average contents of C, N, and S were found in the PA wetland (3.91, 0.24, and 0.24 g/kg, respectively), whereas the PV wetland had the highest average content of P (0.65 g/kg) (Figure 2). The SOC content in PA wetlands was significantly higher than that in PV wetlands (P < 0.05). Three layers with high concentrations of C, N, and S were observed in the soil profiles of PA and SS wetlands, at depths of 0-10, 40-60, and 90-100 cm. In PV wetlands, higher concentrations of C (5.29 g/kg) and S (0.36 g/kg) appeared in the surface soils (0-10 cm), and of N (0.48 g/kg) at soil depths of 90-100 cm. However, P contents showed different depth distributions in the three wetlands, with an accumulative peak at the soil depth of 4-6 cm (3.70 g/kg) in PV wetlands, 10-12 cm (2.58 g/kg) in SS wetlands, and 80-82 cm (3.36 g/kg) in PA wetlands. SOC stocks in the top 1 m reached approximately 6 kg/m² in PA wetlands, slightly higher than those in the PV (3.55 kg/m²) and SS (5 kg/m²) wetlands (Figure 3). In contrast, low soil N stocks (0.36 kg/m², PA; 0.22 kg/m², SS; and 0.32 kg/m², PV) were observed in the YRD. Unlike C and N, the three wetlands showed similar soil P stocks (0.92 kg/m²) in the top 1 m. Similar to N, PA wetlands exhibited the largest soil S stocks (0.36 kg/m²), followed by SS wetlands (0.31 kg/m²), while the lowest soil S stocks (0.23 kg/m²) were observed in PV wetlands. SOC stocks in the three accumulative hotspots of PA wetlands were 97.79 g/m² in the top 10 cm, 148.25 g/m² at the 40-60 cm soil depth, and 66.32 g/m² at the 90-100 cm soil depth, and 64.67, 113.86, and 59.25 g/m² in SS wetlands, respectively. Soil TN stocks in the three accumulative hotspots were 4.09 g/m² (0-10 cm), 12.14 g/m² (40-60 cm), and 5.36 g/m² (90-100 cm) in PA wetlands, with TS stocks of 3.87, 11.50, and 3.65 g/m², respectively. In SS wetlands, TN and TS stocks were 2.05 and 5.03 g/m² in the top 10 cm, 5.88 and 2.81 g/m² at the 40-60 cm soil depth, and 3.52 and 1.99 g/m² at the 90-100 cm soil depth. Soil P stocks in the accumulative hotspot were 9.85 g/m² at the 10-20 cm soil depth in SS wetlands and 11.25 g/m² at the 80-90 cm soil depth in PA wetlands. In contrast, the accumulative hotspots of C, P, and S stocks in PV wetlands appeared in the top 10 cm (74.06, 11.67, and 3.83 g/m², respectively), whereas that of N appeared at the 90-100 cm soil depth (6.96 g/m²).
Soil Concentration Factors and Deposition Rates
The topsoil concentration factors of C, N, P, and S in each wetland were all above 0.10, indicating that the accumulation of these elements was mainly driven by plant inputs (Table 1). The topsoil concentration factor of C fell within the range of 0.16-0.22, following the order PV > PA > SS. For soil N, the topsoil concentration factors in the three wetlands were relatively low. The topsoil concentration factors of both P and S followed the order PV > SS > PA; overall, the topsoil concentration factors of C, P, and S in the PV wetlands were higher than those of the PA and SS wetlands. After 137Cs isotope dating, the soil layer corresponding to the year 1963 was identified at a depth of 20 cm in the PV wetland, 56 cm in the PA wetland, and 58 cm in the SS wetland. As shown in Figure 4, the sediment deposition rate was 0.38 cm/yr in the PV wetland, 1.08 cm/yr in the PA wetland, and 1.06 cm/yr in the SS wetland.
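A minimal sketch of the 137Cs-based rate arithmetic is given below, taking the 1963 fallout peak as the dated horizon; the sampling year is an assumption for illustration (it is not stated in this excerpt), so the printed rates differ slightly from the reported values.

```python
# Minimal sketch: deposition rate = depth of 1963 fallout-peak layer
# divided by years elapsed since 1963. Sampling year is assumed.
sampling_year = 2016  # assumed for illustration
depth_1963_cm = {"PV": 20, "PA": 56, "SS": 58}

for site, depth in depth_1963_cm.items():
    rate = depth / (sampling_year - 1963)
    print(f"{site}: {rate:.2f} cm/yr")
```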
Deposition Fluxes of C, N, P and S

Different deposition rates of C, N, P, and S were also observed among the three wetlands. PA wetlands exhibited the highest deposition rates of C (71.17 g/yr/m²), N (4.35 g/yr/m²), and S (4.34 g/yr/m²), while the lowest rates appeared in PV wetlands (Figure 5). In contrast, the deposition rate of P followed the order SS wetlands (9.82 g/yr/m²) > PA wetlands (9.70 g/yr/m²) > PV wetlands (4.08 g/yr/m²). The contents of the four elements in the three wetlands showed a similar pattern with depth along the soil profiles (Figure 2). Except for N, the contents of C, P, and S in the top 15 cm of soil were relatively high, then declined significantly at approximately the 20-cm soil depth and remained stable. However, at a soil depth of approximately 40 cm, the contents of C, N, and S increased significantly again, whereas the change in P content was not obvious at this depth. At approximately the 60-cm soil depth, the contents of these four elements again fell to a relatively stable level, down to the soil layer at approximately 90-100 cm.
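The arithmetic behind such element fluxes can be sketched as below; this is one plausible formulation (sediment deposition rate × bulk density × mean content, with unit conversion), not necessarily the exact computation used in the study, and the bulk density is an assumed placeholder.

```python
# Minimal sketch of one way to estimate an element deposition flux:
# flux (g/yr/m^2) ~= rate (cm/yr) * bulk density (g/cm^3)
#                    * mean content (g/kg) * 10  (unit conversion factor).
rate_cm_yr = 1.08      # PA wetland sediment deposition rate (reported)
bulk_density = 1.40    # g/cm^3, assumed placeholder
mean_c_content = 3.91  # g/kg, mean SOC content in PA wetlands (reported)

flux_c = rate_cm_yr * bulk_density * mean_c_content * 10
print(f"Estimated C deposition flux: {flux_c:.1f} g/yr/m^2")
```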
Ecological Stoichiometric Ratios
The soil C/N ratios in the three wetlands remained relatively stable (Figure 4). PV wetlands showed a higher range of C/N ratios than PA and SS wetlands, indicating that the C/N ratios in the low-salinity wetland were higher than those in the middle- and high-salinity wetlands. The highest C/P ratios were observed in PA wetlands, followed by SS wetlands, while the lowest values appeared in PV wetlands. Both C/N and C/P ratios were higher in the surface soils of the three wetlands, and higher ratios also appeared at soil depths of 40-60 and 90-100 cm in PA and SS wetlands; this was not the case in PV wetlands. PA wetlands showed higher C/S ratios in the upper soils than in the deeper soils, while fluctuating changes were observed along the soil profiles in PV and SS wetlands. The C/N/P ratio of SS wetlands was significantly higher than that of PA wetlands, which in turn was significantly higher than that of PV wetlands (P < 0.05). N/P ratios decreased along the soil profiles in PV wetlands, whereas a fluctuation was observed in PA and SS wetlands, with higher ratios at soil depths of 40-60 and 90-100 cm.
Depth Distributions of the Contents of C, N, P and S Along Soil Profiles
Higher contents of C, N, P, and S were generally observed in the surface soils (0-10 cm) of the three wetlands, which was associated with exogenous inputs of plant litter, sedimentation by freshwater, and seawater input (Saintilan et al., 2013). The contents of C and N in the surface soils were higher than those in the deep soils of the coastal wetlands. The reason might be that surface soils are more easily affected by environmental changes (Salome et al., 2010). In addition, higher contents of C, N, and S in the three wetlands were also observed at soil depths of 40-60 and 90-100 cm, which was associated with historical inputs and sedimentation. Over the past years, historical plant death and sedimentation due to flooding or Yellow River runoff might have contributed to the elemental accumulation in deep soils. According to the annual Yellow River Yearbook, the Yellow River was diverted in 1964, which may have buried a large number of plants underground at that stage, to be converted into biogenic elements such as C, N, P, and S. Another reason might be related to the high content of silt and clay in these layers, since soil C content has a linear relationship with the clay + silt content in the YRD (Zhao et al., 2020). Additionally, microbial activity and leaching could also cause C, N, and S in the surface soil to migrate downward (Wang et al., 2010).
Soil Deposition and Stocks of C, N, P and S in Soil Profiles
Compared with PA and SS wetlands, PV wetlands showed a lower soil deposition rate of 0.38 cm/yr, which could be ascribed to strong erosion by river water in the PV wetland, whereas the higher sediment deposition in PA (1.08 cm/yr) and SS (1.06 cm/yr) wetlands reflects the flow-sediment regulation in July and sediment input by tidal flow. Our results fell within the range reported by DeLaune et al. (2003), who found that the deposition rate of Louisiana estuarine wetlands affected by freshwater input was 0.10-1.11 cm/yr. Ding et al. (2016) also reported that the deposition rate of tidal flats in the YRD was 0.58 cm/yr, which was lower than that of the PA and SS wetlands and higher than that of the PV wetland observed in this study. This indicated that tidal flats underwent much stronger hydraulic erosion by tidal flow than coastal salt marshes (Ding et al., 2016). Moreover, river flow might exert much stronger erosion than tidal flow in the study area.
In general, the stocks of C, N, P, and S in PA wetlands were higher than those in PV and SS wetlands. This might be associated with less erosion by runoff or tidal flow in PA wetlands. Moreover, the higher plant biomass of Phragmites australis and the higher litter inputs in PA wetlands than in PV and SS wetlands could explain the higher levels of these elements (Lu et al., 2018). Additionally, the higher salinity in SS wetlands may inhibit the accumulation of SOC (Zhao et al., 2017). The SOC stocks in this study were 1.47-1.89 kg/m² in the top 30 cm of soil, within the range of previous results (1.17-2.14 kg/m²) (Yu et al., 2013). Compared with other Chinese coastal wetlands, the SOC storage in the YRD was lower than that of the Chongming Dongtan coastal wetland (2.32 kg/m²) (Jiang et al., 2015) but higher than that of the Shanghai tidal flats (1.38 kg/m²) (Shi et al., 2010; Table 2). A possible explanation is that the YRD is a newly formed wetland; vegetation types and different runoff or tidal flow conditions would also lead to different SOC storage in different regions. The organic carbon storage in the YRD is close to that of new wetlands but much lower than that of mature wetlands on the eastern coast of the United States (Krull and Craft, 2009), because the vegetation of mature wetlands has gradually evolved into more advanced large plants and more SOC has been imported into the soil. This also indicates that wetland protection and restoration will help to improve the YRD's carbon storage.
The SOC in the three accumulative hotspots of PA wetlands accounted for 16.69% (0-10 cm), 25.30% (40-60 cm), and 11.32% (90-100 cm) of the total soil profile, and for 13.97% (0-10 cm), 24.60% (40-60 cm), and 12.97% (90-100 cm) in the SS wetlands. The proportion of SOC in the three accumulative hotspots of PA and SS wetlands exceeded 40% (PA: 53.31%; SS: 51.36%). The SOC stocks of PV wetlands in the top 10 cm accounted for 19.1% of the soil profile. SOC storage in the 0-10 cm hotspot followed the order PV > PA > SS, which may be related to differences in soil salinity, whereas in the deep hotspot (90-100 cm) the order was reversed (PV < PA < SS). These results suggest that increasing salinity might accelerate the downward migration of SOC.
The proportion of N in the three accumulative hotspots of PA and SS wetlands exceeded 40% (PA: 59.6%; SS: 52.8%), as did the proportion of S (PA: 52.8%; SS: 52.0%). The distribution of P was relatively uniform over the entire profile, with significant jumps only in some soil layers. Similar to the Chongming Dongtan wetlands (Jiang et al., 2015), the nitrogen content of the YRD also showed a peak at a depth of about 10 cm. In addition, according to the C/N comparison with other wetlands, the YRD is a nitrogen-restricted area (Cheng et al., 2021). Compared with PV wetlands, the stocks of C and S in both PA and SS wetlands increased significantly at soil depths below 40 cm, possibly because of higher historical inputs and underground seawater inputs. Additionally, the root inputs of these wetland plants are mainly concentrated above a depth of 40 cm, and the assimilation and utilization of nutrients by the root system may leave less C and S accumulated in the 0-40 cm soil layer.
The topsoil concentration factors of the three wetlands showed higher stocks of C, N, P, and S in the surface soils (0-10 cm), indicating that plant-soil cycling is the dominant factor influencing the biogeochemical processes of nutrients in the YRD, in agreement with the results of Bai et al. (2016). The topsoil concentration factors of C, P, and S in PV wetlands were higher than those of PA and SS wetlands, which may be associated with the effects of freshwater and seawater in PV and SS wetlands, respectively. The salinity of SS wetlands was higher than that of PV wetlands owing to tidal flooding, resulting in worse plant growth. Appropriate water conditions and lower salinity would have a positive effect on the topsoil concentration factor, since water and salinity conditions can greatly influence vegetation cover and plant growth. Therefore, SS wetlands had a lower surface enrichment factor than PV wetlands. Additionally, the higher salinity in SS wetlands might enhance the leaching of soil nutrients.
Changes in Ecological Stoichiometric Ratios Along Soil Profiles
The soil C/N ratio has been proven to be an important indicator of the ability of microorganisms to decompose soil organic matter (Canfora et al., 2017). A C/N ratio below 9 represents a stage of accelerated decomposition of soil organic matter; a C/N ratio between 9 and 11 indicates a dynamic equilibrium stage; and a C/N ratio above 11 indicates the process of complete humification (Thomsen et al., 2008). Parolari and Porporato (2016) also proposed that when C/N > 10, the mineralization of SOC begins to be restricted. Generally, the C/N ratios in this study were within the range of 10-50, except for a few low values. This indicates that soil organic matter in the YRD is in the process of complete humification and that the mineralization of organic carbon in this area has been inhibited to a certain extent. Cleveland and Liptzin (2007) observed that the soil C/N ratio was relatively consistent across various ecosystem types, although soils are characterized by high biological diversity, structural complexity, and spatial heterogeneity (Yang et al., 2010). Our results showed that the C/N ratio in the surface soils was higher than that in the deeper soils of PV and SS wetlands (P < 0.05). Although not always statistically significant, the soil C/N ratios tended to decrease with soil depth in PV and SS wetlands, whereas they were relatively unstable in the PA wetland.
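The interpretation thresholds cited above (Thomsen et al., 2008) can be expressed as a small classifier; this is a minimal sketch, and the function name is ours.

```python
# Minimal sketch of the C/N interpretation thresholds (Thomsen et al., 2008).
def interpret_cn_ratio(cn: float) -> str:
    if cn < 9:
        return "accelerated decomposition of soil organic matter"
    elif cn <= 11:
        return "dynamic equilibrium"
    else:
        return "complete humification (mineralization increasingly restricted)"

for ratio in (7.5, 10.0, 25.0):
    print(ratio, "->", interpret_cn_ratio(ratio))
```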
The C/P ratio is usually an indicator of the mineralization capacity of soil organic phosphorus, and the C/S ratio represents the effect of the site's microbial biomass on the availability of soil S (Heinze et al., 2010). When the C/P and C/S ratios in an area are high, the limitation of P and S during the decomposition of soil organic matter is not conducive to plant growth in that area. In contrast, low C/P and C/S ratios contribute to the release of nutrients during the decomposition of organic matter and increase the available P and S levels in the soil. In addition, Reddy and DeLaune (2008) reported that net S mineralization occurs when the C/S ratio is less than 200, whereas net S immobilization occurs when the C/S ratio is in excess of 400. In this study, the low C/P (2-7) and C/S (10-35) ratios indicated that mineralization of P and S, rather than immobilization, is the main process in the YRD (Figure 4), resulting in lower accumulation in this region. Compared with other coastal wetlands in China, the soil nutrients in the YRD are relatively low (Zhang et al., 2012, 2013), indicating that the exchange of carbon and nutrients between the YRD and the external environment might be very active (Cao et al., 2015). Soil N/P and C/N/P ratios can also be used as indicators of the type of nutrient limitation. In the current study, the N/P ratios in the three wetlands were less than 1.2, similar to the results of Gao et al. (2012), and the C/N/P ratios were between 0 and 0.1. Therefore, the YRD is a nitrogen-limited area, and the accumulated SOC content in this area was relatively low compared with N and P.
CONCLUSION
The deposition and ecological stoichiometry of C, N, P, and S were investigated in coastal wetlands along a sampling belt of the YRD. The results showed three accumulative hotspots of C, N, and S in the soil profiles of PA and SS wetlands (0-10, 40-60, and 90-100 cm) and one accumulative hotspot of P in SS (10-20 cm) and PA (80-90 cm) wetlands. In PV wetlands, one accumulative hotspot of C, P, and S appeared in the top 10 cm, whereas that of N appeared at the soil depth of 90-100 cm. Generally, owing to river erosion, higher soil deposition rates and deposition fluxes of C, N, P, and S were observed in PA and SS wetlands than in PV wetlands. Topsoil concentration factors showed that the area was mainly affected by the input of plant residues, while the leaching effect gradually strengthened with increasing salinity. The ecological stoichiometry further verified that the coastal wetlands of the YRD are nitrogen-limited ecosystems in a stage of accelerating decomposition of soil organic matter. Ameliorating salinization in this area, increasing the vegetation coverage of the YRD, and reducing river erosion of the soil will not only help to improve the ecological stability of the area but also increase the blue carbon storage of the YRD and make better use of its carbon sink function.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material. Further inquiries can be directed to the corresponding authors.
AUTHOR CONTRIBUTIONS
SD and CW contributed to the writing of the manuscript. SD, QZ, and JJ involved in the field work and data collection. JB and QZ contributed to concept of study. JB, CW, YG, JJ, GZ, and CY provided important guidance on methods and writing. All authors contributed critically to drafts and gave approval for publication.
FUNDING
This study was financially supported by the Joint Funds of the National Natural Science Foundation of China (No. U2006215) and the National Natural Science Foundation of China (No. 42107490).
Uncomplicated Urinary Tract Infections and Antibiotic Resistance—Epidemiological and Mechanistic Aspects
Uncomplicated urinary tract infections are typically monobacterial and are predominantly caused by Escherichia coli. Although several effective treatment options are available, the rates of antibiotic resistance in urinary isolates of E. coli have increased during the last decade. Knowledge of the actual local rates of antibiotic-resistant pathogens as well as the underlying mechanisms, in addition to the geographical location and the health state of the patient, is important for choosing the most effective antibiotic treatment. Recommended treatment options include trimethoprim alone or in combination with sulfamethoxazole, fluoroquinolones, β-lactams, fosfomycin trometamol, and nitrofurantoin. Three basic mechanisms of resistance to all antibiotics are known, i.e., target alteration, reduced drug concentration, and inactivation of the drug. These mechanisms, alone or in combination, contribute to resistance against the different antibiotic classes. With increasing prevalence, combinations of resistance mechanisms leading to multiple drug resistant (mdr) pathogens are being detected and have been associated with reduced fitness under in vitro conditions. However, mdr clones among clinical isolates such as E. coli sequence type 131 (ST131) have successfully adapted in fitness and growth rate and are rapidly spreading as a worldwide predominant clone of extraintestinal pathogenic E. coli.
Introduction
Uncomplicated urinary tract infections are among the most common infectious diseases in the community and occur in patients without any anatomic or functional abnormality. About 50% to 70% of all women acquire such an infection at least once during their life [1]. Data on the prevalence and antibiotic resistance of the bacteria causing these infections are difficult to obtain, as these infections are treated empirically without bacteriological testing. However, knowledge of the pathogens and their sensitivities to the most commonly used antibiotics is essential for successful treatment and helps avoid the development of resistance [2].
More than 90% of uncomplicated urinary tract infections are monomicrobial [3]. They are mainly (between 85% and 90%) caused by E. coli and, to a lesser extent, by other Enterobacteriaceae, enterococci, and staphylococci [2]. The recent ECO.SENS study [4] reports a frequency of E. coli of 74.2% in patients from Austria, Greece, Portugal, Sweden, and the UK. Similar frequencies for E. coli are reported by the ARESC study [5] and by the findings of Dong Sup Lee et al. [6]. In addition, these studies report 3.4% and 2.3% P. mirabilis, 4.1% and 5.6% enterococci, 3.5% and 4.7% K. pneumoniae, and 1.1% and 2.3% Enterobacter spp.; other bacteria were found at 11.2% and 7%, respectively. However, these data are derived from designed studies and have to be interpreted with caution. First, microbiological testing is usually not performed for patients suffering from uncomplicated urinary tract infections. Second, it is difficult to obtain reliable local data on the incidence of resistant strains from such patients. Third, different surveillance systems use neither identical methodology to measure susceptibility nor identical breakpoints for classifying bacteria as resistant or sensitive. Besides laboratory data on the in vitro susceptibility of the presumptive causative agent, antibiotic stewardship, which uses data not only on the local and global epidemiology of antibiotic resistance but also on the potential impact of antibiotics on the patient's microflora, provides a rationale for choosing an appropriate antibiotic for treatment. The aim of this review is to provide an overview of the epidemiology and mechanisms of resistance for antibiotics frequently used in the treatment of uncomplicated urinary tract infections.
Therapeutic Options
Besides pharmacodynamic, pharmacokinetic, and tolerability aspects, developing or already existing resistance to the drug chosen for treatment is the most important consideration when selecting an antibiotic. Furthermore, the activity of the drug against the resident bowel flora, as well as the effect of the duration of treatment on the probability of developing resistant bacteria, may have an important impact [1].
Considering the aforementioned factors, therapeutic options for the treatment of uncomplicated, community-acquired urinary tract infections have been developed, based on a few well-established antibiotics for oral application. The treatment of acute uncomplicated cystitis as recommended by the guidelines of the European Association of Urology (EAU) [7] includes fosfomycin trometamol, pivmecillinam, and nitrofurantoin as first-line therapy. As alternative therapy, fluoroquinolones, cefpodoxime proxetil, cotrimoxazole, and trimethoprim are possible options if the local resistance rate is below 20%. These recommendations should be adjusted according to the geographical location, age, and sex of the patient, as well as other diseases. A similar restriction applies to β-lactam antibiotics such as amoxicillin, amoxicillin/clavulanic acid, and pivmecillinam, which are used in some countries. The recommended duration of treatment takes into account the time necessary for effectiveness and the risk of resistance development as a result of prolonged selective pressure. Usually, the duration of treatment is three days; fosfomycin is even given as a single dose, whereas amoxicillin and nitrofurantoin require five- to seven-day treatment [8].
Antibiotic Resistance-Genetic and Mechanistic Basis
The development of antibiotic-resistant mutants from susceptible cells is driven by two characteristic features of prokaryotic cells: a high growth rate and a haploid genome. The rapid growth rate yields a large population within a short period of time, which increases the probability of yielding one mutant cell within typically 10^8 cells through the fortuitous acquisition of a resistance mutation [9]. Slow-growing bacteria, such as Mycobacterium tuberculosis, overcome their growth deficiency by expressing immune evasion mechanisms that ensure the undisturbed local persistence of populations over periods of time long enough to develop resistant mutants [10].
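A worked sketch of this population-size argument: with a per-cell mutation probability mu, the chance that a population of N cells contains at least one resistant mutant is 1 - (1 - mu)^N; the numbers below are illustrative.

```python
# Minimal sketch of the population-size argument for resistance emergence.
mu = 1e-8  # typical per-cell probability of a resistance mutation (illustrative)
for n_cells in (1e6, 1e8, 1e10):
    p_at_least_one = 1 - (1 - mu) ** n_cells
    print(f"N = {n_cells:.0e}: P(>=1 resistant mutant) = {p_at_least_one:.3f}")
```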
The haploid genome structure allows the immediate expression of a mutant genotype in bacteria. Besides mutations, which alter existing genetic material and can result in antibiotic resistance, another genetic strategy for acquiring antibiotic resistance is the transfer of genetic material encoding resistance between cells of mixed bacterial populations by transformation of naked DNA, conjugation of plasmid DNA via cell-cell contact, or phage-mediated transduction [11]. Either type of genetic alteration can result in one of three basic biochemical mechanisms of resistance: a reduction in the affinity of the target for the antibiotic, a reduction in the concentration of the drug at the target site, or enzymatic inactivation of the drug. During evolution under selective pressure, bacterial cells have developed numerous variations of these three basic mechanisms, which alone or in combination can result in clinically relevant resistance to specific or even all known antibiotics [12]. However, the acquisition of antibiotic resistance can be associated with reduced fitness/virulence of the resistant cell owing to impairment of the normal function of the affected target or to overexpression of a specific resistance gene [13,14]. While the resistant mutant has a growth advantage in the presence of the selecting antibiotic, in its absence, i.e., after therapy is completed, reduced fitness can turn this into a disadvantage. Over time, antibiotic-resistant but less fit mutants can acquire additional genetic alterations that compensate for the fitness reduction. Finally, the combination of resistance and compensatory mutations can give rise to well-adapted clones capable of spreading among host populations.
Resistance to Sulfonamides and Trimethoprim-Epidemiology and Mechanisms
The use of sulfonamides alone or in combination with the dihydropyrimidine derivative trimethoprim has a long history in the treatment of uncomplicated urinary tract infections. Since these infections are often treated empirically without susceptibility testing, only few data on resistance of the pathogens are available. The percentage of E. coli strains resistant to cotrimoxazole varies with the geographical location of the patients: 35.9% in Korea [6] and 25.4% [15], 16.1% [4], and 12.2% [16] in Europe. Recent data from Kahlmeter et al. [4] give country-specific rates for Austria, Greece, Portugal, Sweden, and the UK; these authors also described changing incidences between the first study in 1999-2000 and the second study in 2008: Portugal showed a drop from 26.7% to 16.7%, and Sweden an increase from 8.3% to 16.3% resistant strains. Data for sulfonamides and trimethoprim alone are scarce (24.8% and 16.7%, respectively [6]).
Sulfonamides such as sulfamethoxazole (SMX) and dihydropyrimidines such as trimethoprim (TMP) target dihydropteroate synthetase (DHPS, the sul gene product) and dihydrofolate reductase (DHFR, the dfr gene product), respectively. These enzymes catalyze two successive steps in the bacterial biosynthesis of folic acid [17]. Recent crystallographic data revealed that a conserved binding pocket of DHPS for the natural substrate para-aminobenzoic acid (p-ABA) is formed only in an intermediate reaction step during the catalytic cycle; the covalent bond formed between p-ABA and a pteridine cation yields 7,8-dihydropteroate [18]. In an analogous mode of action involving dynamic conformational changes from a closed to an occluded state, DHFR catalyzes the reduction of the substrate dihydrofolate to tetrahydrofolate (THF) using nicotinamide adenine dinucleotide phosphate (NADPH) as a cofactor [19]. Inhibition of either enzyme by sulfonamides or trimethoprim causes a shortage of folic acid, an essential cofactor for the biosynthesis of purine nucleotides, thymine, nucleic acids, and serine. As a consequence, DNA replication stops, and this event finally causes cell death.
Resistance to SMX and TMP can easily be selected in vitro from susceptible strains of E. coli, and the resulting mutants were shown to have acquired mutational alterations of the chromosomal sul and dfr genes. Such modifications have also been identified as a cause of primary sulfonamide resistance in the chromosomal sul genes of naturally competent pathogens such as Streptococcus pneumoniae, Campylobacter jejuni, Neisseria meningitidis, and Neisseria gonorrhoeae, which are capable of taking up foreign DNA fragments released from dead cells and subsequently integrating them into their chromosome. Comparison of sul and dfr gene sequences from isolates of these species suggests horizontal transfer of antibiotic-resistant gene copies and subsequent integration of resistance-determining sequences from the acquired gene into homologous regions of the chromosomal copy. The resulting sequences form so-called mosaic genes carrying a central antibiotic-resistant gene fragment of acquired DNA flanked by regions of the resident chromosomal gene copy [20].
In contrast, the most frequent mechanism of resistance in clinical isolates of E. coli and other enterobacteria from urinary tract infections is the acquisition of resistant variants of complete sul and dfr genes expressing enzyme variants that are refractory to the inhibitory activity of the respective drug at clinically achievable concentrations. More than 30 trimethoprim-resistant dfr gene variants and three sulfonamide-resistant sul gene variants have been described so far [20]. Many of these are encoded by mobile genetic elements residing on transferable plasmids in combination with other resistance genes; as a consequence, multiple resistance genes are co-transferred en bloc. An unusual genetic constellation has been detected in sulfonamide-resistant E. coli isolates belonging to clonal group A, recovered from different US regions: all isolates carry a genomic resistance module consisting of several resistance genes integrated at a specific chromosomal locus [21].
Resistance to Fluoroquinolones-Epidemiology and Mechanisms
Resistance to fluoroquinolones in E. coli is quite high in many European countries, ranging from 25% to 50% [22]. The prevalence of resistant strains in uncomplicated urinary tract infections, according to the few surveillance data available, was reported to be much lower, i.e., 28.2%, 13.9%, 3.9%, and 1.0% according to references [4,6,15,16], respectively. However, Kahlmeter [4] again reported an increase from 2000 to 2008 in most countries: in Austria (0% to 4.1%), Greece (1.5% to 5.7%), Portugal (5.8% to 7.6%), Sweden (0% to 2.5%), and the United Kingdom (0.5% to 0.6%). According to EARS-Net, the incidence of resistant strains isolated from blood in 2008 was 22.9% in Austria, 22.4% in Greece, 28.6% in Portugal, 10.3% in Sweden, and 15.1% in the United Kingdom. The source of infection in uncomplicated UTIs is the bowel flora of the patient or the sexual partner, and resistance in this setting is much lower than in surveillance data from hospitals.
Fluoroquinolones were first introduced into clinical use in 1985. The high clinical efficacy of the orally available fluoroquinolones norfloxacin, ofloxacin, and ciprofloxacin, together with the initially very low incidence of resistance in E. coli and many other Gram-negative pathogens, rapidly resulted in widespread empirical use for the treatment of urinary tract infections. This high efficacy is due to the high affinity and inhibitory activity of the drugs for their target topoisomerases, gyrase and topoisomerase IV, which are A2B2 tetrameric enzymes sharing high structural and functional homology. The consequences of irreversible enzyme inhibition are the arrest of replicative DNA metabolism and subsequent cell death due to secondary bactericidal mechanisms, such as the introduction of DNA double-strand breaks following the inhibition of DNA gyrase [23,24].
The development of clinically relevant resistance to fluoroquinolones in E. coli has been investigated intensively and has been demonstrated to require multiple mutation steps involving alterations in conserved regions of both chromosomally encoded target gene pairs, gyrA/parC and gyrB/parE, encoding subunits A and B of gyrase and topoisomerase IV, respectively. These alterations result in a reduced affinity of the drugs for the mutated target [25]. In addition, a reduced drug concentration at the target site has been associated with chromosomal mutations that reduce the amount of the outer membrane porin OmpF, a water-filled transmembrane channel that allows water-soluble small molecules such as fluoroquinolones to enter the cell, with increased expression of the multiple drug resistance (MDR) efflux pump AcrAB-TolC, which actively exports antibiotics of different classes out of the cell, or with a combination of both. The latter mechanism is due to genetic alterations inactivating the chromosomally encoded local (AcrR) or global negative regulators (MarR, SoxR, and RamR) that control the expression of the MDR efflux pump AcrAB-TolC. Global regulators have been demonstrated to simultaneously control the expression of the major porin OmpF via an antisense RNA switch.
Besides these mechanisms, several non-target-based mechanisms of plasmid-mediated quinolone resistance (PMQR) have been detected during the last decade. A PMQR mechanism alone mediates only low-level fluoroquinolone resistance, resulting in MIC increases below the breakpoint. PMQR mechanisms are subdivided into (I) mechanisms associated with reduced drug concentration at the target site due to the expression of the plasmid-encoded quinolone efflux pumps QepA or OqxAB; (II) mechanisms protecting the target site, as determined for the different Qnr proteins belonging to the pentapeptide repeat protein families QnrA, QnrB, QnrC, QnrD, and QnrS; and (III) a mechanism exclusively inactivating the C7-piperazinyl-substituted fluoroquinolones norfloxacin and ciprofloxacin by acetylation. This unique enzyme is derived from the aminoglycoside-modifying acetyltransferase AAC(6')-Ib by the acquisition of two point mutations that extend its substrate spectrum to two different antibiotic classes [26].
While PMQR mechanisms are reported with increasing prevalence, also in E. coli isolates causing UTI in humans and animals, their impact on clinically relevant resistance is lower than that of mutations affecting the target topoisomerases gyrase and topoisomerase IV; in combination, however, PMQR mechanisms contribute to an increase in the resistance level [27]. In addition, a possible role of PMQR as a pacemaker of the development of clinical resistance to fluoroquinolones is being discussed. This view is supported by in vitro studies demonstrating an impact of qnr genes in E. coli isolates from UTI on fluoroquinolone activity in an in vitro model [28] as well as in a mouse infection model [29,30]. Prior use of fluoroquinolones has been identified as a relevant risk factor for the development of clinically relevant resistance to these drugs [31].
Mechanisms of Resistance to β-Lactam Antibiotics-Epidemiology and Mechanisms
For the treatment of uncomplicated urinary tract infections (uUTI), pivmecillinam (PIV), the prodrug of the active compound mecillinam, is recommended as a first-line drug in several countries [1]. PIV shows good clinical cure rates against Gram-negative pathogens, including E. coli expressing extended-spectrum β-lactamases (ESBL), such as ST131 isolates encoding CTX-M-14 or CTX-M-15 [32]. Resistance to cefotaxime or ceftazidime can therefore be used as a good marker for ESBL prevalence. Such data obtained from the ECO.SENS II study revealed an increasing but still low prevalence of ESBL-producing E. coli from uUTI in Europe [4]. Because of the high concentrations of β-lactam drugs in the bowel and the relatively long treatment period, all β-lactams exert a high degree of selection pressure. The prevalence of amoxicillin-resistant E. coli strains is far above 20%, so this drug should not be used for the treatment of these infections at all; the addition of clavulanic acid restores susceptibility in most strains. Data from Kahlmeter [4] demonstrate that the percentage of amoxicillin/clavulanic acid-resistant E. coli strains differs from country to country: Austria, 8.9%; Greece, 4.3%; Portugal, 6.9%; Sweden, 2.5%; and United Kingdom, 2.0%. Data from other studies also show varying proportions of E. coli isolates from uncomplicated urinary tract infections resistant to amoxicillin versus amoxicillin plus clavulanic acid, such as 63.6%/5.5% [6], 42.4%/7.5% [15], 28.0%/4.5% [4], and 35.5%/1.5% [16].
The predominant mechanism of resistance to β-lactam antibiotics in E. coli and most other Gram-negatives is the production of a β-lactamase, which enzymatically inactivates β-lactam antibiotics by hydrolysis of the essential β-lactam ring. As a consequence, the β-lactam can no longer bind to its targets, the cell wall-synthesizing transpeptidases/transglycosylases, also designated penicillin-binding proteins (PBPs). According to their active-site architecture, β-lactamases belong either to the serine protease type or to the metallo-enzyme type. The catalytic site of the serine protease type is formed by a conserved triad of the amino acids aspartate, histidine, and serine [33], whereas that of a metallo-enzyme is composed of a central catalytically active Zn2+ ion chelated by a set of four conserved histidine and/or cysteine residues [34]. While some enterobacterial genera encode a chromosomal β-lactamase active against a broad spectrum of β-lactam antibiotics whose expression can be induced by β-lactams, the clinically most relevant enzymes are plasmid-encoded extended-spectrum β-lactamases (ESBL), serine β-lactamases that hydrolyze most β-lactams with the exception of carbapenems but can be inhibited by β-lactamase inhibitors such as clavulanic acid [35].
Many ESBL enzymes belonging to class A are grouped into one of three major families: TEM, SHV, and CTX-M. Within each family, a high degree of DNA and amino acid sequence homology is found, and individual family members differ from their parent enzyme, which mediates only broad-spectrum activity, by a few point mutations. These mutations affect regions associated either with the access of the drug to the binding pocket containing the active-site serine or with the kinetic properties of the enzyme, resulting in accelerated drug inactivation. The resulting enzymes mediate resistance to extended-spectrum β-lactams, with resistance to cefotaxime and ceftazidime serving as a good marker for an ESBL phenotype [36]. Besides these "classical" ESBL enzymes, several new derivatives of class C and class D β-lactamases have evolved that share characteristics of an ESBL phenotype [37]. Despite the increasing prevalence of ESBL-producing E. coli in UTI, the clinical efficacy of β-lactams in the treatment of acute uncomplicated urinary tract infections is often less affected by this resistance mechanism, presumably because of the high drug concentrations in urine [32]. E. coli strains belonging to the epidemiologically dominant sequence type 131 (ST131) express a multiple drug resistance phenotype, which includes sulfonamide/trimethoprim and fluoroquinolones in addition to β-lactams, owing to the acquisition of a plasmid carrying the blaCTX-M-15 ESBL gene [37].
Resistance to Fosfomycin: Epidemiology and Mechanisms
Fosfomycin is rarely used in systemic clinical settings; because the mutation rate generating resistant mutants is extremely high, it is used in severe infections only in combination with other potent drugs such as third-generation cephalosporins, carbapenems, or aminoglycosides. For uncomplicated urinary tract infections, however, the monobasic, water-soluble fosfomycin salt fosfomycin trometamol was specifically designed. After oral application, fosfomycin achieves a urinary concentration which does not allow resistant mutants to grow. It can therefore be used as a single drug in this indication.
To date, resistance rates to fosfomycin remain low: 1.2% and 0.2% according to [4] and [16], respectively. Kahlmeter [4] found a low prevalence of fosfomycin resistance in all European countries involved, for example Austria 0.7%, Greece 2.9%, Portugal 0.7%, Sweden 1.5%, and the United Kingdom 1.5%. A systematic review of clinical studies on the incidence of fosfomycin resistance in clinical isolates from complicated and uncomplicated urinary tract infections shows less than 8% resistance overall [38].
Fosfomycin is a potent inhibitor of the murA gene product, UDP-N-acetylglucosamine enolpyruvyl transferase, which catalyzes an essential step in the synthesis of UDP-N-acetylmuramic acid, an essential building block for peptidoglycan synthesis. Inhibition of MurA effectively stops the biosynthesis of the bacterial cell wall. However, to inhibit its intracellular target MurA, fosfomycin has to pass the cell membrane. In E. coli this is achieved by an active uptake process involving one of two fosfomycin influx transporters, the glycerol-3-phosphate transporter (the glpT gene product) and the hexose phosphate transporter (the uhpT gene product) [39]. Different mechanisms of resistance to fosfomycin have been identified in in vitro studies. In E. coli these include a point mutation in murA changing Cys115 to Asp, thereby preventing the covalent binding of fosfomycin to its target [40]. Variations at this site have been detected in species such as M. tuberculosis, Chlamydia trachomatis, and Vibrio fischeri, which are naturally less susceptible to fosfomycin. Another mechanism involves mutations in the genes uhpT and glpT, which result in reduced drug uptake [39]. However, these chromosomal mutations selected in vitro are associated with reduced fitness. This provides a plausible explanation for the observed low incidence of such mutants in clinical isolates from urinary tract infections compared with the relatively frequent isolation of resistant mutants in vitro [41]. However, recent data report the occurrence of clinical isolates in Asia with transferable resistance to fosfomycin. The underlying mechanism is the expression of a glutathione-S-transferase activity which inactivates fosfomycin. Several genes encoding such an activity have been identified on resistance plasmids, frequently associated with a gene encoding a CTX-M-type β-lactamase [42,43].
Resistance to Nitrofurantoin: Epidemiology and Mechanisms
Nitrofurantoin is the class representative of the nitrofurans. After intracellular activation by bacterial nitroreductases, it shows excellent bactericidal activity against E. coli and still good, although lesser, activity against many other enterobacterial pathogens. The advantage of an as yet extraordinarily low incidence of resistance (<3%) is partially offset by an increased risk of severe side effects, such as lung and liver toxicity [44].
Epidemiological Aspects of Multiple Drug Resistance in Urinary Tract Infections
During the last decade, the isolation of ESBL-producing E. coli isolates belonging to the O25b:H4 serotype from urinary tract infections has increasingly been reported from all over the world. Detailed molecular research revealed that these isolates predominantly belong to ST131 and the phylogenetic group B2. Remarkable are the high incidence of fluoroquinolone resistance due to chromosomal mutations in both target topoisomerases [45] and the presence of a conjugative plasmid typically carrying a CTX-M-type β-lactamase such as CTX-M-15, -1, or -14 [46,47]. A genome analysis of a set of ST131 isolates provided further evidence for a single clone which has spread among patients suffering from community-acquired urinary tract infections within a large UK region and has split into a few molecular subgroups, presumably due to the acquisition of resistance plasmids mediating varying patterns of antibiotic resistance including β-lactams, trimethoprim/sulfonamide, tetracycline, or gentamicin [48]. Besides this resistance profile, ST131 cells seem to express specific virulence traits allowing them to spread successfully among patients, not only in hospitals but also in nursing homes. The detection of ST131 in animals varies between studies, suggesting that animals do not play a significant role as reservoirs for human infections. Although local epidemiological data from the Indian subcontinent are limited [49], travel to Pakistan and India is suspected to be a potential risk factor for the acquisition of multidrug-resistant E. coli ST131 [47]. Thus, the epidemiological surveillance of this pandemic clone requires special attention in the future and may have an impact on the empirical treatment of uncomplicated urinary tract infections.
Childhood Malnutrition in India
India is home to 46.6 million stunted children, a third of the world's total as per the Global Nutrition Report 2018. Nearly half of all under-5 child mortality in India is attributable to undernutrition. No country can hope to attain its economic and social development goals without addressing the issue of malnutrition. Poor nutrition in the first 1000 days of a child's life can also lead to stunted growth, which is associated with impaired cognitive ability and reduced school and work performance. Malnutrition in children occurs as a complex interplay among various factors like poverty, maternal health illiteracy, diseases like diarrhoea, home environment, dietary practices, hand washing and other hygiene practices, etc. Low birth weight, an episode of diarrhoea within the last 6 months and the presence of developmental delay are often associated with malnutrition in most developing nations, including India. This chapter is a small attempt to highlight the state of malnutrition in India and to gain insight into how to overcome the problem. The chapter also highlights the issues and challenges behind the failure to obtain the desired nutritional outcomes, and it argues that the issue can be addressed by adopting a comprehensive, coordinated and holistic approach with good governance and the help of civil society.
Introduction
'Good nutrition allows children to survive, grow, develop, learn, play, participate and contribute-while malnutrition robs children of their futures and leaves young lives hanging in the balance'.
Adequate nutrition is essential for human development. Malnutrition includes both undernutrition and overnutrition and refers to deficiencies, excesses or imbalances in the intake of energy, protein and/or other nutrients. The benefits of good health are perceived not only at the individual level but also at the societal and national levels. The health of an individual is determined by the interplay of various factors: social, economic, dietary, lifestyle-related and environmental factors, government policies, political commitment, etc. [1]. The foundation of an individual's health is laid in the early phase of life. It is a well-known fact that in some developing nations, India being one of them, nearly half of all deaths among children under 5 years of age are attributable to poor nutrition. It is quite difficult for the poor to bear the cost of treatment, especially sudden out-of-pocket expenditures [2]. A dissimilar trend is observed among individuals of affluent society: sedentary habits coupled with unhealthy food habits result in weight gain. Health experts refer to both of these conditions as malnutrition. The irony is that India, the world's second largest food producer, is also home to the largest number of undernourished children in the world.
It is well acknowledged that investment in human resource development is a prerequisite for any nation to progress. In 2012, while releasing the HUNGaMA (Hunger and Malnutrition) Report 2011, the then prime minister of India, Dr. Manmohan Singh, expressed dismay at the 'unacceptably high' levels of malnutrition despite high and impressive GDP growth and said it was a matter of 'national shame'. Being a renowned economist, he also observed that 'the health of our economy and society lies in the health of this generation [3]. We cannot hope for a healthy future for our country with a large number of malnourished children'.
India is home to 46.6 million stunted children, a third of the world's total as per the Global Nutrition Report 2018. Nearly half of all under-5 child mortality in India is attributable to undernutrition. Children of today are citizens of tomorrow, and hence improving the nutritional status of children becomes extremely important. Early childhood constitutes the most crucial period of life, when the foundations are laid for cognitive, social, emotional, language and physical/motor development and cumulative lifelong learning.
Recently, the Millennium Development Goals (MDGs) have been transformed into the Sustainable Development Goals (SDGs), and maternal and child health (MCH) has received attention in the last two decades as never before. Adequate nutrition has always been a definitive tool for achieving the maternal and child health targets. Nutrition is defined as the science of food and its relationship with health. Nutrition is a basic human need and a prerequisite for a healthy life. A proper diet is essential from the very early stages of life for growth, development and a state of overall well-being. Food consumption, which largely depends on production and distribution, determines the nutrition and health of the population. Apart from supplying nutrients, food provides other components (non-nutrient phytochemicals) which have a positive impact on health.
Methods for the literature review
We searched PubMed, the Google search engine and other databases on the internet for relevant literature. We searched the reference lists of all primary and review articles based on the key words 'childhood malnutrition, determinants, diarrheal diseases, India, problem burden, intervention strategies and control program'. Apart from that, databases of government-run nutritional programmes, critical reviews and analyses of these programmes, and related published books were also studied. In a few instances, stakeholders of nutritional programmes were also consulted. Relevant data were collected, summarized and analysed.
Meaning of malnutrition
Malnutrition is a term that refers to any deficiency, excess or imbalance in a person's intake of energy and/or nutrients. In simple words, malnutrition can be due either to an inadequate intake or to an excess intake of calories. The term covers two broad groups of conditions, namely undernutrition and overnutrition. One is 'undernutrition', which includes stunting (low height for age), wasting (low weight for height), underweight (low weight for age) and micronutrient deficiencies or insufficiencies (a lack of important vitamins and minerals). The other comprises overweight, obesity and diet-related non-communicable diseases (such as heart disease, stroke, diabetes and cancer).
Stunting refers to a child who is too short for his or her age. These children can suffer severe irreversible physical and cognitive damage that accompanies stunted growth. The devastating effects of stunting can last a lifetime and even affect the next generation.
Wasting refers to a child who is too thin for his or her height. Wasting is the result of recent rapid weight loss or the failure to gain weight. A child who is moderately or severely wasted has an increased risk of death, but treatment is possible.
Overweight refers to a child who is too heavy for his or her height. This form of malnutrition results from energy intakes from food and beverages that exceed children's energy requirements. Overweight increases the risk of diet-related non-communicable diseases later in life.
Why childhood malnutrition matters to us
Malnutrition is a universal problem that has many forms. No country is untouched. It affects all geographies, all age groups, rich people and poor people and all sexes. All forms of malnutrition are associated with various forms of ill health and higher levels of mortality. Undernutrition explains around 45% of deaths among children under-5, mainly in low and middle-income countries.
As far as the adverse effects of child malnutrition are concerned, growth failure and infections are quite important. Malnourished children do not attain their optimum potential in terms of growth and development, physical capacity to work and economic productivity in the later phase of life. It is commonly observed that school absenteeism is much higher among such children, which leads to poor performance in class. Cognitive impairment resulting from malnutrition may result in diminished productivity. Apart from this, undernutrition increases the risk of infectious diseases like diarrhoea, measles, malaria and pneumonia, and chronic malnutrition can impair a young child's physical and mental development. As per World Bank estimates, childhood stunting may reduce adult height by 1%, which may further reduce an individual's economic productivity by 1.4% [4].
Micronutrient deficiencies can lead to poor health and development, particularly in children. Overweight and obesity can lead to diet-related noncommunicable diseases such as heart disease, high blood pressure (hypertension), stroke, diabetes and cancer.
Malnutrition is also a social and economic problem, holding back development across the world with unacceptable human consequences. Malnutrition costs billions of dollars a year and imposes high human capital costs, direct and indirect, on individuals, families and nations. Estimates suggest that malnutrition in all its forms could cost society up to US$3.5 trillion per year, with overweight and obesity alone costing US$500 billion per year [5]. The consequences of malnutrition are increases in childhood death and future adult disability, including diet-related non-communicable diseases (NCDs), as well as enormous economic and human capital costs [6]. According to UNICEF, one in three malnourished children in the world is Indian. It is estimated that reducing malnutrition could add some 3% to India's GDP.
The consequences of the problem
• This inter-generational cycle of undernutrition, transmitted from mothers to children, greatly impacts India's present and future. Undernourished children are much more likely to suffer from infection and die from common childhood illnesses (diarrhoea, pneumonia, measles, malaria) than well-nourished children.
• According to recent estimates, more than a third of all deaths in children aged 5 years or younger are attributable to undernutrition.
• Undernutrition puts women at a greater risk of pregnancy-related complications and death (obstructed labour and hemorrhage).
• Undernourished boys and girls do not perform as well in school as compared to their well-nourished peers, and as adults they are less productive and make lower wages.
• Widespread child undernutrition greatly impedes India's socio-economic development and potential to reduce poverty [7].
Measurement of malnutrition
Underweight is defined as a weight that is more than 2 standard deviations below the WHO child growth standards median for that particular age. In other words, a child is underweight if the Z-score of the child's weight for age is less than −2 SD from the median of the WHO/NCHS Child Growth Standards or References.
Wasting is defined as a loss of body weight with reference to height. In other words, a child is wasted if the Z-score of the child's weight for height is less than −2 SD from the median of the WHO/NCHS Child Growth Standards or References.
Wasting is also known as 'acute malnutrition' and is characterized by a rapid deterioration in nutritional status over a short period of time in children under 5 years of age. In children, it can be measured using the weight-for-height nutritional index or mid-upper arm circumference (MUAC). There are different levels of severity of acute malnutrition: moderate acute malnutrition (MAM) and severe acute malnutrition (SAM).
Stunting is defined as a height that is more than 2 standard deviations below the WHO child growth standards median. In other words, a child is stunted if the Z-score of the child's height for age is less than −2 SD from the median of the WHO/NCHS Child Growth Standards or References.
Stunting is also known as 'chronic undernutrition', although chronic undernutrition is only one of its causes. Stunting is often associated with cognitive impairments such as delayed motor development, impaired brain function and poor school performance.
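These definitions reduce to simple arithmetic once the reference median and standard deviation for a child's age and sex are known. The Python sketch below shows that calculation; the reference values are hypothetical placeholders, since real analyses must use the age- and sex-specific WHO Child Growth Standards tables (which also apply a skewness correction, the LMS method, omitted here).

```python
def z_score(observed: float, median: float, sd: float) -> float:
    """Standard deviations between the observed value and the reference median."""
    return (observed - median) / sd

def classify(z: float) -> str:
    """WHO convention: below -2 SD is moderate, below -3 SD is severe."""
    if z < -3.0:
        return "severe"
    if z < -2.0:
        return "moderate"
    return "normal"

# Hypothetical reference values for a 24-month-old boy (illustrative only).
median_height, sd_height = 87.1, 3.0   # cm
median_weight, sd_weight = 12.2, 1.3   # kg

child_height, child_weight = 80.0, 10.5
print("height-for-age (stunting):", classify(z_score(child_height, median_height, sd_height)))
print("weight-for-age (underweight):", classify(z_score(child_weight, median_weight, sd_weight)))
```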
Magnitude of problem
In the present era, malnutrition presents a double burden: one aspect is undernutrition, the other overnutrition. In India and other low- and middle-income countries (LMICs), however, malnutrition is basically synonymous with protein-energy malnutrition, or undernutrition, which signifies an imbalance between the supply of protein and energy and the body's demand for them to ensure optimal growth and function.
Global scenario
Globally, approximately 149 million children under 5 suffer from stunting. In 2018, over 49 million children under 5 were wasted and nearly 17 million were severely wasted. There are now over 40 million overweight children globally, an increase of 10 million since 2000 [8] (Figures 1-3). It is estimated that by 2050, 25 million more children than today will be malnourished [9].
Indian scenario
India is one of the many countries where child undernutrition is severe, and undernutrition is a major underlying cause of child mortality in India. The pattern of stunting prevalence among Indian districts is shown in Figure 4.
The prevalence of underweight children under age 5 was an indicator used to measure progress towards MDG 1, which aimed to halve the proportion of people who suffer from hunger between 1990 and 2015. For India, this would imply a reduction in the child underweight rate from 54.8% in 1990 to 27.4% in 2015. Sustainable Development Goal (SDG) 2 focuses on ending hunger, achieving food security, improving nutrition and promoting sustainable agriculture. It aims, by 2030, to end all forms of malnutrition, including achieving, by 2025, the internationally agreed targets on stunting and wasting in children under 5 years of age, and to address the nutritional needs of adolescent girls, pregnant and lactating women and older persons; its indicators are primarily the prevalence of stunting, wasting and overweight among children under 5 years of age. The recently released Global Nutrition Report 2018 revealed a national prevalence of stunting, wasting and overweight of 37.9%, 20.8% and 2.4%, respectively [10].
In the 2018 Global Hunger Index, India ranks 103rd out of 119 qualifying countries [12]. With a score of 31.1, India suffers from a level of hunger that is serious.
Web of factors maintaining malnutrition in Indian communities
The 'Asian enigma' is the phenomenon of a persistent and unusually high prevalence of child undernutrition in South Asia compared with countries at similar levels of income or economic growth. In-depth analysis of why malnutrition is so resistant to improvement shows its complex aetiology. The immediate causes of undernutrition reflect a negative synergy between inadequate food intake and repeated infectious diseases. Underlying these causes is a constellation of factors particularly salient to India [13]. These include especially poor sanitation and high rates of open defecation, which lead to various kinds of infestations, infections and environmental enteropathy; poor coverage of health services and half-hearted implementation of nutritional programs and policies; a lack of political commitment and will; and economic and social determinants including economic growth and income distribution, deficiencies in governance and strategic leadership, and the status of women [14,15].
A new study from Harvard Chan School of Public Health has now pinpointed the five top risk factors responsible for more than two-thirds of the problem. Short maternal stature, extreme poverty, poor dietary diversity and mother's lack of education are among the top five risk factors for malnutrition in children in India. Examining an array of 15 well-known risk factors for chronic undernutrition among children in India, the study found that the five top risk factors were essentially markers of poor socioeconomic conditions as well as poor and insecure nutritional environments in children's households [16].
Economic conditions definitely play a crucial role. On the one hand, money is required to secure food, water and sanitary living conditions; on the other hand, approximately 22% of the Indian population lives below the poverty line. A major chunk of the rural population (especially agriculturists) is mostly dependent on rains for its income and always lives in a state of income uncertainty. Apart from income, illiteracy plays a crucial role. Most people are not aware of their health, nutrition, balanced diet and breastfeeding practices, and without this awareness, nutrition communication campaigns cannot succeed in their purpose.
India ranked 97th among a list of 118 countries on hunger as per the Global Hunger Index (GHI), which concludes that the Indian population does not have access to sufficient and nutritious food. The National Food Security Act is a great step in the direction of ensuring greater access to an adequate quantity of quality food at affordable cost via the Targeted Public Distribution System (PDS). The desired outcomes were not achieved due to corruption in the PDS [17]. Wastage of food grains (theft, rotting) in Food Corporation of India (FCI) warehouses has also dented the access of the common man to food. Greater efforts are needed to strengthen the existing initiatives and make them corruption-free and efficient institutions in order to get better results.
Maternal health illiteracy is an important determinant of child nutritional status. The type of care a mother provides to her child depends to a large extent on her knowledge and understanding of some aspects of basic nutrition and health care [18].
Millions of beneficiaries have benefited from the ICDS scheme; however, problems are observed in ensuring the supply of quality food and its uniform distribution. Anganwadi Workers (AWWs) and Anganwadi Helpers (AWHs) at Anganwadi centres are often dissatisfied with low wages and thus fail to play an effective role in tackling the problem of malnutrition.
Scam in ICDS project unearthed
Dibrugarh, Assam: two organizations have brought charges of rampant corruption in the Integrated Child Development Scheme (ICDS) amounting to more than Rs. 37 lakh in the Panitola ICDS project of the district. While the officer-in-charge of the ICDS project in Panitola development block drew the money for 2007-2008 through two cheques (Nos. 107895 and 017896) from UCO Bank, Dibrugarh, after collecting the cheque from the district social welfare department, the All India Youth Federation and the All Assam Mottock Yuba Chatra Sanmilan unearthed through the Right to Information (RTI) Act that the money has not been utilized to date. Suspecting misuse of the allotted money, the two organizations have demanded that the district administration immediately institute an enquiry into the anomaly. They have also demanded exemplary punishment of the erring officials (source: The Assam Tribune, 12 May 2008).
Village Health, Sanitation and Nutrition Committees (VHSNCs), one of the key elements of the National Rural Health Mission, are non-functional in many states due to lack of funds. Similarly, Village Child Development Centres (VCDCs) were set up by the state government of Maharashtra to provide malnourished children with medical care and nutritious meals; these centres are mostly non-functional due to lack of funds [19].
Toffees in the name of nutritious food
In Nigoha, the hot food scheme has stopped functioning due to lack of funds. The condition of the Rampura AWC is the same: the centre does not open on a regular basis. The AWH, Sarvesh Kumari, distributes toffees instead of proper nutritional food to the limited number of children who come to the centre. Villagers are not even aware of the facilities to be provided to them by the AWC. Community participation is also lacking, as parents do not send their children to the centres (source: Dainik Jagran, Lucknow, 1 November 2009).
Social and cultural factors may also affect malnutrition. The state government of Uttar Pradesh launched the Hausla Poshan Yojana in 2016 to combat malnutrition among mothers and children by providing food cooked by Anganwadi Workers. Surprisingly, beneficiaries refused to consume the food because lower-caste people had prepared it [20]; the upper-caste community considers lower castes untouchable. Another cultural practice still prevalent in Indian communities is child marriage, which acts as a limiting factor in improving children's health. 27% of girls in India are married before their 18th birthday, and 7% are married before the age of 15. According to UNICEF, India has the highest absolute number of child brides in the world [21]. A weak mother is likely to give birth to a weak child, and this maintains the cycle of undernourishment.
As discussed earlier, poor sanitation is directly linked to child malnutrition. The 2011 Census showed that only 32% of India's rural households had toilets, and 59% of the 1.1 billion people in the world who practice open defecation live in India. On 2 October 2014, the Swachh Bharat Mission was launched throughout the country with the aim of achieving the vision of a 'Clean and Open Defecation-Free India' by 2 October 2019 [22]. These targets are difficult to achieve, as implementation is poor, as observed from the slow progress in meeting the targets and the existence of several newly constructed but non-functional toilets [21,23].
Diarrheal disease kills an estimated 300,000 children under 5 years of age (13% of deaths in this age group) in India each year. Most diarrhoea-related mortality occurs in less developed countries, and the highest rates of diarrhoea occur among malnourished children under 1 year of age. The case fatality rate is highest among children aged 6-12 months because at this age the immune system is not yet fully mature, maternal antibodies are waning, and the foods introduced to complement breastfeeding may be contaminated. Among children who survive severe diarrhoea, chronic infections can contribute to malnutrition; in turn, malnutrition makes children vulnerable to diarrhoeal infections. Better access to clean water and sanitation is the key, with fewer weak and malnourished children becoming infected [24,25].
Commitments and targets to track progress to end malnutrition
Recognizing the seriousness of malnutrition for global health, in 2012 and 2013 the member states of the World Health Organization (WHO) adopted a series of targets to significantly reduce the burden of many of these forms of malnutrition by 2025 (Table 1).
Progress in tackling all forms of malnutrition remains unacceptably slow. The 2018 Global Nutrition Report [10] tracks country progress against the following global targets: child overweight, child wasting, child stunting, exclusive breastfeeding, diabetes among women, diabetes among men, anaemia in women of reproductive age, obesity among women and obesity among men. Data for 194 countries were analysed. As per this report, India is listed among those countries which are on track for none (zero) of the nine targets. The key driver behind the goal to reach zero hunger and malnutrition is to ensure that no one is left behind in the pursuit of food and nutrition security. In the Indian context, this will also mean greatly improving the health of women and children.
Determinants of child malnutrition
The causes of malnutrition in India are several and multifaceted, ranging from direct factors to underlying contributors. Malnutrition in children occurs as a complex interplay among various factors like socio-demographic, maternal and gender-related factors, home environment, dietary practices, hand washing and other hygiene practices, etc. Socio-economic and demographic factors: the literacy status of parents (especially the mother's education), caste, birth order of the child, gender of the household head, residence, type of house, type of family (single/joint), lower socio-economic status, poverty, food insecurity, etc. are important factors.
Gender: females are vulnerable to severe forms of malnutrition across all ages due to socio-cultural factors (they are responsible for child bearing and rearing and are often the last to consume food in the family). Undernourished girls grow up to become undernourished women who give birth to a new generation of undernourished children [26].
Maternal factors: short stature, mother's nutrition, mother's age, antenatal and natal care, infections, smoking and exposure to second-hand smoke are important maternal factors.
Breastfeeding practices: inadequate, insufficient and inappropriate breastfeeding practices lay the foundation of malnutrition. Breastfed children are better protected from infections than those who are not breastfed. Early initiation of breastfeeding and the right timing of the initiation of complementary feeding are also quite important [27].
Home environment: large family size, food insecurity, toilet facility, sanitation and hygiene practices, water storage and handling practices are extremely important factors.
Open-air defecation: open defecation, the practice of people defecating out in the open wherever it is convenient, is one of the main factors leading to malnutrition. In the urban setting, approximately 12% of the population defecates in the open; in rural areas that number is 72%. Open defecation leads to polluted water; up to 75% of India's surface water is polluted.
Poor hand hygiene: the role of hand hygiene is quite important in the prevention of infections and thereby of malnutrition. The availability of soap and water is an important determinant. Washing hands before preparing, serving and eating meals and after going to the toilet can prevent malnutrition to a great extent.
Diarrhoeal disease: diarrhoea is a leading cause of malnutrition in children under 5 years old. Poor sanitation, lack of access to clean water and inadequate personal hygiene are responsible for an estimated 88% of childhood diarrhoea in India. Based on current evidence, washing hands with soap can reduce the risk of diarrhoeal diseases by 42-47%. A survey conducted by UNICEF in 2005 on the well-being of children and women showed that only 47% of rural children in the age group 5-14 wash their hands after defecation [28]. Figure 8 depicts the underlying drivers of malnutrition; they are complex and multidimensional and include, inter alia, poverty, inequality and discrimination. Control of malnutrition will require a comprehensive approach targeting all these causes and contributors across sectors and stakeholders.
The life-course approach on malnutrition
The challenge of malnutrition calls for a multidisciplinary approach that targets multiple underlying factors. Crucial stages in people's lives have particular relevance for their health, and the life-course approach recognizes this. Taking a life-course perspective on tackling malnutrition emphasizes its intergenerational effects.
Intervening in the preconception period is fundamental to improve nutritional status and health behaviours in young people and adolescents and to prevent the transmission of risk to the next generation. Adopting a combination of top-down approaches through policy initiatives and bottom-up engagement of key stakeholders such as young people is recommended to prevent malnutrition over the first 1000 days of life. Targeting pregnancy and preconception periods increases nutrition awareness and influences dietary habits.
It is an established fact that preventing undernutrition during the first 1000 days of a child's life, i.e. from conception to the second birthday, is quite important. This period is precious because, if the foundation for good nutrition is not properly established during this time, the child may not be able to grow to his or her full potential in the future, and irreversible damage may occur. However, this does not mean that there are no other entry points to improve nutrition. Moreover, even with 90% coverage of direct nutrition interventions, only 20% of stunting deficits would be addressed [29]. It is essential that preconception services are incorporated into a continuum from childhood to antenatal care, involving both partners and linked to interventions promoting school attendance among young girls and the planning of first and subsequent pregnancies [30].
The life-course approach underlines the dynamic nutritional needs at different stages of life; this holds especially true for women. It also explains that at each stage of life, nutrition can and should be addressed in order to break the cross-generational cycle of malnutrition [31]. Figure 9 depicts the life-course approach, which explains why the first 1000 days are critically important. Investments in nutrition must extend to the changing needs and risks at later stages of life, such as those of adolescent girls and women of reproductive age. The approach also points to the underlying causes of malnutrition and the need to address them. Underlying causes can only be satisfactorily addressed with intersectoral coordination and the involvement of sectors such as health, agriculture, water and sanitation, social protection and education. These sectors should be involved taking into account the specific needs and roles of women in order to work towards sustainable and inclusive solutions.
The fight against malnutrition
Massive and strategic investments have been made to combat malnutrition by the governments of various countries, India being one of them. Recently (in April 2016), the United Nations General Assembly adopted a resolution proclaiming the UN Decade of Action on Nutrition from 2016 to 2025. The Decade aims to catalyse policy commitments that result in measurable action to address all forms of malnutrition, and to ensure all people have access to healthier and more sustainable diets so as to eradicate all forms of malnutrition worldwide. Sustained and concrete results can be achieved only if the determinants of malnutrition are addressed with a holistic approach [32].
The outcomes of these nutritional interventions are evident in the declining patterns of some of India's key health variables as reported by the National Family Health Surveys NFHS-3 (2005-2006) and NFHS-4 (2015-2016).
Data on nutrition indicators as per the last available national survey (NFHS 4)
• 38% of children below 5 years (urban: 31%, rural: 41%) are stunted (low height for age).
• More importantly, 7.5% of children are suffering from severe acute malnutrition, as per the last available national survey.
Related indicators
• Only 41.6% of newborns were initiated on breastfeeding within 1 hour of birth, while 54.9% of children were breastfed exclusively till 6 months of age.
• Complementary feeding was started on time (at more than 6 months of age) for only 42.7% of children.
• 58.4% of children in the age group 6-59 months are anaemic. Figure 10 shows the comparison of nutrition indicators as per NFHS-3 and NFHS-4.
Status of child mortality in India
• The U5MR has declined at a faster pace in the period 2008-2016, registering a compound annual decline of 6.7% per year, compared with the 3.3% compound annual decline observed over 1990-2007 [33] (see the sketch after this list).
• As per the latest Sample Registration System report (2016), the U5MR in India is 39/1000 live births, the IMR is 34/1000 live births and the NMR is 24/1000 live births. This translates into an estimated 9.6 lakh under-5 child deaths annually.
• About 46% of under-five deaths take place within the first 7 days of birth, and 62% within the first month of birth.
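The compound annual decline quoted in the first bullet solves end = start × (1 − r)^years for r. A minimal Python sketch follows; the 2016 U5MR of 39/1000 live births is taken from the text, while the 2008 value of 68 is an assumption chosen so that the example reproduces the reported ~6.7% per year.

```python
def compound_annual_decline(start_value: float, end_value: float, years: int) -> float:
    """Average annual percentage decline, compounded over the period."""
    return (1.0 - (end_value / start_value) ** (1.0 / years)) * 100.0

u5mr_2008 = 68.0   # assumed starting value (per 1000 live births)
u5mr_2016 = 39.0   # from the Sample Registration System figure above

rate = compound_annual_decline(u5mr_2008, u5mr_2016, years=8)
print(f"compound annual decline 2008-2016: {rate:.1f}% per year")  # about 6.7%
```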
The state of malnutrition in India is alarming and disturbing. A lot of work has been done and progress has been made, but the pace of improvement is definitely too slow. The following table shows the current status of important child health indicators and the time-bound targets to be achieved under the National Health Policy and the Sustainable Development Goals (SDGs).
Policy level nutritional interventions to fight against malnutrition
Based on an understanding of the wide range of factors responsible for malnutrition among children, the policy called for the adoption of a multi-sectoral approach along with multiple measures to achieve the goal of optimum nutrition for all. Important government-led policy-level interventions and programmes to combat malnutrition are as follows:
Direct policy measures
a. Inclusion of all vulnerable groups (children, adolescent girls, mothers, expectant women) under the safety cover of ICDS.
b. Fortification of essential food items with legal provisions (e.g. twin fortification of salt with both iodine and iron).
c. Popularize low cost nutritious food.
d. Control of micro-nutrient deficiencies with special focus on vulnerable groups.
Indirect policy measures
a. Guarantee of food security to citizens by increasing production of food grains.
b. Improve dietary pattern by promoting production and increasing per capita availability of nutritionally rich food.
c. Prevention of food adulteration by law.
d. Strengthening nutrition surveillance.
e. Improving purchasing power of landless, rural and urban poor.
f. Improving the public distribution system (PDS). The Government of India enacted the National Food Security Act (NFSA) in 2013 to enable food and nutritional security by ensuring access to an adequate quantity of quality food at affordable prices so that people can live a life with dignity. This legal provision has put the onus on the state to guarantee basic entitlements.
Strategic nutrition related interventions rolled out by government of India
Various community nutrition programmes are running in India to combat child malnutrition and to get nutrition on track. They are based on strategic nutrition-related interventions, and a few of them are discussed below.
Promotion of Infant and Young Child Feeding (IYCF) practices: exclusive breastfeeding for the first 6 months, complementary feeding beginning at 6 months and other appropriate infant and young child feeding practices are being promoted. The Mother's Absolute Affection (MAA) programme was launched in 2016 to promote breastfeeding and infant feeding practices by building the capacity of frontline health workers and through a comprehensive IEC campaign.
Establishment of Nutritional Rehabilitation Centres (NRCs): NRCs have been set up at the facility level to provide medical and nutritional care to children under 5 years of age with severe acute malnutrition (SAM) who have medical complications. In addition, the mothers are imparted skills in child care and feeding practices so that the child continues to receive adequate care at home.
Anaemia Mukt Bharat (AMB): to address anaemia, the NIPI has been launched, which includes the provision of supervised bi-weekly Iron Folic Acid (IFA) supplementation by ASHAs for all under-5 children, weekly IFA supplementation for 5-10-year-old children and annual/biannual de-worming. The AMB strategy, the Intensified Iron Plus Initiative, aims to strengthen the existing mechanisms and foster newer strategies to tackle anaemia, focused on six target beneficiary groups, through six interventions and six institutional mechanisms, to achieve the envisaged target under the POSHAN Abhiyaan. The strategy focuses on the testing and treatment of anaemia in school-going adolescents and pregnant women using newer technologies, establishing institutional mechanisms for advanced research in anaemia, and a comprehensive communication strategy including mass/mid-media communication material.
National De-worming Day (NDD): recognising worm infestation as an important cause of anaemia, National De-worming Day is observed annually on 10 February, targeting all children in the age group of 1-19 years (both school-enrolled and non-enrolled).
Biannual vitamin A supplementation is provided to all children below 5 years of age.
Village Health and Nutrition Days (VHNDs) are also being organized for imparting nutritional counselling to mothers and to improve child care practices.
A few schemes and the services they render are tabulated below by target group (Table 3).
NGO's working to combat malnutrition
• Akshaya Patra: the world's largest NGO-run mid-day meal programme, serving a wholesome school lunch to over 1.76 million children in 15,668 schools across 12 states in India.
• Avantha Foundation: fighting malnutrition in Bihar.
• CINI India: nutrition programmes.
Case study
The following case study from Tamil Nadu, a southern state of India, focuses on the complex challenges faced and the progress made so far in efforts to combat malnutrition. It also demonstrates how lessons are being learned along the way.
The Tamil Nadu integrated nutrition project (TINP)
The Tamil Nadu Integrated Nutrition Project (TINP), a World Bank-assisted intervention programme in rural south India, offered nutrition and health services to children under 5 and to pregnant and lactating women. TINP-I (1980-1989) eventually covered 174 blocks and was a forerunner of the Bangladesh Integrated Nutrition Project (BINP). TINP-II (1991-1997) covered all non-ICDS blocks in Tamil Nadu state and was replaced by the World Bank-assisted ICDS III (WB-ICDS III) from 1998. Since 1975, the Indian government has been providing a package of services to combat child hunger and malnutrition under the Integrated Child Development Services (ICDS) programme through Anganwadi centres (AWCs); Anganwadi means 'courtyard shelter' in the local language.
TINP I (1980-1989)
A drop in underweight prevalence of approximately 1.25-2.40 percentage points per year (ppt/year) was noted among beneficiaries. On comparing the drop in underweight prevalence between TINP and non-TINP areas, the drop was approximately 0.83-1.12 ppt/year in TINP areas, whereas the reduction was approximately 0.26-1.12 ppt/year in non-TINP areas.
Over the same period, the reduction in underweight prevalence for the whole of India was estimated at 0.7 ppt/year. It can therefore be stated that a quarter to half of the reduction in underweight prevalence was attributable to the TINP project.
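The attribution argument above compares a linear rate of decline in project areas with the national background trend. The Python sketch below makes that arithmetic explicit; the start and end prevalences are hypothetical values chosen only to reproduce a rate of about 1.1 ppt/year in TINP areas against the 0.7 ppt/year all-India figure from the text.

```python
def ppt_per_year(start_pct: float, end_pct: float, years: float) -> float:
    """Linear reduction in prevalence, in percentage points per year."""
    return (start_pct - end_pct) / years

def attributable_share(project_rate: float, background_rate: float) -> float:
    """Fraction of the project-area decline in excess of the background trend."""
    return (project_rate - background_rate) / project_rate

tinp_rate = ppt_per_year(start_pct=55.0, end_pct=44.0, years=10)  # 1.1 ppt/year (hypothetical endpoints)
india_rate = 0.7                                                  # ppt/year, from the text

print(f"TINP areas: {tinp_rate:.2f} ppt/year")
print(f"share attributable to TINP: {attributable_share(tinp_rate, india_rate):.0%}")  # ~36%
```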
Having achieved a significant reduction in severe early childhood malnutrition, TINP-I became an inspiration for others as a 'success story' during the 1980s. Evaluations indicated a decrease in underweight prevalence of about 1.5 percentage points per year in participating districts, twice the rate in non-participating districts. Several factors contributed to the success story of TINP-I: selective feeding (the careful focus on supplementing the dietary intake of young children when their growth faltered and until their growth resumed), clarity in job responsibilities and descriptions, a favourable worker-supervisor ratio and a robust monitoring system.
TINP II (1990-1997)
TINP-II was rolled out to move beyond reducing severe malnutrition, with the more ambitious objective of significantly reducing the burden of moderate malnutrition; in other words, it shifted towards a more preventive focus. The core strategies adopted in TINP-II were regular growth monitoring, nutrition education, health check-ups, and supplementary feeding of malnourished and growth-faltering children and of high-risk pregnant and lactating women. A drop in underweight prevalence of approximately 6.0 ppt/year was noted among TINP-II beneficiaries, and it was also noticed that the drop was approximately 1.1 ppt/year in TINP areas. As per World Bank estimates, the underlying trend in the state at the time was 5.0-7.0 ppt/year, which is most certainly an overestimate.
In a nutshell, TINP-II achieved its objective of decreasing severe malnutrition but failed to achieve its objective for moderate malnutrition.
A few lessons were learned from TINP-II before the next phase of nutritional intervention was planned: for example, the need to work on localized capacity building, improved home-based care through intensified community mobilization and targeted interpersonal communication, and feeding of 6-24-month-old children. The next phase of the nutritional programme had to incorporate improved service delivery, supportive counselling of caregivers, social mobilization and participatory learning.
The take-home message from TINP-I was that interventions that are targeted using nutritional criteria, integrated within a broader health system and effectively supervised and managed can significantly reduce severe malnutrition. TINP-II taught us that going further and preventing children from becoming moderately malnourished is in many ways a tougher task and demands a significant shift in strategy [34,35].
Conclusion
The facts and discussion presented above highlight the worrying, unacceptably high prevalence and universality of malnutrition in all its forms in Indian communities; yet malnutrition is both preventable and treatable. Beyond health, malnutrition also impacts social and economic development. In the Indian context, poverty, maternal health illiteracy, low birth weight, diseases like diarrhoea, home environment, dietary practices, and poor hand washing and hygiene practices are a few of the important factors responsible for the very high prevalence of malnutrition. The Government of India has rolled out various community nutrition programmes to combat malnutrition and to get nutrition on track. Despite enormous challenges, India has made considerable progress in tackling hunger and undernutrition in the past two decades, yet this pace of change has been unacceptably slow and uneven, and many have been left behind. With sustained prioritization, increased resource allocation, and the adoption of a comprehensive, coordinated and holistic approach with good governance and the help of civil society, India has the potential to end malnutrition in all its forms and turn the ambition of the Sustainable Development Goals into a reality for everyone.
The In Vitro and In Vivo Wound Healing Properties of the Chinese Herbal Medicine “Jinchuang Ointment”
“Jinchuang ointment” is a traditional Chinese herbal medicine complex for the treatment of incised wounds. For more than ten years, it has been used at China Medical University Hospital (Taichung, Taiwan) for the treatment of diabetic foot infections and decubitus ulcers. Three different cases are presented in this study. “Jinchuang ointment” is a mixture of natural product complexes from nine different components, making it difficult to analyze its exact chemical composition. To further characterize the herbal ingredients used in this study, the contents of reference standards present in a subset of the ointment ingredients (dragon's blood, catechu, frankincense, and myrrh) were determined by HPLC. Two in vitro cell-based assay platforms, wound healing and tube formation, were used to examine the biological activity of this medicine. Our results show that this herbal medicine possesses strong activities, including stimulation of angiogenesis, cell proliferation, and cell migration, which provide a scientific basis for its clinically observed curative effects on nonhealing diabetic wounds.
Introduction
It is well known that diabetic foot ulcers are extremely difficult to treat and are the dominant complication leading to amputations [1]. "Jinchuang ointment" is a traditional Chinese herbal medicine complex for the treatment of incised wounds. Its recipe was first described in an ancient Chinese book of medicine, Medicine Comprehended, published in 1732. Clinical application of this herbal medicine to diabetic foot infections and decubitus ulcers has been a successful course of treatment in the Division of Chinese Medicine, China Medical University Hospital, Taichung, Taiwan, for more than ten years. Despite its track record of curative effects, there is no literature published in the English language describing the clinical efficacy of "Jinchuang ointment" [2]. Moreover, neither its biological mechanism nor the composition of its effective components has yet been systematically investigated. Like many Chinese herbal medicines, "Jinchuang ointment" is a mixture of natural product complexes, and this combination of compounds complicates the determination of the chemical composition and bioactivity of each component [3].
"Jinchuang ointment" is composed of lard, wax, starch, synthetic borneol, camphor, frankincense, dragon's blood, myrrh, and catechu. To further characterize the chemical content of each component in this complex, the ratio of stereoisomers in chemically synthesized borneol used in this study was analyzed by chiral gas chromatography (GC). Meanwhile, the content of reference standards in the herbal components, like frankincense, myrrh, dragon's blood, and catechin, was determined by high performance liquid chromatography (HPLC). Lard is the major component in this complex, and its weight percentage is as high as 67%. It is of great interest to determine the role of lard in this complex. Lard was therefore substituted for synthetic triacylglycerol, coconut oil, Vaseline5, and sesame oil. The activity of these reconstituted complexes was examined in this study.
"Jinchuang ointment" is directly applied to the wound surface during treatment. The components in this complex are neither digested nor absorbed in the gastrointestinal tract. It is therefore reasonable to evaluate its bioactivity by direct addition of this complex into media containing cultured human skin or endothelial cells. Wound healing is a very complicated process. In this study, an in vitro tube formation assay, a wound healing assay, and a cell proliferation test were carried out to examine the activity of "Jinchuang ointment." Here, we report the outcomes of treating patients with "Jinchuang ointment," the results of cell based activity assays, and characterization of herbal components by HPLC. The composition of "Jinchuang ointment" (100 g) is as follows: lard 67.3 g, dragon's blood 2.1 g, catechu 2.1 g, frankincense 2.1 g, myrrh 2.1 g, camphor 6.3 g, borneol 0.1 g, corn starch 8.4 g, and wax 9.5 g. For wound healing and tube formation assays, the DMSO stock solution of "Jinchuang ointment" is prepared as follows: two grams of "Jinchuang ointment" is dissolved in 10 mL DMSO and homogenized by ultrasonication just before use.
Determination of Reference Standard Content in Dragon's Blood, Catechu, Frankincense, and Myrrh by HPLC. All experiments were carried out on a Hitachi L-7000 HPLC system equipped with an L-7100 quaternary gradient pump and an L-7450 photodiode array detector. Hitachi HSM software was used for instrument control, data collection, and processing. A Mightysil RP-18, 5 µm, 250 × 4.6 mm analytical column (Kanto Chemical Co., Inc., Tokyo, Japan) was used for analysis.
For samples of dragon's blood, catechu, and frankincense, 0.1 g of ground solids was weighed and dissolved in 10 mL of methanol. After ultrasonication for 30 minutes at room temperature, the methanol extracts were transferred to a new glass vial using disposable glass Pasteur pipettes. 4 mL of methanol was then added, and the sample was ultrasonicated for another 30 minutes at room temperature. The final volume of the extract was adjusted to 25 mL by adding methanol. Undissolved particles were removed by centrifugation at 2500 ×g for 10 minutes at room temperature and by filtration through a 0.22 µm syringe filter. For myrrh, 95% ethanol was used for extraction rather than methanol; the other preparation steps were identical to those for dragon's blood, catechu, and frankincense.
The methanol extract of dragon's blood was separated using a gradient elution of solvent A (10% CH3CN containing 0.1% formic acid) and solvent B (90% CH3CN containing 0.1% formic acid) at a flow rate of 1 mL/min [4]. The elution program is given in Table 1. The UV detection wavelength was 254 nm.
The catechu methanol extract was separated using a gradient elution of solvents A (H2O containing 0.1% formic acid), B (10% CH3CN containing 0.1% formic acid), and C (90% CH3CN containing 0.1% formic acid) at a flow rate of 1 mL/min [5]. The elution program is given in Table 1. The UV detection wavelength was 270 nm.
To determine the content of (E)-guggulsterones in myrrh, the ethanol extract was separated using an isocratic elution of 0.1% H3PO4:CH3CN (45:55, v/v) at a flow rate of 1 mL/min for 25 minutes [7]. The UV detection wavelength was 240 nm.
Determination of the Ratio of Stereoisomers in Synthetic Borneol. Chiral GC was used to determine the ratio of stereoisomers in synthetic borneol. The chromatographic conditions were as follows: helium as the carrier gas at a constant flow of 1.0 mL/min, a 2 µL injection volume (splitless mode), and a 280 °C injector temperature.
In Vitro Wound Healing Assay. Confluent HaCaT cells in 12-well plates were starved overnight in DMEM medium. The surface of the plate was scraped with a 200 µL pipette tip to generate a cell-free zone. Detached cells were then removed by two HBSS washes, and the cells were incubated in DMEM medium containing 200 µg/mL, 20 µg/mL, or 2 µg/mL "Jinchuang ointment." After 24 hours of incubation, cells were imaged by microscopy. The area of wound closure was quantitatively determined using ImageJ software (National Institutes of Health, Bethesda, MD). The stimulatory effects of "Jinchuang ointment" in the in vitro wound healing assay were also examined using human microvascular endothelial cells (HMEC-1). Ibidi Culture-Inserts (Ibidi GmbH, Martinsried, Germany) were placed in the chambers of 24-well cell culture plates. About 70 µL of HMEC-1 suspension (5 × 10⁵ cells/mL) was seeded per well, and the plates were incubated at 37 °C and 5% CO2. After 24 hours of incubation, the Ibidi Culture-Inserts were removed, and 1 mL of MCDB131 medium containing 1 µL of the "Jinchuang ointment" DMSO solution was added to individual wells. The migration of cells was observed by microscopy over a period of 6-24 hours. The gap size was measured using ImageJ.
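The ImageJ measurement underlying these assays boils down to comparing the cell-free area before and after incubation. A minimal Python sketch of that metric follows; the pixel areas are hypothetical.

```python
def percent_closure(area_t0: float, area_t_end: float) -> float:
    """Wound closure as a percentage of the initial scratch (or gap) area."""
    return (area_t0 - area_t_end) / area_t0 * 100.0

# Hypothetical pixel areas measured from micrographs of one well.
initial_area, final_area = 152_000.0, 48_000.0
print(f"closure after 24 h: {percent_closure(initial_area, final_area):.1f}%")  # 68.4%
```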
Cell Proliferation Assay. HaCaT cells (5 × 10⁴/well) were seeded in 96-well plates. Medium containing "Jinchuang ointment" at various concentrations was added after cell adhesion, and the cells were incubated in DMEM medium for the indicated time. The proliferation of HaCaT cells was subsequently determined using Cell Proliferation Reagent WST-1 (Roche, Indianapolis, IN, USA). The statistical method used was Student's t-test.
2.6. Western Blotting. Confluent monolayers of HaCaT cells were treated with various concentrations of "Jinchuang ointment" for the indicated time. Equal quantities of cell lysate proteins were separated by 10% SDS-PAGE and electroblotted onto PVDF membranes (Millipore, Billerica, MA). Membranes were blocked for 1 h with 5% low-fat milk powder solubilized in phosphate-buffered saline (PBS) containing 0.05% Tween 20. Levels of Cdc25b, Cdc25c, CDK2, CDC D2, Cyclin B, Cyclin D3, and tubulin were determined by western blotting using specific antibodies and enhanced chemiluminescence detection. The intensity of the resulting bands was measured by densitometric analysis using ImageJ software and presented as the ratio relative to the internal control.
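The densitometric normalization described above is a ratio of ratios: each band is divided by the loading control in its own lane, then expressed relative to the untreated control lane. A small Python sketch with hypothetical intensity values illustrates this.

```python
def relative_level(band: float, control_protein: float,
                   ref_band: float, ref_control_protein: float) -> float:
    """Band/loading-control ratio of a treated lane relative to the reference lane."""
    return (band / control_protein) / (ref_band / ref_control_protein)

# Hypothetical densitometry readings (arbitrary units).
print(relative_level(band=1800, control_protein=2000,
                     ref_band=1200, ref_control_protein=2100))  # ~1.58
```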
In Vitro Tube Formation Assay. Human umbilical vein endothelial cells (HUVEC) were purchased from the Bioresource Collection and Research Center (Hsinchu, Taiwan). 1 mL of HUVEC suspension (1 × 10^5 cells/well) was placed into the wells of 24-well flat-bottomed plates precoated with 200 μL Matrigel (BD Biosciences, Bedford, MA, USA). Cells were then mixed with 1 mL of medium containing "Jinchuang ointment" (final concentration of 200 μg/mL) and incubated at 37°C for 24 h [8]. After incubation, capillary tube or network formation was observed using a phase-contrast microscope.
Statistical Analysis. Results are expressed as means ± SD from at least three independent experiments. Differences between groups were assessed by one-way analysis of variance (ANOVA). A P value less than 0.05 was considered statistically significant.
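For readers who want to reproduce this analysis, the following is a minimal sketch of the reported workflow (means ± SD from at least three experiments, one-way ANOVA, P < 0.05) in Python with SciPy; the replicate values are hypothetical, not measurements from this study.

```python
# Minimal sketch of the statistical workflow described above,
# using hypothetical triplicate wound-closure measurements (%).
import numpy as np
from scipy import stats

control = np.array([64.1, 60.8, 68.3])   # hypothetical replicates
egf     = np.array([72.5, 70.9, 71.9])   # 100 ng/mL EGF group
jc_200  = np.array([83.1, 74.0, 77.5])   # 200 ug/mL "Jinchuang ointment" group

# Results are reported as mean +/- SD of at least three experiments.
for name, grp in [("control", control), ("EGF", egf), ("JC 200 ug/mL", jc_200)]:
    print(f"{name}: {grp.mean():.1f} +/- {grp.std(ddof=1):.1f}")

# Differences between groups assessed by one-way ANOVA; P < 0.05 significant.
f_stat, p_value = stats.f_oneway(control, egf, jc_200)
print(f"ANOVA: F = {f_stat:.2f}, P = {p_value:.4f}, significant: {p_value < 0.05}")
```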
Clinical Treatment Observations on Nonhealing Diabetic Wounds Treated with "Jinchuang Ointment". To the best of our knowledge, there are no English-language case reports describing "Jinchuang ointment" treatment for nonhealing diabetic wounds. Three cases are presented in this study. The first subject, Mrs. Wu, is a 75-year-old female patient with type II insulin-dependent diabetes accompanied by peripheral arterial occlusion disease (PAOD), which led to necrotizing fasciitis of the left lateral leg and ankle. She was treated with percutaneous transluminal angioplasty (PTA) to improve lower limb circulation on Feb 6, 2013. As a result of reperfusion injury, the ulcer enlarged with erythema. After examination, a below-knee amputation was scheduled for two weeks later at the Surgery Division of China Medical University Beigang Hospital. As suggested by a doctor from the Division of Chinese Medicine, she decided to use traditional Chinese medicine to treat her wound and was referred to the Division of Chinese Medicine for wound management. Normal saline was first used to clean the wound, and about 2-3 g of "Jinchuang ointment" was applied directly to the wound once daily. Pictures depicting wound healing under treatment with "Jinchuang ointment" are shown in Figure 1.
Mr. Tsai, a 71-year-old male with a past history of type II diabetes, presented on September 8, 2013 with a 3 × 0.8 cm and a 4 × 1.5 cm grade 3 pressure sore in the sacral region. 1 g of "Jinchuang ointment" was applied topically to the wound area once per day. Pictures documenting wound healing under treatment with "Jinchuang ointment" are shown in Figure 2. Most treatments for bedsores aim merely to prevent the wound from worsening; in this case, complete wound closure was observed on October 27, 2013.
Mr. Wang, a 64-year-old male with a past history of hypertension, had a chronic wound measuring 6.2 × 5.3 cm that had not healed for more than six months. He received topical application of "Jinchuang ointment" once per day beginning November 26, 2014. A great improvement was observed after two months of treatment, as shown in Figure 3.
Content of Reference Standards Present in Dragon's Blood, Catechu, Frankincense, and Myrrh. One of the main problems associated with herbal medicine is the high batch-to-batch variation in the amounts of active components. The content of pure chemical reference standards in herbal products is therefore used as an indicator for quality control and standardization. Accordingly, the content of reference standards in the dragon's blood, catechu, frankincense, and myrrh used here was measured by HPLC. All calibration curves of the reference compounds were linear over the concentration ranges studied (Table 2). A linear interpolation method was used to calculate the percentage by mass of each reference standard in the herbal extracts examined.
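The quantification step can be summarized in a short sketch: fit a linear calibration curve of peak area against standard concentration, invert it for the sample's peak area, and convert to a mass percentage. A minimal Python illustration follows; all numbers (calibration points, sample peak area, extract mass and volume) are hypothetical, since Table 2 holds the actual calibration data.

```python
# Minimal sketch of the calibration-curve quantification described above.
# All numbers are hypothetical; Table 2 gives the actual linear ranges.
import numpy as np

# Calibration standards: concentration (ug/mL) vs. HPLC peak area (a.u.)
conc = np.array([5.0, 10.0, 25.0, 50.0, 100.0])
area = np.array([52.0, 101.0, 255.0, 498.0, 1003.0])

slope, intercept = np.polyfit(conc, area, 1)   # linear calibration curve

# Interpolate the analyte concentration in the extract from its peak area
sample_area = 760.0
sample_conc = (sample_area - intercept) / slope          # ug/mL

# Convert to percentage by mass of the reference standard in the extract
extract_mass_mg, extract_volume_ml = 10.0, 5.0           # hypothetical
analyte_mass_mg = sample_conc * extract_volume_ml / 1000.0
print(f"mass percent: {100.0 * analyte_mass_mg / extract_mass_mg:.2f}%")
```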
Dracorhodin is a red anthocyanin pigment and a major component of the "dragon's blood" resin of the plant Daemonorops draco. It possesses antimicrobial, anticancer, and cytotoxic activities [9,10]. Figure 4(a) shows the separation of dracorhodin in dragon's blood. The mass percentage of dracorhodin in the "dragon's blood" used in this study is 0.15%.
Both catechin and epicatechin are phenolic antioxidants found in catechu, an extract of acacia trees, and catechu is a common component of herbal medicines. In addition to scavenging free radicals in plasma, the health benefits of catechin and epicatechin include stimulation of fat oxidation, dilation of the brachial artery, and increased resistance of LDL to oxidation [11]. At the cellular and molecular level, catechin can enhance the expression of human PTGS2 (the cyclooxygenase-2 gene, a dioxygenase), IL1B (the interleukin-1β gene), SOD (superoxide dismutase genes), MAPK1 (mitogen-activated protein kinase 1), and MAPK3 [12][13][14]. Figure 4(b) shows the separation of catechin and epicatechin in catechu. The mass percentages of catechin and epicatechin in the catechu used in this study are 24.2% and 1.7%, respectively.
Frankincense is a resin from plants of the genus Boswellia. In Africa and Asia, it is widely used in incense, perfume, and traditional medicine. Boswellic acids, a series of pentacyclic triterpenes, are among the major components of frankincense. Their anti-inflammatory properties and ability to induce cancer cell apoptosis have been reported in vitro [15][16][17]. The expression of the TOP1 (DNA topoisomerase I) and TOP2A (DNA topoisomerase II) genes can be altered in the presence of 11-keto-β-boswellic acid derivatives and acetyl-β-boswellic acid [16,18]. Figure 4(c) shows the separation of acetyl-11-keto-β-boswellic acid in frankincense. The mass percentage of acetyl-11-keto-β-boswellic acid in the frankincense used in this study is 1.62%.
Myrrh is also a resin, from plants of the genus Commiphora, and its usage is similar to that of frankincense. In fact, myrrh and frankincense are frequently used together in many traditional Chinese medicine recipes. In Western medicine, myrrh is also used in liniments and healing salves for minor skin ailments. The chemical composition of myrrh is rather complicated [19]. Notably, both the (Z)- and (E)-isomers of guggulsterone possess high affinity toward a variety of steroid receptors [20], and the two isomers appear equipotent as inhibitors of HUVEC tube formation [21]. The mass percentage of (E)-guggulsterone in the myrrh used in this study is 0.02%, the lowest among the reference standards discussed above; Figure 4(d) shows the separation of (E)-guggulsterone in myrrh. Previous results show that borneol stereoisomers can interact with GABA receptors [22,23] and possess antimicrobial activity [24]. Chiral GC was therefore used to analyze the composition of borneol stereoisomers in the synthetic borneol used in this study.
Figure 6: Wound healing assay with HaCaT cells displaying the increased cell migration induced by "Jinchuang ointment." Cells were treated with (a) DMSO alone (control), (b) 100 ng/mL EGF (positive control), (c) 200 μg/mL "Jinchuang ointment," (d) 20 μg/mL "Jinchuang ointment," and (e) 2 μg/mL "Jinchuang ointment." Cell migration was documented by phase contrast microscopy over a 24-hour time course, where time 0 is the time of wound scratching.
In Vitro Wound Healing Assay. Confluent HaCaT cells were scratched and treated with "Jinchuang ointment," and the wound area was measured after 24 hours of treatment. All experiments were performed in triplicate. The percentage of wound closure was calculated as follows: (initial wound area − 24 h posttreatment wound area)/initial wound area × 100%. The wound closure percentages with 200 μg/mL, 20 μg/mL, and 2 μg/mL "Jinchuang ointment" treatment were 78.2 ± 5.3%, 78.2 ± 4.3%, and 63.8 ± 1.7%, respectively. In contrast, negative control HaCaT cells without "Jinchuang ointment" treatment showed only 64.4 ± 4.0% wound closure, and the positive control treated with 100 ng/mL EGF showed 71.8 ± 1.0% wound closure. Application of 200 μg/mL or 20 μg/mL "Jinchuang ointment" was thus more potent than 100 ng/mL EGF in promoting in vitro wound closure, showing statistically significant differences compared with both the positive and negative controls (Figure 6).
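The closure formula above is simple enough to capture in a few lines; a sketch follows, with hypothetical Image J area readings chosen so that the output matches the 78.2% reported for the 200 μg/mL group.

```python
# The wound-closure formula used above, as a small helper; the areas are
# hypothetical Image J measurements in arbitrary pixel units.
def wound_closure_percent(initial_area: float, area_24h: float) -> float:
    """(initial area - 24 h post-treatment area) / initial area x 100%."""
    return (initial_area - area_24h) / initial_area * 100.0

print(wound_closure_percent(100_000.0, 21_800.0))  # -> 78.2 (cf. 200 ug/mL group)
```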
The wound-healing assay was also used to assess the stimulatory effect of "Jinchuang ointment" on the migration of HMEC-1 cells. The wound closure percentage 24 hours after treatment with 200 μg/mL "Jinchuang ointment" was 85.0 ± 12.3%, whereas that of the DMSO control was only 43.3 ± 8.2% (Figures 7(a) and 7(b)). The weight percentage of lard in "Jinchuang ointment" is as high as 67%, whereas sesame oil is used to prepare another famous traditional Chinese herbal ointment, Shiunko. The contribution of lard to the total cell migratory activity was therefore evaluated by reconstituting "Jinchuang ointment" with various fats. Significant cell migration into the wound region was seen with Jinchuang ointment-treated cells compared with the control group (Figure 8). When sesame oil, synthetic triacylglycerol, coconut oil, and Vaseline were used as lard substitutes, 85%, 91%, 74%, and 105% of the migration activity was observed, respectively (Figure 8). Unlike natural fats, Vaseline is made mainly of petroleum jelly, a semisolid mixture of hydrocarbons with carbon numbers greater than 25, and the chain lengths of the synthetic triacylglycerols used in this study are C8 and C10. These results suggest that the carbon chain length of the fat used in the ointment plays a minor role in stimulating cell migration.
The Effect of "Jinchuang Ointment" on HaCaT Cell Proliferation. The stimulatory effect of "Jinchuang ointment" on HaCaT cell proliferation was evaluated in the presence of 10% fetal bovine serum (FBS) by WST-1 assay. The percentages of cell proliferation with 200 μg/mL, 20 μg/mL, and 2 μg/mL "Jinchuang ointment" were 117 ± 2.66%, 133 ± 12.7%, and 129.7 ± 14.1% at 24 hours and 126.4 ± 3.5%, 127.3 ± 5.8%, and 124.1 ± 1.3% at 48 hours, respectively (Figure 9). These results indicate that treatment with "Jinchuang ointment" increases HaCaT cell growth.

3.6. The Effect of "Jinchuang Ointment" on the Expression of G1/S Transition-Related Regulators. To further investigate the mechanisms underlying the effects of "Jinchuang ointment" on HaCaT cell proliferation, the expression of several key proteins involved in cell cycle progression was examined by western blot analysis. As shown in Figure 10, a six-hour treatment with "Jinchuang ointment" leads to a dose-dependent increase in Cdc25b, Cdc25c, and Cyclin D3 levels, and significant differences in the expression pattern of these proteins between the treatment and control groups were observed after 12 hours of treatment (Figure 10; lanes one to four: control, 100 ng/mL EGF, 200 μg/mL "Jinchuang ointment," and 20 μg/mL "Jinchuang ointment," respectively).
Tube Formation Assay. The process of wound healing is divided into three stages: the inflammatory, proliferative, and remodeling phases [25]. Angiogenesis is responsible for new blood vessel formation and the supply of oxygen and nutrients, and it plays an important role in the proliferative phase of wound healing [26]. The tube formation assay was used to evaluate the in vitro angiogenic effect of "Jinchuang ointment" on HUVEC cells. As shown in Figure 11, treatment of HUVEC cells with 200 μg/mL "Jinchuang ointment" for 24 hours efficiently induced endothelial capillary tube and network formation.
3.8. Conclusions. "Jinchuang ointment" is a complex Chinese herbal medicine. It has been used clinically to treat diabetic foot infections and decubitus ulcers at China Medical University Hospital for more than ten years, but because of its complicated composition, its biological activities had never been investigated. To further characterize its herbal ingredients, the content of reference standards present in dragon's blood, catechu, frankincense, and myrrh was determined by HPLC. Two cell-based assay platforms, in vitro wound healing and tube formation, were used to examine its activity. Our results show that this herbal medicine possesses potent activities stimulating cell proliferation, migration, and angiogenesis, providing a scientific rationale for the observed clinical curative effects of "Jinchuang ointment" on wound healing. According to current pharmaceutical regulations in Taiwan, only traditional Chinese herbal medicines manufactured in cGMP pharmaceutical factories can be sold in drug stores or through hospital marketing channels; homemade traditional Chinese herbal medicines can only be administered to patients by the doctors who prepared them. For this reason, "Jinchuang ointment" cannot be widely used in Taiwan, since many clinicians are unable or unwilling to prepare this remedy. To facilitate the manufacture of "Jinchuang ointment" by cGMP factories, it will be very important to identify activity indicator markers for components such as dragon's blood, catechu, frankincense, and myrrh.
preparations, clinical treatment, and observations; Shinn-Jong Jiang helped in western blot analysis, cell proliferation, and wound healing assay of HaCaT cells; Guang-Huey Lin and Ming-Chuan Hsieh helped in wound healing assay of HMEC-1 cells; Jai-Sing Yang helped in tube formation assay; Lih-Ming Yiin helped in GC analysis; Hao-Ping Chen helped in HPLC analysis, experiment design, "Jinchuang ointment" preparation, and paper preparation.
|
v3-fos-license
|
2017-06-24T23:00:02.377Z
|
2007-03-27T00:00:00.000
|
582464
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://cmjournal.biomedcentral.com/track/pdf/10.1186/1749-8546-2-3",
"pdf_hash": "4b846d6ae90944bb0e5d42f787826178008a4120",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41828",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "dd482501b67af4d162ddcdd536956e9ab961c968",
"year": 2007
}
|
pes2o/s2orc
|
Enhancement of ATP generation capacity, antioxidant activity and immunomodulatory activities by Chinese Yang and Yin tonifying herbs
Chinese tonifying herbs such as Herba Cistanche, Ganoderma and Cordyceps, which possess antioxidant and/or immunomodulatory activities, can be useful in the prevention and treatment of age-related diseases. Pharmacological studies on Yang and Yin tonifying herbs suggest that Yang tonifying herbs stimulate mitochondrial adenosine triphosphate (ATP) generation, presumably through the intermediacy of reactive oxidant species, leading to the enhancement of cellular/mitochondrial antioxidant status. Yin tonifying herbs, however, apart from possessing antioxidant properties, exert mainly immunomodulatory functions that may boost a weak immune system and may also suppress overreactive immune responses. The abilities of Yang and Yin Chinese tonifying herbs to enhance ATP generation and to exhibit antioxidant and/or immunomodulatory actions are the pharmacological basis for their beneficial effects on the retardation of aging.
Background
Aging is a process of bodily change with time, leading to increased susceptibility to disease and, ultimately, death. Because reactive oxidant species (ROS) and immune dysfunction are major causes of age-related diseases [1][2][3], the maintenance of antioxidant and immune fitness is a rational approach to preventive health care. Accumulation of ROS-induced oxidative damage to DNA, proteins, and other macromolecules has been regarded as a major endogenous cause of aging [1]. In addition to ROS-mediated cellular damage, aging is associated with immune senescence, attributable at least partly to the loss of T lymphocyte functions [2,3]. Such loss increases the prevalence of infectious diseases in the elderly. With advances in modern medical research techniques, research on age-related chronic illnesses has intensified in the quest for valuable preventive and therapeutic measures. Humans have made continuous efforts to fight aging; as Chinese medicine has always emphasized the prolongation of a healthy lifespan, many Chinese tonifying herbs have long been used to safeguard health and to delay the onset of senility.
Under both normal and pathological conditions, ROS are generated in all cells undergoing aerobic metabolism, particularly from mitochondria. The cell possesses two distinct antioxidant defense systems to counteract damaging ROS: (1) enzymatic antioxidants such as catalase, superoxide dismutase (SOD), glutathione peroxidase and other related enzymes/molecules, and (2) non-enzymatic antioxidants such as ascorbic acid (vitamin C), α-tocopherol (vitamin E) and β-carotene. To achieve optimal antioxidant fitness, every component of the antioxidant defense system should function optimally because antioxidants must work together in a synergistic manner. Chinese tonifying herbs have been shown to possess both in vitro and in vivo antioxidant activities [4,5].
The immune system fights against 'foreign invaders' such as bacteria, viruses, fungi, yeasts and parasites. The humoral and cell-mediated immune responses show great competence in dealing with intruders. Moreover, the surveillance function of the immune system tends to prevent cancers, particularly in old age. However, an overreactive or imbalanced immune system can cause allergies or autoimmune disorders. A well-constituted and balanced immune system is thus crucial for safeguarding health. Chinese tonifying herbs have been shown to stimulate or suppress the cell-mediated immune response both in vitro and in vivo [6].
The importance of disease prevention has been recognized in Chinese medicine through experience accumulated over centuries, and many Chinese tonifying herbs have long been used for safeguarding health and delaying the onset of senility. According to Chinese medicine theories, tonifying herbs prescribed for various symptoms of ill-health are generally classified into four categories on the basis of their health-promoting actions, namely the 'Yang-invigorating', 'Qi-invigorating', 'Yin-nourishing' and 'Blood-enriching' herbs [7]. The 'Qi-invigorating' and 'Blood-enriching' herbs are of Yang and Yin character, respectively. Chinese medicine theories suggest that a balance of Yin and Yang is essential to sustain optimal body function [8]. From a modern medical perspective, the maintenance of Yin and Yang in harmony may be described as the attainment of bodily homeostasis. The long-known antagonistic relationship between parasympathetic and sympathetic neural activities affords an example of a phenomenon well recognized by Western medicine that also illustrates the Yin/Yang balance. A recent psychophysiological investigation in humans revealed an association of decreased parasympathetic or sympathetic activities with deficiencies of Yin or Yang, respectively [9].
The theoretical framework of Chinese medicine is based on the Chinese cultural fabrics and clinical experience, while modern Western medicine has been established on the basis of laboratory and clinical investigations [10]. As the two distinct medical systems are complementary, bridging of the knowledge gap between Chinese and Western medicine is essential for their integration, in clinical practice, for disease prevention and treatment. Expounding Chinese medicinal theories in modern scientific terms to a Western audience facilitates communication between practitioners of the two systems.
In our earlier studies, we found that tonifying herbs with Yang or Yin properties were associated with antioxidant and immunostimulatory activities respectively [4]. Recent studies indicated that only Yang tonifying herbs (not Yin tonifying herbs) enhanced mitochondrial ATP generation capacity in mouse hearts [11]. We therefore suggest that Yang tonifying herbs enhance mitochondrial ATP generation, while Yin tonifying herbs are associated with immunomodulatory activities. In this mini-review, we summarize the abilities of Yang and Yin tonifying herbs to enhance ATP generation capacity, and to potentiate antioxidant and/or immunomodulatory actions, in an effort to characterize their respective pharmacological properties.
Enhancement of ATP generation by Yang tonifying herbs
In Chinese medicinal theories, Yang is a manifestation of body functions supported by the various organs. A 'Yang-invigorating' action therefore involves the enhancement of bodily functions in general and of ATP-consuming cellular activities in particular. The mitochondrion is responsible for the generation of ATP through oxidative metabolism. To establish the pharmacological basis of the 'Yang-invigorating' action, we recently investigated the effect of Yang herbs on ATP generation capacity in heart homogenates prepared from mice pretreated with methanolic extracts of the herbs [11]. Tonifying herbs from other functional categories were examined for comparison. While Chinese herbs are usually extracted with water for human oral consumption, water was replaced by methanol in our study for convenience in the processing and storage of samples. Yang herbs invariably enhanced myocardial ATP generation, with stimulation ranging from 20% to 130%; Herba Cynomorii and Semen Cuscutae were the most potent herbs examined. By contrast, none of the Yin herbs enhanced ATP generation, and some Yin herbs even suppressed it slightly (Table 1). A preliminary mechanistic study indicated that Yang herbs may speed up ATP synthesis by increasing mitochondrial electron transport [11].
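As a worked example of the percent-stimulation figures quoted above, the following sketch uses the control mean from the Table 1 note (147 nmol ATP/mg protein/10 min) together with a hypothetical herb-treated mean; it illustrates the calculation only and is not data from the study.

```python
# Sketch of the percent-stimulation calculation implied above. The control
# mean (147 nmol ATP/mg protein/10 min) is from the Table 1 note; the
# herb-treated value is hypothetical.
control_mean = 147.0                      # nmol ATP/mg protein/10 min
treated_mean = 338.0                      # hypothetical Yang-herb group

stimulation = 100.0 * (treated_mean - control_mean) / control_mean
print(f"stimulation: {stimulation:.0f}%")  # ~130%, the top of the reported range
```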
Correlation between enhancement of ATP generation capacity and antioxidative capacity
Mitochondrial oxidative phosphorylation generates ROS as byproducts. Being highly chemically reactive, ROS attack cellular structures located near the sites where they are generated. Mitochondrial DNA, proteins, and lipids in the inner mitochondrial membrane are thus vulnerable to oxidative damage [12], resulting in generalized organelle dysfunction, defective mitochondrial biosynthesis and poor energy metabolism [13].
Under normal physiological conditions, the mitochondrial antioxidant defense system adequately handles the potentially detrimental effects of ROS derived from energy metabolism [14]. When a functional imbalance between ROS levels and antioxidant concentrations caused by various disease states and/or aging occurs, age-related disorders such as cancer, cardiovascular diseases, brain dysfunction, or cataract may occur [15]. Antioxidant supplementation, particularly from herbal extracts, has become a trend in preventive health care.
Using an oxygen radical absorbance capacity assay, Ou et al. recently compared the free radical scavenging (i.e. antioxidant) activities of Yang and Yin herbs [16]. The results indicated that Yin herbs generally possessed higher antioxidant activities than Yang herbs and that the antioxidant potencies correlated well with the amounts of total phenolic compounds in the herbs. The authors suggested an analogy between the Yin/Yang balance and antioxidation/oxidation in energy metabolism. These findings of higher antioxidant activities in Yin herbs than in Yang herbs do not agree with one of our earlier studies, which showed that most of the Yang herbs possessed a more potent 1,1-diphenyl-2-picrylhydrazyl radical-scavenging action than other tonifying herbs [4] (Table 2). Although the use of different herbal extraction methods and distinct antioxidant assays precludes direct comparison of the two studies, the discrepancy might be due to the selection of almost completely different sets of Yin and Yang herbs for testing. Our study focused on herbs used for safeguarding health, i.e. herbs used for tonifying purposes (Table 2; Table 3 of reference [17]), whereas Ou et al. probably used a selection criterion based on the general Yin and Yang properties of the herbs rather than on their Yin-tonifying and Yang-tonifying actions [16]. Szeto and Benzie, using the same set of herbs described by Ou et al. to examine possible protective effects against DNA oxidative damage, found that the Yang herbs showed an antioxidant effect superior to that of the Yin herbs [5]. In vitro free radical-scavenging activities (Table 4) were detected in herbal extracts prepared from Herba Epimedii [4,18], Radix Dipsaci [4,16], Fructus Psoraleae [4], Semen Cuscutae [16], Herba Cistanche [4,16,18], Cortex Eucommiae [19] and Rhizoma Cibotii [4,16]. Aqueous extracts of Rhizoma Drynariae and Cortex Eucommiae were found to inhibit oxidant production by rat osteoblasts [20] and also inhibited biomolecular oxidative damage [21]. Active ingredients (bakuchiol, isobavachin and isobavachalcone) from Fructus Psoraleae inhibited the NADPH-dependent peroxidation of rat microsomal and mitochondrial lipids in vitro [22]. An ethanolic extract of Radix Dipsaci enhanced the antioxidant status of blood and liver in rodents [23], and a Radix Morindae extract increased blood antioxidant enzyme activities in diabetic rats [24]. Phenylethanoids isolated from Herba Cistanche were found to prevent cell damage induced by in vitro and in vivo exposure to carbon tetrachloride in rats [25]. A recent study from our laboratory indicated that pretreatment with the methanolic extract of Herba Cistanche protected against ischemia-reperfusion injury in rat hearts ex vivo and enhanced mitochondrial ATP generation in rat hearts ex vivo and in H9c2 cells in situ; the ATP-stimulating action was possibly due to enhanced oxidative phosphorylation caused by increases in the activities of complexes I and III [26]. As good body function requires a large amount of energy and antioxidant defense is essential for sustaining mitochondrial ATP production [27], the antioxidant activities of Yang herbs may safeguard ATP generation, particularly under conditions of upregulated cellular activity.
Antioxidant activities of Yin tonifying herbs
(Table 1 notes: mice were pretreated with herbal extracts at daily doses of 1 g/kg for 3 days; the mean myocardial ATP generation capacity in unpretreated mice was 147 ± 17.6 (S.D.) nmol ATP/mg protein/10 min (n = 6); * P < 0.05, ** P < 0.01, Student's t test.)

Methanolic extracts of both Fructus Ligustri and Herba Ecliptae were found to enhance hepatic glutathione (GSH) regeneration capacity in rats [4,28]. The enhancement of hepatic GSH regeneration capacity by Fructus Ligustri was associated with a hepatoprotective action against carbon tetrachloride toxicity [28]. Activity-directed fractionation of Fructus Ligustri indicated that the hepatoprotective principle(s) resided mainly in the oleanolic acid-enriched butanol and chloroform fractions [28]. Moreover, our recent studies showed that both short- and long-term pretreatment with oleanolic acid protected against myocardial ischemia-reperfusion injury in rats [29,30]. It was suggested that the cardioprotection afforded by oleanolic acid pretreatment was related to enhancement of the mitochondrial antioxidant mechanism mediated by GSH and α-tocopherol [29]. Both experimental and clinical investigations indicated that antioxidant status influences immunocompetence, particularly under conditions of stress such as physical exercise or chronic disease [31]. The antioxidant activities of Yin tonifying herbs may thus positively influence their immunostimulatory activities.
Experimental studies on a 'Yang-invigorating' herbal formula
A 'Yang-invigorating' herbal formula named VI-28 has been shown to produce 'Yang-invigorating' effects [32] and to enhance red cell antioxidant status, particularly Cu-Zn-superoxide dismutase (SOD) activity, in elderly male human subjects [33]. This formula comprises Radix Ginseng, Cornu Cervi, Cordyceps, Semen Allii, Fructus Cnidii, Fructus Evodiae and Rhizoma Laemferiae. Recently, we investigated the effects of long-term VI-28 treatment on red cell Cu-Zn-SOD activity, mitochondrial functional ability, and antioxidant levels in various tissues of rats of both sexes [34]. The results indicated that VI-28 treatment increased red cell Cu-Zn-SOD activity and mitochondrial ATP generation capacity, increased the levels of reduced GSH and α-tocopherol, and reduced Mn-SOD activities.
The enhancement of ATP generation by VI-28 increased mitochondrial ROS production, resulting in the upregulation of the mitochondrial antioxidant mechanism. The VI-28-induced increase in mitochondrial antioxidant capacity in various tissues was evidenced by a significant reduction in ROS generation. Given that cellular energy status and mitochondrial ROS generation are factors critically involved in aging, the dual effect of 'Yang-invigoration' produced by VI-28 may have clinical implications in the prevention of age-related diseases.
Immunomodulatory activities of Yin tonifying herbs
It has been suggested that the proper functioning of the immune system requires dynamic interactions between Yang and Yin: while the antigen-nonspecific immune response is associated with Yang, the antigen-specific response is related to Yin [35]. One of our earlier studies investigated antioxidant and immunomodulatory activities in different categories of tonifying herbs. The results showed that 6 and 7 of a total of 8 Yin herbs tested potentiated concanavalin A (Con A)-stimulated splenocyte proliferation (an antigen-specific response) in mice in vitro and ex vivo, respectively. By contrast, only 3 of 9 Yang herbs tested showed a similar enhancement of the Con A-stimulated immune response [4] (Table 2). (Table 2 notes: (a) … IC50 was > 5 mg/ml. (b) Splenocytes isolated from mice were cultured in 96-well microtiter plates in a final volume of 100 μl of culture medium, with the respective methanol extracts added at final concentrations ranging from 15.6-1000 μg/ml; values given are means ± S.E.M. (n = 4). (c) Animals were pretreated orally with the methanol extracts at a daily dose of 1 g/kg for 3 days and sacrificed 24 hours post-dosing; splenocytes isolated from the pretreated animals were cultured in microtiter plates in a final volume of 100 μl culture medium; values given are means ± S.E.M. (n = 3-5). * Significantly different from the control group (P < 0.05).)
Among the Yin herbs, the methanolic extract of Fructus Ligustri yielded the most robust immunostimulatory action in mouse splenocytes [4]. Differential extraction of Fructus Ligustri by solvents of increasing polarity indicated that the immunostimulatory activity resided mainly in the petroleum ether fraction [36]. Oleanolic acid, an immunomodulatory triterpenoid commonly found in herbs including Fructus Ligustri [37,38], was undetectable in this fraction [36]. Activity-directed fractionation of the petroleum ether extract of Fructus Ligustri is currently under way in our laboratory. Various immunomodulatory actions of Yin tonifying herbs, and of their active ingredients, have been reported in other studies (Table 5). An aqueous extract of Radix Asparagi was found to inhibit tumour necrosis factor-α (TNF-α) secretion by suppressing interleukin (IL)-2 secretion from astrocytes, suggesting that the extract might exhibit anti-inflammatory activity in the central nervous system [39]. Both the crude aqueous extract and two active ingredients (ruscogenin and ophiopogonin D) of Radix Ophiopogonis produced anti-inflammatory effects in rodents [40]. The aqueous extract inhibited xylene-induced ear swelling and carrageenan-induced paw edema in mice, and it also suppressed carrageenan-induced pleural leukocyte migration in rats and the zymosan-evoked migration of peritoneal total leukocytes and neutrophils in mice. Treatment with ruscogenin and ophiopogonin D decreased zymosan-induced peritoneal leukocyte migration in mice and reduced the phorbol-12-myristate-13-acetate-induced adhesion of HL60 cells to ECV304 cells [40]. Several sesquiterpenes isolated from Herba Dendrobii exhibited immunomodulatory activity by exerting co-mitogenic effects on Con A- and lipopolysaccharide-stimulated mouse splenocytes [41,42]. It has recently been reported that an ethanolic extract of black rice (the fruit of Oryza sativa) showed anti-asthmatic effects in a mouse model [43]. Treatment with the ethanolic extract reduced the number of eosinophils in bronchoalveolar lavage fluid, alleviated the airway hyper-response, and decreased the extent of airway inflammation in ovalbumin (OVA)-immunized and -aerosolized mice challenged with OVA. Moreover, the treatment decreased interferon-γ (IFN-γ), IL-4, IL-5 and IL-13 levels in the supernatants of cultured splenocytes and suppressed the plasma levels of OVA-specific immunoglobulin (Ig)G, IgG2a, IgG1 and total IgE in OVA-immunized and -challenged mice [43]. Clinical investigations indicated that intramuscular injection of undiluted Fructus Ligustri extract at a dose of 2-4 ml once or twice daily could prevent leucopenia caused by chemotherapy or radiotherapy; Fructus Ligustri treatment normalized white blood cell counts, thereby increasing tolerance to chemo/radiotherapy [44]. Oral administration of Fructus Ligustri tablets at a daily dose equivalent to 50 g of crude herb was found to ameliorate the symptoms of chronic bronchitis [44]. A herbal formula comprising Fructus Ligustri, Radix Scutellariae, Radix Astragali and Eupolyphaga et polyphae was found to alleviate symptoms and improve immune function in HIV/AIDS patients [45].
Ganoderma - a 'Fu Zheng' tonifying herb
Ganoderma, another Yin tonifying herb with immunomodulatory effects, is widely consumed by Chinese people who believe that it promotes health and longevity, lowers the risk of cancer and heart disease and boosts the immune system [46]. In Chinese medicine, Ganoderma is regarded as a very potent herb for 'Fu Zheng', a Chinese medicine concept comparable to immunotherapy/immunomodulation in Western medicine. While Ganoderma is traditionally used to increase the resistance of the body's immune system to pathogens and to restore normal body functions, the herb is now also used to decrease the side effects of Western medical procedures, such as surgery, radiotherapy and chemotherapy, which often weaken the immune system. The anti-cancer/immunomodulatory effects of Ganoderma were associated with triterpenes [47], polysaccharides [48,49] or immunomodulatory proteins [50], through mechanisms involving inhibition of DNA polymerase [51], inhibition of post-translational modification of the Ras oncoprotein [52] or the stimulation of cytokine production [53]. Recent studies on the immunomodulatory activities of Ganoderma indicated that Ganoderma extract stimulated the proliferation of human peripheral blood mononuclear cells and raised the levels of mRNAs encoding Th1 and Th2 cytokines in these cells [54]. Moreover, polysaccharides of Ganoderma activated mouse splenic B cells and induced these cells to differentiate into IgM-secreting plasma cells; this process was dependent on the polysaccharide-mediated induction of Blimp-1, a master regulator capable of triggering a cascade of gene expression during plasmacytic differentiation [55]. In human peripheral B lymphocytes, the Ganoderma polysaccharide fraction enhanced antibody secretion and induced the production of Blimp-1 mRNA, though it failed to induce lymphocyte differentiation [55].

Table 5. Immunomodulatory actions of Yin tonifying herbs:
- Fructus Ligustri: methanolic extract or petroleum ether fraction enhanced Con A-stimulated proliferation of mouse splenocytes in vitro and ex vivo [4,36].
- Radix Asparagi: water extract inhibited TNF-α secretion by suppressing IL-2 secretion from astrocytes [39].
- Radix Ophiopogonis: water extract inhibited xylene-induced ear swelling and carrageenan-induced paw edema in mice; active ingredients (ruscogenin and ophiopogonin D) decreased zymosan-induced adhesion of HL60 cells to ECV304 cells [40].
- Herba Dendrobii: active ingredients (sesquiterpenes) showed a co-mitogenic effect on Con A- and lipopolysaccharide-stimulated mouse splenocytes [41,42].
- Radix Oryza: ethanolic extract of black rice (the fruit of Oryza sativa) decreased the extents of airway inflammation and hyper-response in OVA-immunized and aerosolized OVA-challenged mice; the extract also decreased various cytokine levels in the supernatants of cultured splenocytes and suppressed the plasma levels of OVA-specific IgG and total IgE in OVA-immunized and -challenged mice [43].
In addition to immunomodulating activities, Ganoderma possesses in vivo antioxidant potential, another aspect of Yin tonifying action. Treatment with Ganoderma extract was found to enhance the hydroxyl radical scavenging activity of rabbit blood plasma [56,57]. Ganoderma acted by stimulating cellular and mitochondrial SOD activities, thereby enhancing the antioxidant capacity of the body [58]. It was shown that an intraperitoneal injection of Ganoderma extract following a lethal dose of cobalt X-ray radiation caused a marked prolongation of survival time in mice [59]. Pretreatment with Ganoderma extract also markedly protected against carbon tetrachloride-induced hepatic damage and the associated impairment in hepatic antioxidant status [60].
(Figure 1. Antioxidation, by preserving mitochondrial structural and functional integrity and maintaining immune competence, retards the aging process.)

… retardation of the aging process [72] and stimulation of testosterone biosynthesis [73]. We have recently investigated the effects of wild and cultured Cordyceps on Con A-stimulated splenocyte proliferation (an in vitro bioassay for Yin tonifying action) and on myocardial ATP generation capacity (an ex vivo bioassay for Yang tonifying action) [74]. The results indicated that methanolic extracts of wild and cultured Cordyceps enhanced both Con A-stimulated splenocyte proliferation in vitro and myocardial mitochondrial ATP generation ex vivo in mice, with no significant difference in potency between the two types of Cordyceps. While the immunopotentiating effect was associated with an increase in IL-2 production, the stimulation of myocardial ATP generation was paralleled by an enhancement of mitochondrial electron transport. When compared with typical Yin and Yang tonifying herbs (Fructus Ligustri and Herba Cynomorii, respectively), Cordyceps was found to possess both Yin and Yang tonifying actions, with a lower potency in both modes of action. The observation of both immunopotentiating and ATP-enhancing activities in Cordyceps extracts further supports the pharmacological basis of Yin and Yang tonifying herbs in Chinese medicine.
Conclusion
Yang tonifying herbs stimulate mitochondrial ATP generation, leading to the enhancement of cellular/mitochondrial antioxidant status, presumably through the intermediacy of ROS. Yin tonifying herbs, which also possess antioxidant properties, are mainly immunomodulatory, boosting weak immune functions and suppressing overreactive or unbalanced immune responses. Cordyceps, highly regarded as a tonifying herb with a dual Yin and Yang action, stimulates mitochondrial ATP generation and enhances cellular immune responses. Given that impairment of mitochondrial functional ability and antioxidant status, and a decline in immunocompetence, are believed to be critically involved in the development of age-related diseases and the aging process, the abilities of Yang and Yin tonifying herbs to enhance ATP generation capacity and to produce antioxidant and immunomodulatory actions are beneficial for safeguarding health and delaying the onset of senility (Figure 1). While animal models may be used for testing working hypotheses on Yang and Yin tonifying actions, clinical studies of age-related variations in antioxidant and immune function, using Yang and Yin tonifying herbs and/or defined chemicals isolated from the herbs or synthesized in the laboratory, would be of considerable value.
Competing interests
The author(s) declare that they have no competing interests.
|
v3-fos-license
|
2020-05-21T09:18:38.634Z
|
2020-05-18T00:00:00.000
|
219466128
|
{
"extfieldsofstudy": [
"Medicine",
"Materials Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2020/sc/d0sc01039a",
"pdf_hash": "88b8e84bf116f974aa56dbb04d345699a027696a",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41830",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science",
"Physics"
],
"sha1": "5d749edeb4442901898a9fe83a117b1e4fd4ecc6",
"year": 2020
}
|
pes2o/s2orc
|
Surface passivation extends single and biexciton lifetimes of InP quantum dots
A combined optical spectroscopic study reveals the photophysical changes of InP QDs upon surface passivation by various methods.
Introduction
Colloidal quantum-confined nanocrystals, especially spherical quantum dots (QDs), can exhibit broad tunability of their band-gaps, multiexciton lifetimes and band-edge positions simply by varying the particle size, 1,2 enabling their applications in lasing, 3,4 light-emitting diodes (LEDs), [5][6][7] and solar fuel generation. [8][9][10][11] In the past few decades, Cd- and Pb-based chalcogenide nanocrystals (e.g., CdSe 3,4 and PbSe 12) have been widely investigated, leading to significant advancement of our fundamental understanding of exciton and carrier dynamics in QDs and of how these properties can be optimized through size, shape and composition control of QDs and heterostructures to improve device performance. 2,13,14 However, the toxic heavy metals in Cd- and Pb-based QDs pose potential human health risks, hindering their commercial applications. Meanwhile, InP, a binary III-V semiconductor, is considered one of the most promising environmentally friendly nanocrystal materials. With a bulk bandgap of 1.35 eV and suitable band-edge energies (CB: −3.85 eV and VB: −5.2 eV), 15 InP QDs can be tuned to absorb a wide range of photon energies and see emerging applications in solar cells, 16,17 LEDs, 7,18 and photocatalytic reactions. 8,10 However, despite these desirable material properties, early studies 19,20 have shown that these materials contain a large density of trap states (Scheme 1a), evidenced by their low photoluminescence (PL) quantum efficiencies (PLQEs). Two post-synthetic methods have been developed to passivate the trap states in InP QDs and improve their PLQEs. [20][21][22][23][24][25][26] The first passivation method employs a post-synthetic HF treatment of InP QDs under illumination. [20][21][22][23]25 The HF treatment has been suggested to remove surface P dangling bonds through a photochemical reaction of trapped holes with the P atom, which is then attacked by F− ions and eventually detaches from the InP surface. 25 Meanwhile, a recent study suggested an alternative/complementary mechanism in which the F− treatment may act by removing the electron traps caused by surface indium dangling bonds. 22 Thus, the underlying mechanism remains to be further clarified. The second passivation method, also a general method for many other nanocrystals, is to grow an inorganic shell around the InP core. 18,24,27 Although the epitaxial growth of shells around a nanocrystal core is known to passivate the surface states of the core material, [28][29][30][31] it is unclear whether electron and/or hole traps are removed and how this method differs from the HF treatment. Therefore, a clear understanding of these two passivation methods and their impact on exciton and carrier dynamics can provide important insights towards the rational design and improvement of InP QDs for many applications.
Furthermore, many optoelectronic applications of QDs, e.g., lasing 3,4 and LEDs, [5][6][7] involve multiexciton states under operational conditions. The dominant energy loss mechanism in the multiexciton regime is Auger recombination (AR), [32][33][34] in which the nonradiative decay of one exciton simultaneously promotes another exciton or carrier into a higher energetic state (Scheme 1b). As such, the study of AR processes in QDs, and especially the exploration of effective ways to suppress them, is crucial for the development of optoelectronic devices with improved performance. 32,33,35,36 So far, the impact of surface passivation on multiexciton states in InP QDs remains poorly characterized and understood. 37 It is important to develop surface passivation schemes that can improve the lifetime of not only the single- but also the multiple-exciton states.
Herein, we examine the mechanisms by which HF treatment and core/shell structures improve the PLQE of InP QDs and the effect of these treatments on single/multiple exciton lifetimes. We directly measure how these passivation schemes affect the electron and hole trapping processes by comparing transient absorption and time-resolved photoluminescence decay kinetics. We show that HF treatment predominantly removes hole traps, while the growth of a ZnS shell can effectively remove both electron and hole traps on InP QDs. More interestingly, we observe that the biexciton lifetime of InP QDs is significantly shorter than that of CdSe QDs of similar size. While HF treatment has a minor impact on this short biexciton lifetime, the growth of a thin ZnS shell (~0.2 nm) around the InP core results in a dramatic 20-fold increase of the biexciton lifetime in InP QDs. The latter effect has not been reported previously for Cd chalcogenide QDs. These results highlight that, compared with traditional Cd-chalcogenide nanocrystals, rational surface treatment is more crucial for InP QDs for passivating trap states and extending biexciton lifetimes for various optoelectronic applications.
Existence of trap states in InP
The InP QDs used in the present study were synthesized following a recently developed "greener" procedure using tris(diethylamino)phosphine as the phosphine precursor in the presence of Zn2+ additives. 8,27,38 Fig. 1 shows the UV-vis absorption and PL spectra of the as-synthesized InP QDs of four different sizes (estimated diameters: ~2-3 nm (ref. 39)) with band-edge exciton absorption peaks at 440, 456, 505 and 540 nm, respectively. The PL spectra of these QDs show only weak, even negligible, emission from the band-edge exciton and are dominated by broad emission at longer wavelengths (between 600-800 nm). Broad PL emission has also been observed previously for InP QDs synthesized by other methods and has been attributed to hole trap-assisted emission, 15 similar to that reported in Cd-based nanocrystals. 40 The absence of band-edge exciton emission and the appearance of trap-assisted emission indicate fast carrier trapping by trap states in the as-synthesized InP QDs. As such, the PLQEs of all as-synthesized InP QDs are low (<1%), which highlights the importance of understanding the trap passivation mechanisms in InP QDs. Below, we use InP QDs with band-edge exciton absorption at 525 nm as a model system to investigate the exciton dynamics of untreated InP QDs and their passivation mechanisms by both HF treatment (or "HF etching" 20) and ZnS shell growth.

(Scheme 1. Schematic of photophysical processes in InP QDs in single and multiple exciton states. (a) Single exciton regime: the photogenerated excitons decay predominantly through radiative electron-hole recombination (k_e/h,rad), electron trapping (k_e,non) and hole trapping (k_h,non) processes. (b) Biexciton regime: the photogenerated excitons decay through an additional fast Auger recombination process, in which the nonradiative decay of one exciton (k_xx) simultaneously promotes another electron (blue line) or hole (red line) into a higher energetic state. In this study, we discuss how surface treatments affect both processes in InP QDs.)
Exciton dynamics and carrier trapping processes in untreated InP QDs
We start with the assignment of spectral features in the transient absorption (TA) spectra of untreated InP QDs and then use these features to characterize their exciton dynamics. Fig. 2a shows the TA spectra of InP QDs after photoexcitation at 400 nm under low fluence, corresponding to an estimated average exciton number per QD of ~0.07 (Fig. S1†). Thus, the following results represent the exciton dynamics of InP QDs in the single exciton regime. The TA spectra show two major features: a strong bleach region between 450 nm and 550 nm (labeled XB below) and a weak photoinduced absorption region from 625 to 700 nm (labeled PA below, Fig. 2a inset). In QDs, the XB results from the state-filling effect of photogenerated electrons and/or holes, which block the ground-state excitonic transition of the QDs and lead to an absorbance decrease (or bleach) feature in the TA spectra. 2,41,42 Owing to the large difference in the effective masses of electrons (m_e = 0.08) and holes (m_h = 0.64) in InP QDs, 15 the density of hole states is much larger than that of electron states, and the XB signal is expected to originate mostly from the state-filling effect of electrons. The nature of the PA in QDs varies between systems and has previously been shown to originate from the intraband transition of photogenerated electrons 15 or of free/trapped holes. 40,43 To verify the spectral origins of the XB and PA regions, we conducted TA measurements with the selective addition of an electron scavenger, benzoquinone (BQ), to the QD sample. The addition of an electron scavenger induces electron transfer from the InP QDs to the acceptor molecules, thus reducing the amplitude of TA signals originating from electrons. Fig. 2b shows that the addition of BQ to InP QDs results in a faster decay of both the XB and PA regions, with almost complete quenching achieved in <1 ns (~97%, original TA spectra shown in Fig. S2†). Similar fast decays of XB and PL signals have been observed previously for InP QD/methylviologen complexes and were attributed to electron transfer from the QD to the electron acceptor. 15 Thus, the above results, the strong quenching of both the XB and PA signals in the presence of electron scavengers together with the large difference in electron and hole effective masses, indicate that both the XB and PA in the TA spectra of InP QDs originate predominantly from photogenerated electrons. This assignment is further supported by the identical decay kinetics of the XB and PA signals in the absence of any scavengers (Fig. 2c). It should be noted, however, that the early-time PA decay (within 1 ps) is not identical to the XB decay: there is a simultaneous decay of the PA signal (0.22 ± 0.05 ps) and a rise of the XB signal (0.32 ± 0.05 ps) (Fig. 2c inset). This early-time mismatch is attributed to the spectral extension of the PA signal into the XB region, which causes the decay of the PA signal to manifest itself as a rise in the XB region. Similar spectral overlap and assignments in InP QDs have also been reported previously, 15 indicating that the spectral natures of the XB and PA in InP QDs are not sensitive to the synthesis method.
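For context on how such a low average exciton number (⟨N⟩ ≈ 0.07) is typically estimated in TA experiments, the sketch below computes ⟨N⟩ = jσ from the pump photon fluence j and the QD absorption cross-section σ, together with the Poissonian fraction of multiply excited QDs; the pulse energy, spot size and cross-section are hypothetical values, not the parameters of this study.

```python
# How an average exciton number per QD (<N>) of ~0.07 is typically estimated
# in TA experiments: <N> = j * sigma, where j is the pump photon fluence and
# sigma is the QD absorption cross-section at the pump wavelength.
import math

pulse_energy_J  = 2.0e-9        # hypothetical pump pulse energy at the sample
spot_area_cm2   = 2.0e-4        # hypothetical excitation spot area
photon_energy_J = 6.626e-34 * 2.998e8 / 400e-9   # one 400 nm photon

j = pulse_energy_J / (photon_energy_J * spot_area_cm2)  # photons/cm^2/pulse
sigma = 3.5e-15                 # hypothetical absorption cross-section, cm^2

N_avg = j * sigma
# With Poissonian excitation statistics, the fraction of QDs holding more
# than one exciton is negligible at this fluence:
p_multi = 1.0 - math.exp(-N_avg) - N_avg * math.exp(-N_avg)
print(f"<N> = {N_avg:.3f}, P(N>1) = {p_multi:.4f}")
```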
With the above spectral assignments, we now focus on the exciton decay dynamics of InP QDs. Owing to the existence of trap states, 20,22,23,44 the XB decay in InP QDs should reflect the decay of photogenerated electrons through both the radiative channel, via the electron-hole recombination process (k_e/h,rad), and the nonradiative channel, via the electron trapping process (k_e,non) (Scheme 1a). Meanwhile, the PL decay of the band-edge exciton (k_PL) reflects the radiative and nonradiative decay of both electrons and holes. Therefore, by directly comparing the PL decay of the band-edge exciton with the XB decay from the TA measurement 23,45 (Fig. 2d), one can calculate the nonradiative decay component of the holes (k_h,non, Scheme 1a) through eqn (1):

k_h,non = k_PL − k_XB (1)

Fig. 2d shows the PL decay of InP QDs monitored at 550 nm, where the emission is predominantly from the band-edge exciton (Fig. 1 and more discussion in Fig. S3†). Owing to the ns resolution of our setup (instrument response time, IRF ~0.5 ns), we compare the PL and XB decays by normalizing them at 1 ns; as such, the extracted hole dynamics reflect only processes slower than ~1 ns. As shown in Fig. 2d, the PL decay of the InP QDs is much faster than the TA XB decay, indicating the existence of hole trapping processes in addition to radiative electron-hole recombination and electron trapping. From the best fit to an empirical three-exponential decay function, the amplitude-weighted average time constants of the PL and XB decays are calculated to be ~3.1 and ~26.7 ns, respectively (detailed parameters are listed in Table S1†). Using eqn (1), the time constant of the hole-trapping process is therefore estimated to be ~3.4 ns, which is significantly faster than the radiative decay time (>26.7 ns). This fast trapping time constant is consistent with the observation of weak band-edge exciton emission and strong trapped hole-assisted emission shown in Fig. 2b. In the following sections, we illustrate how the two passivation methods for InP QDs, HF treatment and the growth of a ZnS shell, affect the exciton decay and carrier trapping processes in InP QDs.
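A quick numerical check of eqn (1), reading it as rate additivity (1/τ_h = 1/τ_PL − 1/τ_XB): plugging in the fitted average time constants reproduces the hole trapping times quoted in the text to within rounding.

```python
# Numerical check of eqn (1): k_h,non = k_PL - k_XB, expressed with the
# amplitude-weighted average time constants (tau = 1/k, in ns).
def hole_trapping_tau(tau_pl: float, tau_xb: float) -> float:
    return 1.0 / (1.0 / tau_pl - 1.0 / tau_xb)

print(hole_trapping_tau(3.1, 26.7))   # untreated InP: ~3.5 ns (quoted as ~3.4 ns)
print(hole_trapping_tau(18.1, 32.4))  # InP@F-(100 ul), discussed below: ~41.0 ns
```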
Exciton dynamics in HF-treated InP QDs: passivation of hole traps
The HF-treated InP QDs (denoted InP@F− QDs) were prepared by adding different amounts of a diluted HF solution (0.527 ml of 45 wt% HF, 0.0625 ml H2O and 5 ml butanol) to the InP QD solution and illuminating (xenon lamp, 10 mW cm−2) the mixtures under ambient conditions, following literature procedures. 20 The amounts of diluted HF solution were varied from 5, 25, 100, to 250 μl, corresponding to estimated HF/InP molar ratios of 480, 2410, 9630 and 24 100, respectively; these samples are referred to below as InP@F−(5 μl), InP@F−(25 μl), InP@F−(100 μl) and InP@F−(250 μl). Fig. 3a shows the absorption and photoluminescence spectra of InP@F−(100 μl) as an example to illustrate the spectral changes after HF treatment; data for the other samples are similar and are shown in Fig. S4 and S5.† After the treatment, the band-edge absorption peak of the InP QDs is blue-shifted from 525 to 485 nm, and the PL spectra change significantly, with the appearance of an intense band-edge emission peak at 550 nm and almost complete suppression of the hole trap-assisted emission. Furthermore, the PLQE of the InP@F− QDs improves gradually with increasing HF amount (inset in Fig. 3a), with the highest PLQE of 16-20% achieved in InP@F−(100 μl). Further increase of the HF concentration results, however, in aggregation of the InP@F− QDs (Fig. S6†), whose strong light scattering hinders a precise quantification of the PLQE; this sample is thus omitted from the discussion below. The observed improvement of InP PLQE by HF treatment is in good agreement with previous reports. [20][21][22]25 Below, we investigate the effect of HF treatment on the carrier dynamics and the mechanism of PLQE improvement in InP@F− QDs by time-resolved spectroscopy.
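As a consistency check on the quoted molar ratios, note that all aliquots are drawn from the same diluted HF stock, so the HF/InP ratio should scale linearly with the added volume; anchoring to the 5 μl aliquot's ratio of 480 reproduces the other values to within rounding.

```python
# Consistency check on the quoted HF/InP molar ratios: each aliquot comes
# from the same diluted HF stock, so the ratios should scale linearly with
# the added volume (anchored here to the 5 ul aliquot's ratio of 480).
volumes_ul = [5, 25, 100, 250]
ratio_per_ul = 480 / 5
for v in volumes_ul:
    print(f"{v:>3} ul -> HF/InP ~ {v * ratio_per_ul:,.0f}")
# -> 480, 2,400, 9,600, 24,000: in line with the quoted 480/2410/9630/24100
```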
The inset in Fig. 3b shows the TA spectra of InP@F−(100 μl) after 400 nm photoexcitation. In comparison with the TA spectra of the untreated InP QDs (Fig. 2a), the XB peak is blue-shifted to 485 nm, in agreement with the shift in the ground-state absorption spectra (Fig. 3a); data for the other samples are similar and are shown in Fig. S7.† Comparison of the normalized XB decays of samples with different HF treatments (Fig. 3b) shows that the XB decay of the InP@F− samples is faster than that of the untreated InP QDs (black line in Fig. 3b) at low HF concentration (InP@F−(5 μl) and InP@F−(25 μl)) and is the same as that of the untreated InP QDs at higher HF concentration. Control measurements show that, in the absence of HF, illumination of the InP QDs results in the largest acceleration of the XB decay (Fig. S8†). We therefore attribute the faster XB decay at low HF concentration to photodegradation of InP, probably due to the formation of an oxide layer around the InP QDs, 46 which may act as an additional electron trap. With sufficient amounts of HF (InP@F−(100 μl) and InP@F−(250 μl)), this degradation pathway is effectively suppressed, resulting in XB decay dynamics in the InP@F− QDs identical to those of the untreated InP QDs.
Comparison of the normalized PL decays of the band-edge exciton at 550 nm for the InP@F− QDs (Fig. 3c) shows that the PL lifetimes of all samples are longer than that of the untreated InP QDs, and that the lifetime increases with higher concentrations of HF treatment. Because the electron decay of InP@F− is either faster than or unchanged relative to the untreated InP QDs (see above), the prolonged PL decay must arise from a slower hole trapping rate caused by the HF treatment, according to eqn (1). In other words, HF treatment can effectively remove the hole traps in InP QDs. To quantify the change in hole trapping rate, Fig. 3d compares the XB and PL decays of the best InP@F− sample, InP@F−(100 μl), using a similar approach to that in Fig. 2d; the XB (black line) and PL decay (grey line) of the untreated InP QDs are also plotted for comparison. Fitting the XB and PL decays of InP@F−(100 μl) with multi-exponential functions yields amplitude-weighted average time constants of ~32.4 and ~18.1 ns, respectively (Table S2†), from which the hole trapping time constant in the InP@F−(100 μl) sample is estimated to be ~41.0 ns (eqn (1)). A previous X-ray photoelectron spectroscopic study of InP QDs suggested that the HF treatment proceeds through the removal of P dangling bonds by F− under illumination. 25 In agreement with that report, our optical spectroscopic results show that HF treatment reduces the hole trap density in the InP QDs synthesized herein 8,27,38 and lengthens the hole trapping time constant from ~3.4 to ~41.0 ns. Our result also agrees with recent calculations, which suggested that the presence of F− termination on the InP surface decreases the oscillator strength of its excitonic transition, resulting in a longer radiative PL decay. 22 On the other hand, our result is not consistent with other reports suggesting that HF treatment removes electron traps on InP QDs by passivating surface indium dangling bonds. 22,23

Exciton dynamics in InP@ZnS core@shell QDs: passivation of both electron and hole traps

The ZnS shell growth was achieved by the gradual injection of a 2 M sulfur solution in trioctylphosphine into the as-synthesized InP QD solution at 260 °C. 8 To understand the effects of the ZnS shell on the carrier dynamics, we sampled the solution during the ZnS shell growth at different times: 0 (right after the injection of all the S precursor), 10, 20, 30, and 60 min, and measured the optical properties of these aliquots (denoted below as InP@ZnS 0 min, InP@ZnS 10 min, InP@ZnS 20 min, InP@ZnS 30 min and InP@ZnS 60 min, respectively). Fig. 4a shows the absorption and PL spectra of the InP@ZnS 30 min sample as an example to demonstrate the spectral changes of the InP QDs due to the ZnS shell; data for the other InP@ZnS samples are similar and are shown in Fig. S9 and S10.† After coating with the ZnS shell, the band-edge exciton absorption peak of the InP@ZnS core@shell QDs is red-shifted from 550 nm (the peak of the InP core) to 590 nm, and the PL of the QDs is significantly improved, with the appearance of a well-defined band-edge emission peak at 620 nm (full width at half maximum: ~60 nm). Meanwhile, the PLQE of the InP@ZnS samples increases with the shell growth time until it stabilizes at 30 min (inset in Fig. 4a), with a maximum achievable PLQE of ~15%.
4a), with the maximum achievable PLQE of 15 and ZnS (CB: À3.1 eV, VB: À6.6 eV), 47 the core@shell InP@ZnS QDs studied here is expected to form a type-I structure, i.e., both the electron and hole wavefunctions are mostly conned within the InP core with only small tunneling into the ZnS shell. As such, their absorption spectra are expected to show a small redshi compared to InP core only QDs. The observed absorption spectra of InP@ZnS QDs are in good agreement with the expected type I band assignment and the calculated spectra by the EMA method ( Fig. S2e-1 †).
To understand the passivation mechanism of InP QDs by the ZnS shell, we compare the XB and PL decays of the InP@ZnS QDs sampled during the shell growth at different times. The TA spectra of InP@ZnS 30 min after 400 nm photoexcitation (the inset in Fig. 4b) show an XB feature centered at ~590 nm, consistent with its ground-state absorption; data for the other samples are similar and shown in Fig. S13.† The average time constants of the XB decay increase immediately from ~26.7 ns in untreated InP to 70.0 ns in InP@ZnS 0 min and then more gradually to 156.7 ns in InP@ZnS 60 min with the further growth of the ZnS shell (Fig. 4b, Table S3†), indicating a dramatic effect of shell growth on the conduction-band electron lifetime. Due to the above-mentioned type-I confinement structure of the InP@ZnS QDs and the small thickness of the ZnS shell, the growth of the ZnS shell should have only a small impact on the electron-hole radiative recombination process. In ESI Note S2e,† we estimate, using EMA, that this change should be about 3% and cannot account for the large change of the electron lifetimes observed here. Instead, the above results indicate that the ZnS shell, in contrast with HF treatment, leads to effective passivation of electron trap states in InP@ZnS QDs and extends the electron lifetime.
Unlike the XB decays, the PL decays of these InP@ZnS samples do not show a monotonic dependence on the growth time of the ZnS shell (Fig. 4c). These PL decays can be well fitted by three-exponential functions. From the amplitude-weighted average PL lifetimes (Table S4†) and the XB lifetimes above, we obtain the hole trapping time constants for the InP@ZnS samples from 0 to 60 min growth time. These hole trapping time constants are all significantly slower than that (~3.4 ns) in the untreated InP QDs (Fig. 2d), indicating that the growth of the ZnS shell also passivates the hole traps. However, prolonged ZnS shell growth appears to increase the hole trapping rate, with the trapping time decreasing from 129.9 to 48.9 ns. Although shell growth has been shown to induce lattice strain and increase hole trapping, the shell is relatively thin in our samples (~0.2 nm in InP@ZnS 30 min) and is unlikely to cause the observed PL changes.48,49 Recent studies suggested that the growth of the ZnS shell around InP QDs can introduce interior/surface lattice disorder due to the incorporation of Zn²⁺ into the InP lattice,44,50 which introduces new trap states with broad PL emission.44 We therefore tentatively attribute the observed faster hole trapping rate at long shell growth times to the incorporation of Zn²⁺ into the InP lattice. As such, although prolonged growth of ZnS can further improve the electron lifetime, it also leads to a shortening of the hole lifetime; as a result of these two competing effects, the PLQE levels off after 30 minutes of shell growth time (Fig. 4b inset). These results demonstrate that the growth of a thin ZnS layer can effectively passivate both the electron and hole traps in InP QDs, unlike the HF treatment, which only passivates hole traps herein. Our results also show that the initial addition of the S precursor has the most pronounced effect on the passivation of both electron and hole traps. Similarly, a recent study23 has shown that post-synthetic treatment of InP QDs with Cd²⁺ or Zn²⁺ cations, which are thought to replace In³⁺ on the surface, can improve the PLQE from approximately 1 to 50%. Although the exact chemical nature of these surface treatments remains to be further understood, these results collectively demonstrate that the surface of InP QDs is indeed the key to the improvement of their PLQE and optoelectronic applications.
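The hole-trapping time constants quoted in this and the previous subsection all follow from the same rate subtraction. A minimal numerical sketch, assuming eqn (1) takes the standard parallel-rate form 1/τ_PL = 1/τ_XB + 1/τ_h (the exact form of eqn (1) is defined earlier in the paper; the numbers below are the lifetimes quoted for InP@F⁻(100 ml)):

```python
# Minimal sketch: hole-trapping time from the XB (electron) and PL lifetimes,
# assuming eqn (1) has the parallel-rate form 1/tau_PL = 1/tau_XB + 1/tau_h.
def hole_trapping_time(tau_xb_ns: float, tau_pl_ns: float) -> float:
    """Return tau_h in ns from the measured XB and PL lifetimes."""
    return 1.0 / (1.0 / tau_pl_ns - 1.0 / tau_xb_ns)

# InP@F-(100 ml): tau_XB ~ 32.4 ns, tau_PL ~ 18.1 ns  ->  tau_h ~ 41 ns
print(round(hole_trapping_time(32.4, 18.1), 1))  # ~41.0, matching the text
```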
Biexciton Auger recombination processes in InP, InP@F⁻ and InP@ZnS QDs

Finally, we compare the biexciton decays in InP, InP@F⁻ and InP@ZnS QDs and discuss the effect of HF treatment and ZnS shell growth on the biexciton lifetimes. To study this, we follow previous procedures51,52 by monitoring the XB decay kinetics of the QDs as a function of the excitation fluence, as shown in Fig. 5a–c for the InP, InP@F⁻(100 ml) and InP@ZnS 30 min QDs, respectively. The corresponding fluence-dependent TA spectra are shown in Fig. S14–S16.† In these kinetic analyses, the contribution from the overlapping PA signal, as mentioned above (Fig. 2c), has been subtracted. In all cases, at low excitation fluence the XB decay is slow (>1 ns), consistent with the fact that the TA signal is dominated by QDs with one exciton. At higher excitation fluences, as the average exciton number per QD increases, both the amplitude of the total XB signal and the amplitude of a fast decay component increase. The appearance of the latter can be better seen in the lower panels of Fig. 5a–c, where the decay kinetics at different excitation fluences have been normalized at 1 ns. These normalized decay curves show identical decays from 100-1000 ps, indicating that after ~100 ps the XB signal reflects the decay of single-exciton states (Scheme 1a). The normalized XB kinetics also show a fast decay channel within 1-100 ps whose amplitude increases at higher fluences, while its decay time constant is independent of the fluence. This fast component can be attributed to the Auger recombination (AR) of multiple excitons. Because the XB decay in InP QDs reflects only the dynamics of the band-edge 1S_e electrons with a two-fold degeneracy,53 the faster component is a direct probe of the biexciton AR process (Scheme 1b),54,55 whose contribution increases as the fraction of excited QDs in n = 2 states increases at higher fluences. Although at higher fluence the contribution of n > 2 exciton states also increases, their effect cannot be directly probed at the 1S XB, which saturates at n = 2.56,57 To obtain the biexciton decay kinetics, we subtract from the XB decay of each sample under the highest fluence (⟨N⟩ estimated to be 1.0, 1.5 and 1.5 for InP, InP@F⁻(100 ml) and InP@ZnS 30 min, respectively, Fig. S17†) its decay under the lowest fluence; the resulting biexciton decay kinetics are shown in Fig. 5d. For the InP and InP@F⁻(100 ml) QDs, the decay kinetics can be well fit by a single-exponential decay function, revealing biexciton lifetimes of 1.3 ± 0.1 and 2.3 ± 0.2 ps, respectively. For the InP@ZnS QDs, a satisfactory fit to the decay kinetics requires a biexponential function with time constants (amplitudes) of 1.3 ± 0.2 ps (~36 ± 2%) and 20.2 ± 1 ps (~64 ± 2%), respectively. The faster component is similar to the biexciton lifetime of the untreated InP QDs characterized above. We attribute this to non-uniform ZnS shell growth around the InP QDs, which leaves a portion of InP QDs without a ZnS shell. This non-uniformity can be rationalized by considering the irregular shape of the InP QDs, evidenced in Fig. S12† and reported previously,50 and the very thin ZnS shell studied here. The slower component (20.2 ± 1 ps) can therefore be assigned to the biexciton Auger lifetime of the InP@ZnS QDs. Importantly, these results show that although the HF treatment is efficient in removing the surface hole traps, it is not sufficient to extend the biexciton lifetime of the InP QDs used in the current study.
The growth of the ZnS shell can, on the other hand, result in a dramatic 20-fold improvement of the biexciton lifetime in InP@ZnS QDs.
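A minimal sketch of the biexciton-extraction procedure described above, with hypothetical decay traces standing in for the measured kinetics (the time grid, amplitudes and lifetimes below are placeholders, not the actual data):

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the analysis described in the text: subtract the lowest-fluence
# XB kinetics (single-exciton reference) from the highest-fluence kinetics,
# then fit the residual with a single exponential to get the biexciton lifetime.
t = np.linspace(0, 100, 500)                 # delay axis in ps (placeholder)
tau_x, tau_xx = 5e4, 1.3                     # single-/biexciton lifetimes (ps)
low = np.exp(-t / tau_x)                     # low-fluence trace (n = 1 only)
high = np.exp(-t / tau_x) + 0.4 * np.exp(-t / tau_xx)  # high-fluence trace

residual = high - low                        # isolates the biexciton decay
popt, _ = curve_fit(lambda t, a, tau: a * np.exp(-t / tau),
                    t, residual, p0=(0.3, 2.0))
print(f"biexciton lifetime ~ {popt[1]:.2f} ps")  # recovers ~1.3 ps here
```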
The measured biexciton lifetime in the InP QDs (1.3 ± 0.2 ps) is considerably faster than the previously reported biexciton lifetime of CdSe QDs (8.5 ps) of a similar size (2.4 nm).55 Similarly, the other InP QDs used in the present studies (440 nm, 456 nm and 505 nm in Fig. 1) also show ultrafast biexciton lifetimes (all <1 ps, Fig. S18†). These Auger lifetimes in the untreated InP QDs also appear faster than the ~10 ps expected from the "universal volume scaling law", in which the Auger recombination rate was shown to follow an inverse R³ dependence regardless of the chemical nature of the quantum dots.58 Furthermore, the 20-fold improvement in Auger lifetime by a thin type-I ZnS shell is unexpected. Although the growth of the ZnS shell can delocalize the electron and hole wavefunctions into the ZnS and thereby reduce the overlap of the electron and hole wavefunctions in the Auger process, the type-I nature of the core/shell structure used here should result in only minor electron and hole delocalization (Fig. S2e-1†). Even for a type-II structure, where the electron and hole wavefunction overlap can be significantly reduced, a previous study has shown that the growth of a ~0.2 nm thick CdS shell around a 2.4 nm CdSe core only resulted in a ~3.5-fold increase of the biexciton lifetime from 8.5 to 29 ps.55 Thus, the underlying mechanism of this significant improvement of the biexciton lifetime by the ZnS shell seems to go beyond a simple consideration of the effect of the type-I heterostructure on the electron and hole wavefunctions.
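For reference, the "universal volume scaling law" invoked above amounts to a biexciton Auger lifetime proportional to the particle volume; a compact statement (with R the QD radius, and the proportionality constant left unspecified):

```latex
\tau_{xx} \;\propto\; V \;\propto\; R^{3},
\qquad\text{i.e.}\qquad
k_{\mathrm{Auger}} = \tau_{xx}^{-1} \;\propto\; R^{-3}.
```

Under this scaling, QDs of the ~2.4 nm size studied here would be expected to show τ_xx on the order of 10 ps, as quoted in the text, rather than the measured ~1.3 ps.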
The key difference in passivation mechanism between the HF and ZnS treatments is the significant electron trap passivation achieved by the growth of the ZnS shell. We hypothesize that the large density of electron trap states in the InP and InP@F⁻ QDs is likely responsible for their fast Auger recombination. In bulk semiconductors, trap-assisted Auger processes are known to cause a faster decay of biexciton states.59-61 This effect was also recently suggested to occur in ZnO nanocrystals, to account for their anomalous volume scaling and fast Auger process.62 Specifically, in n-doped ZnO nanocrystals,62 the negative trion state can decay by the recombination of a conduction-band electron with a surface-trapped hole while exciting the additional electron to a higher energy state. The involvement of a localized trap state may further relax the requirement of momentum conservation and thus accelerate the Auger decay rate.63 Thus, it is possible that the effective removal of surface electron trap states by the growth of the ZnS shell suppresses the fast trap-assisted Auger decay channel and lengthens the biexciton lifetime in InP@ZnS core/shell QDs. To precisely understand the trap-assisted Auger processes, further high-level atomistic calculations are required; they should provide the crucial correlation between the Auger efficiency and the nature of the trap states. A precise understanding of the nature and location of trap states and their photophysical impact is vital for a deeper understanding and more rational design of InP QDs.
The results reported above reveal different passivation mechanisms for HF treatment and ZnS shell growth on the InP QDs synthesized herein. They also reveal the dramatic difference in the Auger lifetimes of InP QDs as a result of these surface treatments. Because of the central role of single- and multi-exciton lifetimes in optoelectronic applications, these results provide useful insight for the rational design and improvement of these materials. Further optimization of core/shell heterostructures, or a combined application of both HF treatment and core-shell passivation schemes,7 holds promise for the improvement of InP QDs. It is noted that, in state-of-the-art high-PLQE InP QDs, a fine gradient core-shell structure was utilized to tune the carrier wavefunction delocalization and passivate the surface trap states,7,18,49 achieving near-unity PLQE. For these highly emissive InP QDs, the surface trap states are likely effectively passivated to minimize the effect of electron-hole nonradiative recombination. Furthermore, the reduced overlap of the carriers' wavefunctions afforded by the gradient core/shell structure and the reduced trap-assisted Auger process likely have a combined effect in extending their bi- and multi-exciton lifetimes, thereby enhancing their performance as light-emitting materials.
Conclusion
In summary, the present study investigates the mechanisms by which HF treatment and InP@ZnS core/shell heterostructures affect the PL quantum efficiency as well as the single-exciton and biexciton lifetimes in InP QDs. The transient absorption study of exciton and carrier decay dynamics in untreated InP QDs reveals the existence of both fast electron and hole trapping processes. Surface passivation of InP QDs by both the HF treatment and the growth of a ZnS shell (type-I heterostructure) leads to large increases in the PLQE of InP QDs. However, the improvement by HF treatment arises predominantly from removing the fast surface hole trapping channel (τ_hole = 3.4 ± 1 ns), while the growth of a ZnS shell slows down both the electron and hole trapping processes simultaneously. Thus, although both surface modifications are effective in improving the PLQE, they have different passivation mechanisms and effects on the carrier trapping processes, which is important for the further rational improvement of InP QDs for various photocatalytic and optoelectronic applications.
Furthermore, the biexciton lifetime of the untreated InP QDs is only ~1.3 ps, significantly shorter than that of CdSe QDs of similar size, and the HF treatment is insufficient to suppress the fast Auger recombination processes. The growth of a thin ZnS shell (~0.2 nm, approximately one monolayer of ZnS), on the other hand, can extend the biexciton lifetime 20-fold, up to 20.2 ± 1 ps. These results highlight that untreated InP QDs suffer severe Auger recombination losses, representing a major limitation for their development in LEDs and other devices whose operation involves multiexciton states. On the other hand, core-shell passivation, even without significantly changing the wavefunction overlap between electrons and holes in the InP core, is a very promising approach for reducing this Auger loss. Although the mechanism of this large increase in biexciton lifetime remains to be further clarified, our results imply a likely significant role of trap-state-assisted Auger processes in InP QDs. Further computational studies on the relationship between trap state passivation and Auger recombination rates should provide vital insights.
Associated content
Additional sample characterization and spectroscopic data.
Conflicts of interest
The authors declare no competing financial interest.
|
v3-fos-license
|
2011-03-30T10:29:52.000Z
|
2010-12-20T00:00:00.000
|
106398855
|
{
"extfieldsofstudy": [
"Physics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://iopscience.iop.org/article/10.1088/1367-2630/13/3/035013/pdf",
"pdf_hash": "3003abf2f40419cfcf08689479c13bbfb2df6994",
"pdf_src": "Arxiv",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41835",
"s2fieldsofstudy": [
"Physics"
],
"sha1": "15105440ee4662af22b91c79b5a08afe6eec3798",
"year": 2010
}
|
pes2o/s2orc
|
Magnetism and domain formation in SU(3)-symmetric multi-species Fermi mixtures
We study the phase diagram of an SU(3)-symmetric mixture of three-component ultracold fermions with attractive interactions in an optical lattice, including the additional effect on the mixture of an effective three-body constraint induced by three-body losses. We address the properties of the system in $D \geq 2$ by using dynamical mean-field theory and variational Monte Carlo techniques. The phase diagram of the model shows a strong interplay between magnetism and superfluidity. In the absence of the three-body constraint (no losses), the system undergoes a phase transition from a color superfluid phase to a trionic phase, which shows additional particle density modulations at half-filling. Away from the particle-hole symmetric point the color superfluid phase is always spontaneously magnetized, leading to the formation of different color superfluid domains in systems where the total number of particles of each species is conserved. This can be seen as the SU(3) symmetric realization of a more general tendency to phase-separation in three-component Fermi mixtures. The three-body constraint strongly disfavors the trionic phase, stabilizing a (fully magnetized) color superfluid also at strong coupling. With increasing temperature we observe a transition to a non-magnetized SU(3) Fermi liquid phase.
Introduction
Cold atoms in optical lattices provide us with an excellent tool to investigate notoriously difficult problems in condensed matter physics [1,2]. Recent progress towards this goal is exemplified by the experimental observation of the fermionic Mott insulator [3,4] in a binary mixture of repulsively interacting ⁴⁰K atoms loaded into an optical lattice, and of the crossover between Bardeen-Cooper-Schrieffer (BCS) superfluidity and Bose-Einstein condensation (BEC) [5,6,7] in a mixture of ⁶Li atoms with attractive interactions.
At the same time, ultracold quantum gases also allow us to investigate systems which have no immediate counterparts in condensed matter. This is the case for fermionic mixtures where three internal states σ = 1, 2, 3 are used, instead of the usual binary mixtures that mimic the electronic spin σ = ↑, ↓. These multi-species Fermi mixtures are already available in the laboratory, where three different magnetic sublevels of ⁶Li [8,9,10,11] or ¹⁷³Yb [12], as well as a mixture of the two internal states of ⁶Li with a lowest hyperfine state of ⁴⁰K [13], have been successfully trapped. In the case of alkali atoms, magnetic or optical Fano-Feshbach resonances can be used to tune the magnitude and sign of the interactions in the system, and in the case of ytterbium or group-II atoms, it is possible to realise three-component mixtures where the components differ only by nuclear spin, and therefore exhibit SU(3)-symmetric interactions [14,15,16]. Moreover, loading these mixtures into an optical lattice would give experimental access to intriguing physical scenarios, since they can realize a three-species Hubbard model with a high degree of control of the Hamiltonian parameters.
Multi-species Hubbard models have attracted considerable interest on the theoretical side in recent years. First studies focused on the SU(3)-symmetric version of the model with attractive interactions. Using a generalized BCS approach [17,18,19], it was shown that the ground state at weak coupling spontaneously breaks the SU(3) ⊗ U(1) symmetry down to SU(2) ⊗ U(1), giving rise to a color superfluid (c-SF) phase, where superfluid pairs coexist with unpaired fermions. Within a variational Gutzwiller technique [20,21], the superfluid phase was then found to undergo, with increasing attraction, a phase transition to a Fermi-liquid trionic phase, where bound states (trions) of the three different species are formed and the SU(3) symmetry is restored. More recently [22,23], the same scenario was found using a self-energy functional approach for the half-filled model on a Bethe lattice in dimension D = ∞. It was suggested [24] that this transition bears analogies to the transition between the quark superfluid and baryonic phases in the context of quantum chromodynamics.
Both the attractive and the repulsive versions of the model were addressed by numerical and analytical techniques for the peculiar case of spatial dimension D = 1 [25,26,27,28], while Mott physics and instabilities towards (colored) density wave formation have been found in the repulsive case in higher dimensions [17,29,30]. It is important to mention that substantial differences are expected in the attractive case at strong coupling when the lattice is not present [31,32]. Those differences are essentially related to the influence of the lattice on the three-body problem in the strong-coupling limit, favoring trion formation [33,34] over pair formation in the continuum, as was shown in Refs. [32,35,36].
Here we consider the SU(3)-symmetric system in a lattice for D ≥ 2 in the presence of attractive two-body interactions, by combining dynamical mean-field theory (DMFT) and variational Monte Carlo (VMC). We analyze several cases of interest for commensurate and incommensurate density. Ground state, spectral, and finite temperature properties are addressed. More specifically, we focus on the transition between the color superfluid and trionic phases, and on a better understanding of the coexistence of magnetism and superfluidity in the color superfluid phase, already predicted in the SU(3)-symmetric case [20,21] but also when the SU(3) symmetry is explicitly broken [37]. We show that the existence of a spontaneous magnetization leads the system to separate into color superfluid domains with different realizations of color pairing and magnetizations whenever the total number of particles in each hyperfine state is conserved. This represents a special case, due to the underlying SU(3) symmetry, of a more general tendency towards phase separation in three-component Fermi mixtures. We point out that all this rich and interesting physics arises merely from having three components instead of two. Indeed, the analogous SU(2) system would give rise to the more conventional BCS-BEC crossover, where the superfluid ground state evolves continuously with increasing attraction [38]. Moreover, in the SU(2) case superfluidity directly competes with magnetism [39].
The case under investigation can be realized with ultracold gases by loading a three-species mixture of ¹⁷³Yb [12] or another group-II element such as ⁸⁷Sr into an optical lattice, or alternatively using ⁶Li in a large magnetic field. However, some realizations with ultracold atoms are plagued by three-body losses due to three-body Efimov resonances [8,9,11], which are no longer Pauli suppressed as in the two-species case. The three-body loss properties and their dependence on the magnetic field have already been measured for ⁶Li [8,9,11], while they are still unknown for three-component mixtures of certain group-II elements. Loading a gas into an optical lattice could be used to suppress losses, as a large rate of onsite three-body loss can prevent coherent tunneling processes from populating any site with three particles [40]. As proposed in Ref. [40] for bosonic systems, in the strong-loss regime a Hamiltonian formulation is still possible if one includes an effective hard-core three-body interaction, which leads to new interesting physics [41]. The effect of this dynamically generated constraint on the fermionic system in D = 1 with attractive interactions was studied in Ref. [27], where it was shown that the constraint may help to stabilize the superfluid phase in some regions of the phase diagram.
For these reasons we also study the effect of including a three-body constraint in the model, as representative of an SU(3)-symmetric mixture in the strong-loss regime. The asymmetric case in the strong-loss regime, which is directly relevant for experiments on ⁶Li close to a Feshbach resonance, has already been addressed in a separate publication [42].
The paper is organized as follows: in the following sections we first introduce the model (Sec. 2) and then the methods used (Sec. 3). Later on we present our results, focusing first on the unconstrained system (Sec. 4), for commensurate and incommensurate densities and then on the effects of the three-body constraint (Sec. 5).
The emergence of domain formation within globally balanced mixtures is discussed in detail in Sec. 6. Final remarks are drawn in Section 7.
Model
Three-component Fermi mixtures with attractive two-body interactions loaded into an optical lattice are well described by the following Hamiltonian:

H = −J Σ_{⟨i,j⟩,σ} (c†_{iσ} c_{jσ} + h.c.) + Σ_i Σ_{σ<σ′} U_{σσ′} n_{iσ} n_{iσ′} − Σ_{i,σ} μ_σ n_{iσ} + V Σ_i n_{i1} n_{i2} n_{i3},   (1)

where σ = 1, 2, 3 denotes the different components, J is the hopping parameter between nearest-neighbor sites ⟨i, j⟩, μ_σ is the chemical potential for species σ, and U_{σσ′} < 0. We introduced the onsite density operators n_{iσ} = c†_{iσ} c_{iσ}. The three-body interaction term with V = ∞ is introduced to take into account the effects of three-body losses in the strong-loss regime, according to Refs. [27,40]; V = 0 corresponds to the case where three-body losses are negligible. While the model and the methods are developed for the general case without SU(3) symmetry, in this paper we concentrate on the SU(3)-symmetric case, reflected by species-independent parameters U_{σσ′} = U, μ_σ = μ. In this case the Hamiltonian (1) reduces to an SU(3) attractive Hubbard model if V = 0. Note that the three-body interaction term is a color singlet and thus does not break SU(3) for any choice of V. On the basis of previous works, the ground state of the unconstrained model is expected to be, at least in the weak-coupling regime, a color superfluid, i.e. a phase where the full SU(3) ⊗ U(1) symmetry of the Hamiltonian is spontaneously broken to SU(2) ⊗ U(1) [17,18]. As shown in [17,18], it is always possible to find a suitable gauge transformation such that pairing takes place only between two of the natural species σ, σ′, and in this paper we choose a gauge in which pairing takes place between the species σ = 1 and σ = 2 (the 1−2 channel), while the third species stays unpaired. Whenever the SU(3) symmetry is explicitly broken, only pairing between the natural species is allowed, to comply with Ward-Takahashi identities [37]. This reduces the continuous set of equivalent pairing channels of the symmetric model to a discrete set of three (mutually exclusive) options for pairing, i.e. 1−2, 1−3 or 2−3. In this case the natural choice would be that pairing takes place in the channel corresponding to the strongest coupling when the mixture is globally balanced. We can always relabel the species such that the strongest attractive channel is the channel 1−2.
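To make the atomic-limit energetics behind trion formation concrete, the following minimal sketch (our own illustration, not code from the paper; U, μ and V values are placeholders) enumerates the eight occupation states of a single site of Hamiltonian (1) at J = 0 in the SU(3)-symmetric case:

```python
from itertools import product

# Single-site (atomic-limit, J = 0) spectrum of the SU(3) model, Eq. (1):
# E = U * (# of onsite pairs) + V * n1*n2*n3 - mu * (n1 + n2 + n3).
def site_energy(occ, U=-1.0, mu=0.0, V=0.0):
    n1, n2, n3 = occ
    pairs = n1 * n2 + n1 * n3 + n2 * n3
    return U * pairs + V * n1 * n2 * n3 - mu * (n1 + n2 + n3)

# With attractive U and V = 0, the triply occupied state (a local trion) has
# energy 3U and is the ground state; V = +inf (hard constraint) removes it.
for occ in product((0, 1), repeat=3):
    print(occ, site_energy(occ))
```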
Other pairing channels can be studied via index permutations of the species. Therefore the formalism developed here is fully general and includes both the symmetric and non-symmetric cases, while only in the SU(3)-symmetric case does our approach correspond to a specific choice of the gauge.
Methods
In order to investigate the model in Eq. (1) in spatial dimensions D ≥ 2 we use a combination of numerical techniques which have proven to give very consistent results for the non-symmetric case [42]. In particular, we use dynamical mean-field theory (DMFT) for D ≥ 3 and variational Monte Carlo (VMC) for D = 2. DMFT provides us with the exact solution in infinite dimension and a powerful (and non-perturbative) approach in D = 3, which has the advantage of being directly implemented in the thermodynamic limit (without finite size effects). VMC allows us to incorporate also the effect of spatial fluctuations which are not included within DMFT, even though the exponential growth of the Hilbert space limits the system sizes that are accessible.
DMFT
Dynamical mean-field theory (DMFT) is a non-perturbative technique based on the original idea of Metzner and Vollhardt, who studied the limit of infinite dimension of the Hubbard model [43]. In this limit, the self-energy Σ(k, ω) becomes momentum independent, Σ(k, ω) = Σ(ω), while fully retaining its frequency dependence. Therefore the many-body problem simplifies significantly, without becoming trivial, and can be solved exactly. In this sense DMFT is a quantum version of the static mean-field theory for classical systems, since it becomes exact in the same limiting case (D = ∞) and can provide useful information also outside of this limit, fully including local quantum fluctuations. In 3D, assuming a momentum-independent self-energy has proved to be a very accurate approximation for many problems where the momentum dependence is not crucial to the physics of the system, such as the Mott metal-insulator transition [44], where the frequency dependence is more relevant than the k dependence.
3.1.1. Theoretical setup for the SU(3) model with spontaneous symmetry breaking - In this work, we generalize the DMFT approach to multi-species Fermi mixtures in order to describe the color superfluid and trionic phases, which are the phases expected to occur in the system. The theory can be formulated in terms of a set of self-consistency equations for the components of the local single-particle Green function Ĝ on the lattice. Since we are dealing here with superfluid phases involving also anomalous components of the Green function, we use a compact notation in terms of mixed Nambu spinors ψ = (c₁, c†₂, c₃), where we already assumed that pairing takes place only between the first two species, as explained in the previous section, and we omit the subscript i (spatially homogeneous solution). We reiterate that this specific choice is valid without loss of generality in the SU(3)-symmetric model, and has the same status as fixing the phase of a complex condensate order parameter in theories with a global phase symmetry.
In practice, within the DMFT approach the original lattice model (1) can be mapped, by introducing auxiliary fermionic degrees of freedom a†_{lσ}, a_{lσ}, onto a single impurity Anderson model (SIAM), whose Anderson parameters ε_{lσ}, V_{lσ}, W_l have to be determined self-consistently. Self-consistency ensures that the impurity Green function of the SIAM is identical to the local component of the lattice Green function. The components of the non-interacting Green function Ĝ₀ of the impurity site, which represent the dynamical analog of the Weiss field in classical statistical mechanics, can be expressed in terms of the Anderson parameters, with ζ_{l,σ} = −iω_n + ε_{lσ}. The self-consistency equations relate the local Green functions to the momentum sum of the lattice Green function, Ĝ_latt(k, iω_n) = Ĝ_latt(ε_k, iω_n), over the M lattice sites, or equivalently to an integral over the density of states D(ε) of the lattice under consideration. The independent components of Ĝ_latt(k, iω_n) are expressed through ζ_σ = iω_n + μ_σ − Σ_σ(iω_n), and the self-energy can be obtained from the local Dyson equation Σ̂(iω_n) = Ĝ₀⁻¹(iω_n) − Ĝ⁻¹(iω_n). Once a self-consistent solution has been obtained, the impurity site of the SIAM represents a generic site of the lattice model under investigation. Therefore several static thermodynamic quantities can be directly evaluated as quantum averages at the impurity site. As evident from the previous equations, DMFT is explicitly formulated in a grand-canonical approach, where the chemical potentials μ_σ are given as input and the onsite densities n_σ = ⟨c†_σ c_σ⟩ are calculated.
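As an illustration of the self-consistency structure just described, here is a schematic loop for a single normal (non-superfluid) component on the D = ∞ Bethe lattice, where the hybridization takes the simple form Δ(iω_n) = J²G(iω_n). The impurity solver is left as a stub, since the paper uses exact diagonalization of the SIAM; the function `solve_impurity` below is a hypothetical placeholder, and all parameter values are illustrative:

```python
import numpy as np

beta, J, mu, n_iw = 50.0, 0.25, 0.0, 256            # illustrative parameters
iw = 1j * np.pi / beta * (2 * np.arange(n_iw) + 1)  # fermionic Matsubara freqs

def solve_impurity(g0):
    """Hypothetical impurity-solver stub (the paper uses ED/Lanczos).
    Must return the interacting impurity Green function G(iw_n)."""
    return g0  # placeholder: non-interacting limit

g = 1.0 / (iw + mu)                                 # initial guess
for it in range(100):
    g0 = 1.0 / (iw + mu - J**2 * g)                 # Bethe-lattice self-consistency
    g_new = solve_impurity(g0)
    if np.max(np.abs(g_new - g)) < 1e-8:            # convergence check
        break
    g = 0.5 * g + 0.5 * g_new                       # linear mixing
sigma = 1.0 / g0 - 1.0 / g                          # local Dyson equation
```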
Calculated observables and numerical implementation -
To characterize the different phases, we evaluated several static observables, such as the superfluid (SF) order parameter P = ⟨c₁c₂⟩, the average double occupancy d_{σσ′} = ⟨n_σ n_{σ′}⟩, and the average triple occupancy t = ⟨n₁n₂n₃⟩. As suggested in Refs. [17,18,37], in order to gain condensation energy in the c-SF phase, it is energetically favorable to induce a finite density imbalance between the paired species (1−2 in our gauge) and the unpaired fermions. To quantitatively characterize this feature we introduce the local magnetization m = n₁₂ − n₃, where n₁₂ = n₁ = n₂.
From the normal components of the lattice Green functions we can extract the DMFT momentum distribution n_σ(k) and the average kinetic energy per lattice site. It is evident from the expression of Ĝ_latt(k, iω_n) that n_σ(k) depends on the momentum k only through the free-particle dispersion ε_k of the lattice at hand. The internal energy per lattice site E can then be obtained as the sum of the average kinetic and potential energies per lattice site. Solving the DMFT equations is equivalent to solving a SIAM in the presence of a bath determined self-consistently. We use exact diagonalization (ED) [45], which amounts to truncating the number of auxiliary degrees of freedom a†_{lσ}, a_{lσ} in the Anderson model to a finite (and small) number N_s − 1. In this way the size of the Hilbert space of the SIAM is manageable and we can solve the Anderson model numerically exactly. Here we would like to point out that this truncation does not reflect the size of the physical lattice but only the number of independent parameters used in the description of the local dynamics. Therefore we always describe the system in the thermodynamic limit (no finite-size effects). We use the Lanczos algorithm [46] to study the ground state properties (up to N_s = 7) and full ED for finite temperature (up to N_s = 5). Due to the increasing size of the Hilbert space (σ = 1, 2, 3 instead of σ = ↑, ↓) in the multicomponent case, the typical values of N_s which can be handled sensibly are smaller than the corresponding values for the SU(2) superfluid case. However, in thermodynamic quantities we found only a very weak dependence on the value of N_s, and the results within full ED at the lowest temperatures are in close agreement with T = 0 calculations within Lanczos.
A definite advantage of ED is that it allows us to directly calculate dynamical observables for real frequencies without the need for analytical continuation from imaginary time. In particular, we can directly extract the local single-particle Green function G_σ(ω) and the single-particle spectral function ρ_σ(ω) = −(1/π) Im G_σ(ω + i0⁺).
Variational Monte Carlo
The variational Monte Carlo (VMC) techniques described in this subsection can be used to calculate the energies and correlation functions of the homogeneous phases at T = 0 in a canonical framework. The basic ingredients of the VMC formalism are the Hamiltonian and trial wavefunctions with an appropriate symmetry. In principle, the formalism presented here can be applied in any dimension, even though here we use it specifically to address the system on a two-dimensional square lattice. The canonical version of Hamiltonian (1) for three-component fermions with generic attractive interactions is given by Eq. (20), where the three-body constraint is imposed by using the projector P₃ = ∏_i (1 − n_{i,1} n_{i,2} n_{i,3}); in the unconstrained case we set P₃ equal to the identity.
Practical limitations do not permit a general trial wavefunction equally accurate in both the weak- and the strong-coupling limits. For this reason we introduce different trial wavefunctions for different coupling regimes.
In the weakly interacting limit, which we operatively define as |U_{σσ′}| ≤ 4J = W/2, we use the full Hamiltonian (20) along with the weak-coupling trial wavefunction defined in the next subsection. Here W = 4DJ is the bandwidth. At strong coupling this wavefunction results in a poor description of the system. In order to gain insight into the strong-coupling regime, we derive below a perturbative Hamiltonian to second order in J/U_{σσ′}, which we combine with a strong-coupling trial wavefunction. Again, the strong-coupling wavefunctions are incompatible with the Hamiltonian (20), as will be clarified below. We can therefore address confidently both limits of the model, while at intermediate coupling we expect our VMC results to be less accurate.
3.2.1. Strong coupling Hamiltonian, constrained case - In order to derive a perturbative strong-coupling Hamiltonian for the constrained case we make use of the Schrieffer-Wolff transformation [47], H_pert = P_D e^{iS} H e^{−iS} P_D (21), and keep terms up to second order in J/U_{σσ′}. In the expression above, P_D is the projection operator onto the Hilbert subspace with fixed numbers of double occupancies in each channel (N₁₂, N₁₃, N₂₃), and e^{iS} is a unitary transformation defined in Appendix A. We thus obtain the perturbative Hamiltonian of Eq. (22) (see Appendix A). For the case where the SU(3) symmetry is restored (U_{σσ′} = U), the perturbative Hamiltonian can be written in the compact form of Eq. (23), with a correspondingly redefined double occupancy operator. Now, rather than conserving the number of double occupancies N_d^{σσ′} in each channel, only the total number of double occupancies N_{d,0} is conserved, due to the SU(3) symmetry. Indeed, Eq. (23) contains terms in which the tightly bound dimers are allowed to change their composition through second-order processes. Thus the SU(3)-symmetric case, in contrast to the case with strongly anisotropic interactions, is qualitatively different from a Bose-Fermi mixture, because the bosons (tightly bound dimers) can change composition as described above, while such a process was not allowed in the case of strongly anisotropic interactions. We also notice that the last of the ∼J²/U terms contributes only when N_{d,0} < N/2.
Strong coupling Hamiltonian, unconstrained case - Without the three-body constraint, three fermions with different hyperfine states can occupy the same lattice site, and we expect them to form trionic bound states at sufficiently strong coupling. Correspondingly, the many-body system should be in a trionic phase with heavy trionic quasiparticles, as mentioned in previous studies [17,18,22,23]. We therefore expect that our perturbative approach can provide a description of the trions in the strong-coupling limit.
First we consider the extreme case J = 0. In this limit the formation of local trions takes place, i.e. each site is either empty or occupied by three fermions with different hyperfine spins. Their spatial distribution is random, because any distribution of trions has the same energy. For finite J with J ≪ |U_{σσ′}|, the hopping term can break a local trion, but this results in a large energy penalty.
According to perturbation theory up to third order, there are two different contributions: (i) one of the fermions hops to one of the neighboring sites and returns to the original site (second-order perturbation); (ii) all three fermions hop to the same nearest-neighbor site (third-order perturbation). As we show below, the first process generates an effective interaction between trions on nearest-neighbor sites; it also renormalizes the onsite energy. The second process (ii) describes the hopping of a local trion to a neighboring site.
After straightforward calculations (see Appendix A) we obtain the effective interaction between two trions on neighboring sites, Eq. (24). For the SU(3)-symmetric case this expression simplifies, Eq. (25); the nearest-neighbour interaction between trions is therefore repulsive in the SU(3)-symmetric case.
For the hopping coefficient we obtain the general expression of Eq. (26), where σ, σ′ and σ″ in the sum are all different from each other. In the SU(3)-symmetric case, the expression again simplifies, Eq. (27). We thus obtain the following effective Hamiltonian [48]:

H_eff = −J_eff Σ_{⟨i,j⟩} (t†_i t_j + h.c.) + V_eff Σ_{⟨i,j⟩} n^T_i n^T_j .   (28)

Here t†_i is the creation operator of a local trion at lattice site i and n^T_i = t†_i t_i is the trionic number operator. Because the effective hopping of trions results from a third-order process and the interaction from a second-order one (more precisely, J_eff = (J/|U|) · V_eff), the effective trion theory is interaction dominated. Since the interaction describes a nearest-neighbour repulsion, the strong-coupling limit clearly favors a checkerboard charge-density-wave ground state at half-filling ‡, which we discuss in more detail in Sec. 4.
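The order counting stated above can be summarized compactly (scalings only; the numerical prefactors depend on the perturbative details in Appendix A, which are not reproduced here):

```latex
V_{\mathrm{eff}} \sim \frac{J^{2}}{|U|}\ (\text{2nd order}),\qquad
J_{\mathrm{eff}} = \frac{J}{|U|}\,V_{\mathrm{eff}} \sim \frac{J^{3}}{U^{2}}\ (\text{3rd order}),
\qquad\Rightarrow\qquad
\frac{J_{\mathrm{eff}}}{V_{\mathrm{eff}}} = \frac{J}{|U|} \ll 1 .
```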
Trial wavefunctions:
In order to describe a normal Fermi liquid phase without superfluid pairing, we use a trial wavefunction built from the filled Fermi sea of the non-interacting dispersion, supplemented by a Jastrow factor,

|Ψ_FL⟩ = J P_D ∏_σ ∏_{k: ε_{k,σ} ≤ ε_{F,σ}} c†_{kσ} |0⟩,

where |0⟩ is the vacuum state and ε_{k,σ} = −2J(cos(k_x) + cos(k_y)) for a 2D square lattice with only nearest-neighbor hopping. The dependence on the densities is included in the value of the non-interacting Fermi energy ε_{F,σ}. The wavefunction above has no variational parameters except for the Jastrow factor J, which takes into account the effect of the interaction; ν₃ and ν_c are the variational parameters entering J, and ⟨i,j⟩ denotes summation over nearest neighbours. The weak-coupling version of the wavefunctions presented in this part is obtained by setting P_D equal to unity.
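A minimal sketch of how the non-interacting Fermi-sea part of this trial state can be set up on a finite 2D square lattice (our own illustration; lattice size, hopping and particle number are placeholders, and closed-shell fillings avoid degeneracy ambiguities at the Fermi level):

```python
import numpy as np

# Build the tight-binding levels eps_k = -2J(cos kx + cos ky) on an L x L
# square lattice and occupy the N lowest ones, as in the Fermi-liquid trial
# wavefunction (Jastrow and projector factors are applied on top of this).
L, J, N = 10, 1.0, 30                      # illustrative values
ks = 2 * np.pi * np.arange(L) / L
kx, ky = np.meshgrid(ks, ks, indexing="ij")
eps = -2 * J * (np.cos(kx) + np.cos(ky))   # dispersion on the k-grid

order = np.argsort(eps, axis=None)[:N]     # N lowest single-particle levels
eF = eps.flatten()[order[-1]]              # Fermi energy at this filling
print(f"Fermi energy at n = {N / L**2:.2f} per species: eF = {eF:.3f} J")
```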
We also consider the broken-symmetry SU(2) ⊗ U(1) phase with s-wave pairing in the 1−2 channel, whose trial wavefunction is of the BCS form in the 1−2 channel, multiplied by the Fermi sea of the unpaired third species and the Jastrow factor. In this case, in addition to the Jastrow factor J, we have μ̃ and Δ₀ as additional variational parameters. The s-wave gap function Δ_s(k) = Δ₀ has no k dependence. This parametrization of Δ_s(k) leads, upon Fourier transform, to a singlet symmetric pairing orbital φ_s(r₁, r₂) = φ_s(r₂, r₁).
In practice the optimal parameter Δ₀ depends on the density n as well as on the coupling strength U. Also, even at the same coupling strength U, Δ₀ can be qualitatively different for the weak- and the strong-coupling ansatz (in the intermediate regime U ≈ −5J). On the other hand, the parameter μ̃ depends mostly on n (and only weakly on U). The general tendency we observe is that Δ₀ is suppressed beyond the filling density n = 1 in the presence of the constraint. Within a BCS mean-field approach, the condensation energy E_cond is easily related to the order parameter Δ₀, being E_cond ∝ Δ₀². We however calculate it explicitly from its definition, by comparing the ground state energies of the normal and the superfluid phases at the same density. We also calculate the order parameter P that characterizes the superfluid correlations by considering the long-range behavior of the pair correlation function, where M is the total number of lattice sites. Finally, in order to describe the trionic Fermi liquid phase we use an analogous Fermi-sea trial wavefunction for the trions; in this case the Jastrow factor contains ν_t as a variational parameter, and ⟨i,j⟩ again denotes summation over nearest neighbours.

‡ Although our fermions are charge neutral, we sometimes use the expression charge density wave in analogy with the terminology commonly used in condensed matter physics.
Results: SU(3) attractive Hubbard model
We first consider the SU(3) attractive Hubbard model described by the Hamiltonian (1) with V = 0. In a physical realization with ultracold gases in optical lattices, this corresponds to a situation where three-body losses are negligible. In order to address the effects of dimensionality and of particle-hole symmetry, we analyze several cases of interest, namely (i) an infinite-dimensional Bethe lattice in the commensurate case (half-filling), (ii) a three-dimensional cubic lattice and (iii) a two-dimensional square lattice, the latter two in the incommensurate case. In order to simplify the comparison of results in different dimensions, we rescale all energies by the bandwidth W of the specific lattice under consideration. For a Bethe lattice in D = ∞ the bandwidth is related to the hopping parameter by W = 4J, while for a D-dimensional hypercubic lattice it is W = 4DJ.
Bethe lattice at half-filling
We first consider the infinite-dimensional case, for which the DMFT approach provides the exact solution of the many-body problem whenever the symmetry-breaking pattern of the system can be correctly anticipated. For technical reasons we consider here the Bethe lattice in D = ∞, which has a well-defined semicircular density of states,

D(ε) = (8 / πW²) √((W/2)² − ε²),  |ε| ≤ W/2.

The simple form of the self-consistency relation for DMFT on the Bethe lattice introduces technical advantages, as explained below. Moreover, we can directly compare our results with recent calculations for the same system within a self-energy functional approach (SFA) [22,23].
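A quick numerical check of the semicircular form quoted above (normalization and band edges), assuming the reconstruction given in the text:

```python
import numpy as np

W = 1.0                                   # bandwidth; W = 4J on the Bethe lattice
eps = np.linspace(-W / 2, W / 2, 200001)  # the DOS is nonzero only for |eps| <= W/2
D = (8.0 / (np.pi * W**2)) * np.sqrt((W / 2)**2 - eps**2)
print(np.trapz(D, eps))                   # ~1.0: the DOS integrates to unity
```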
In the absence of the three-body repulsion, the Hamiltonian (1) is particle-hole symmetric whenever we choose μ = U. In this case the system is half-filled, i.e. n_σ = 1/2 for all σ and n = Σ_σ n_σ = 1.5.
We first consider the ground state properties of the system, which we characterize via the static and dynamic observables defined in Sec. 3. For small values of the interaction (|U| ≪ W), we find the system to be in a c-SF phase, i.e. a phase where superfluid pairs coexist with unpaired fermions (species 1−2 and 3, respectively, in our gauge) and the superfluid order parameter P (plotted in Fig. 1 using green triangles) is finite. This result is in agreement with previous mean-field studies [17,18], as expected, since DMFT includes the (static) mean-field approach as a special limit, and with more recent SFA results [22,23]. By increasing the interaction |U| in the c-SF phase, P first increases continuously, from a BCS-type exponential behavior at weak coupling to a non-BCS regime at intermediate coupling, where it shows a maximum and then starts decreasing for larger values of |U|. This non-monotonic behavior is beyond the reach of a static mean-field approach and agrees perfectly with the SFA results [22,23]. As explained in the introduction, the spontaneous symmetry breaking in the c-SF phase is generally expected [20,21,37,49] to induce a population imbalance between the paired channel and the unpaired fermions, i.e. a finite value of the magnetization m in Eq. (15). It is however worth pointing out that, due to particle-hole symmetry, the c-SF phase at half-filling does not show any induced population imbalance, i.e. m = 0 for all values of the interaction strength. As discussed in the next subsection, the population imbalance is indeed triggered by the condensation energy gain in the paired channel. This energy gain, however, cannot be realized at half-filling, where the condensation energy is already maximal for a given U.
Further increasing |U|, we find P to drop suddenly to zero at |U| = U_{c,2} ≈ 0.45W, signaling a first-order transition to a non-superfluid phase. This result is in good quantitative agreement with the SFA result of Refs. [22,23], where a first-order transition to a trionic phase was found, while a previous variational calculation found a second-order transition [20,21].
In this new phase we were not able to stabilize a homogeneous solution of the DMFT equations with the ED algorithm [45]. Such a spatially homogeneous phase would correspond to having identical solutions, within the required tolerance, at iterations n and n + 1 of the DMFT self-consistency loop. In the normal phase (|U| > U_{c,2}), instead, we found a staggered pattern in the solutions, and convergence is achieved if one applies a staggered criterion of convergence by comparing the solutions at iterations n and n + 2. This behavior clearly signals that the transition to a non-superfluid phase is accompanied by a spontaneous breaking of the lattice translational symmetry into two inequivalent sublattices A and B. In a generic lattice a proper description of this phase would require solving two coupled impurity problems, i.e. one for each sublattice, and generalizing the DMFT equations introduced in the previous section. On the Bethe lattice, instead, the two procedures are equivalent §.

Fig. 1. (Color online) SF order parameter P (green triangles) and CDW order parameter C (blue squares) plotted as a function of interaction strength |U|/W on the Bethe lattice in the limit D → ∞ at half-filling and T = 0. C_> (empty squares) and C_< (full squares) correspond to calculations starting from superfluid or trionic charge-density-wave (t-CDW) initial conditions, and similarly for P_> (note that P_< is always vanishing and therefore not shown). In the inset we compare the ground state energies of the c-SF and t-CDW phases. (Unconstrained case, i.e. V = 0.)
In the new phase the full SU(3) symmetry of the Hamiltonian is restored, and we identify it as a trionic charge density wave (t-CDW) phase. In order to characterize this phase, we introduce a new order parameter which measures the density imbalance between the sublattices A (majority) and B (minority), i.e. C = (n_A − n_B)/2,
where n_A ≡ n_{σ,A} and n_B ≡ n_{σ,B} for all σ; C = 0 in the c-SF phase because translational invariance is preserved. The evolution of the CDW order parameter C in the t-CDW phase is shown in Fig. 1 using blue squares. At the phase transition from the c-SF to the t-CDW phase, P goes to zero and C jumps from zero to a finite value. C then increases further with increasing attraction |U| and eventually saturates at C = 1/2 for |U| → ∞. Motivated by these findings, we considered more carefully the region around the transition point. Surprisingly, we found that upon decreasing |U| from strong to weak coupling, the t-CDW phase survives far below U_{c,2}, down to a lower critical value U_{c1} ≳ 0, revealing the existence of a coexistence region, in analogy with the hysteretic behavior found at the Mott transition in the single-band Hubbard model [44]. In the present case, however, we did not find any simple argument to establish which phase is stable, and had to directly compare the ground state energies of the two phases in the coexistence region to find the actual transition point. On the Bethe lattice, the kinetic energy per lattice site K in the c-SF and t-CDW phases can be expressed directly in terms of the components of the local Green function Ĝ(iω_n), which is straightforwardly determined by DMFT, while the potential energy per lattice site V follows from the interaction terms, with an index indicating the sublattice. By generalizing analogous expressions valid in the SU(2) case [38,50,51], we obtain the energies of both phases. The results shown in the inset of Fig. 1 indicate that the t-CDW phase is stable in a large part of the coexistence region and that the actual phase transition takes place at |U| = U_c ≈ 0.2W. The good agreement between our findings and the SFA results of Refs. [22,23] concerning the maximum attraction U_{c2} at which a c-SF solution is found within DMFT suggests that this value is indeed a critical threshold for the existence of a c-SF phase. On the other hand, we also proved that the c-SF phase close to U_{c2} is metastable with respect to the t-CDW phase, and therefore the existence of the threshold could equally result from an inability of our DMFT solver to follow the metastable c-SF phase further into strong coupling. The disagreement between our findings and Refs. [22,23] concerning the existence of CDW modulations in the trionic phase is clearly due to the constraint of homogeneity imposed in the SFA approach of Refs. [22,23] in order to stabilize a (metastable) trionic Fermi liquid instead of the t-CDW solution. In our case this was not an issue, because the iterative procedure of solution immediately reflects the spontaneous breaking of translational invariance and does not allow for the stabilization of an (unphysical) homogeneous trionic Fermi liquid at half-filling.

§ On the Bethe lattice the sublattices A and B are completely decoupled from each other at a given step n.
On the other hand, the necessary presence of CDW modulations in the trionic phase at half-filling, at least in the strong-coupling limit, can be easily understood from general perturbative arguments. Indeed, as pointed out in Sec. 3, in the strong-coupling trionic phase, where J/|U| ≪ 1, the system can be described in terms of the effective trionic Hamiltonian (28). In this Hamiltonian the effective hopping J_eff of the trions is much smaller than the nearest-neighbor repulsion V_eff between the trions. Due to the scaling of the hopping parameter required to obtain a meaningful limit D → ∞, i.e. J → J/√z, where z is the lattice connectivity, one finds J_eff → 0 in this limit, i.e. the trions become immobile while their nearest-neighbor interaction term survives. In this limit, the Hamiltonian is equivalent to an antiferromagnetic Ising model (spin up corresponds to a trion and spin down to a trionic hole). At half-filling, the most energetically favorable configuration is therefore to arrange the trions in a staggered configuration [52]. Moreover, due to quantum fluctuations, if we decrease the interaction starting from very large |U|, the spread of a single trion (which is proportional to J²/U) increases and it is no longer a local object. In this case the trionic wavefunction extends also to the nearest-neighboring sites [34], as sketched in Fig. 2. This interpretation is in agreement with the observed behavior of the CDW order parameter C in Fig. 1. Indeed, at large |U|, C asymptotically rises to the value C = 1/2, corresponding to fully local trions in a staggered CDW configuration. The presence of the CDW also explains the anomalously large residual entropy per site, s_res = k_B ln 2, found when imposing a homogeneous trionic phase as in Refs. [22,23]. At strong coupling in finite dimensions, even though the trions have a finite effective hopping J_eff, one would still expect the enhanced symmetry at half-filling to favor CDW modulations over a trionic Fermi liquid phase. In D = 1, 2 it is indeed known [17,27] that the CDW is stable with respect to the SF phase at half-filling for any value of the interaction, in contrast to the SU(2) case, where they are degenerate [38]. Our results prove that in higher spatial dimensions this is not the case, and there is a finite range of attraction at weak coupling where the c-SF phase is actually stable.
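The staggering argument above can be illustrated with a tiny classical count of nearest-neighbour trion pairs (our own illustration; lattice size and V_eff are arbitrary placeholders). In the J_eff → 0 limit of Eq. (28), the energy of a classical trion configuration is simply V_eff times the number of occupied nearest-neighbour pairs, so the checkerboard arrangement wins at half-filling:

```python
import numpy as np

L = 8                        # linear size of the square lattice (placeholder)
V_eff = 1.0                  # nearest-neighbour trion repulsion (arbitrary units)

def nn_pairs(occ):
    """Count nearest-neighbour occupied pairs with periodic boundaries."""
    return int(np.sum(occ * np.roll(occ, 1, axis=0)) +
               np.sum(occ * np.roll(occ, 1, axis=1)))

# Half-filling of trions: N_trions = L*L/2, arranged in three different ways.
x, y = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
checkerboard = ((x + y) % 2).astype(int)                 # staggered CDW
stripes = (x % 2).astype(int)                            # striped pattern
clustered = np.zeros((L, L), int); clustered[:L // 2, :] = 1  # phase-separated block

for name, occ in [("checkerboard", checkerboard),
                  ("stripes", stripes),
                  ("clustered", clustered)]:
    print(name, "E =", V_eff * nn_pairs(occ))  # checkerboard gives E = 0
```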
Further confirmation of the physical scenario depicted above is provided by the analysis of the single-particle spectral function ρ_σ in the c-SF and t-CDW phases, shown in Fig. 3. In the c-SF phase (Fig. 3(a)), the spectrum shows a gapless branch due to the presence of the third species, which is not involved in the pairing, while the spectral function for species 1 (species 2 is identical) shows a gap. The situation is totally different in the t-CDW phase (Fig. 3(b)), where the spectral functions for the three species are identical but the lattice symmetry is broken into two sublattices. If we plot the spectral functions for the two sublattices (corresponding to two successive iterations in our DMFT loop), a CDW gap is visible. We note that the sharply peaked structure of the spectrum is due to the finite number of orbitals in the ED algorithm; however, the size of the gap should not be affected significantly by the finite number of orbitals. Interestingly, for |U| = 0.75W the size of the energy gap, Δ_gap ≈ W, is in very close agreement with the value obtained within SFA for the same value of the interaction [22,23], indicating that the gap is most likely only weakly affected by the CDW ordering. In order to characterize the system at finite temperature, we studied the evolution of the SF order parameter P as a function of temperature in the c-SF phase for different values of the coupling (Fig. 4(a)), and analogously for the CDW order parameter C in the t-CDW phase (Fig. 4(b)). The superfluid-to-normal phase transition at T_c^SF(U) is also mirrored in the behavior of the spectral function with increasing temperature. The results shown in Fig. 5 indicate that the superfluid gap in the spectral function closes for T > T_c^SF(U), signaling the transition to a normal homogeneous phase without CDW modulations.
At finite temperatures we also found a coexistence region of the trionic CDW phase and the color superfluid or normal homogeneous phases in a finite range of the interaction U (U_{c1} < |U| < U_{c2} at T = 0). We however leave a thorough investigation of the stability range of the t-CDW phase at finite temperature to a future study, together with its dependence on the distance from the particle-hole symmetric point and on the dimensionality. Due to this coexistence region, we define the two critical temperatures T_c^SF(U) and T_c^CDW(U), plotted in the phase diagram in Fig. 6, as the temperatures where P(T)|_U and C(T)|_U vanish above the c-SF and t-CDW phases, respectively. In agreement with the results obtained within SFA [22,23], we also find that the critical temperature T_c^SF(U) has a maximum, T_c^SF/W ≈ 0.025, at |U|/W = 0.4. This is also in qualitative agreement with the SU(2) case [38], where the critical temperature has a maximum at intermediate coupling. Due to the presence of the CDW modulations in the trionic phase, which are ignored in Refs. [22,23], we also find a second critical temperature, T_c^CDW, at which the charge density wave modulations in the trionic phase disappear.
Incommensurate density
In this section we consider the system at densities far from the particle-hole symmetric point. Specifically, we investigate, using VMC and DMFT respectively, the implementation of the model (1) on a simple square (cubic) lattice in 2D (3D) with tight-binding dispersion, ε_k = −2J Σ_{i=x,y(,z)} cos(k_i a), where a is the lattice spacing. In particular, we will find that away from the particle-hole symmetric point in the c-SF phase, superfluidity always triggers a density imbalance, i.e. a magnetization. In order to address this feature quantitatively, we studied the system by adjusting the chemical potential μ so as to fix the total density n = Σ_σ n_σ, allowing the system to spontaneously adjust the densities in each channel. Due to the spontaneous breaking of the SU(3) symmetry of the Hamiltonian in the color superfluid phase, it is indeed possible that, for a given chemical potential μ₁ = μ₂ = μ₃ = μ, the particle densities of the different species differ. If such a situation occurs, the system shows a finite onsite magnetization m. As a more technical remark, we add that the choice of pairing channel, as explained in Sec. 3.1.1, is made without loss of generality: a specific choice will determine in which channel a potential magnetization takes place, but will not influence its overall occurrence. Here, since we fix the pairing to occur between species 1 and 2, we found a nonzero value of the magnetization parameter m = n₁₂ − n₃, where n₁₂ = n₁ = n₂. The paired channel therefore turns out (spontaneously) to be fully balanced, while there is in general a finite density imbalance between the particles in the paired channel and the unpaired fermions.
The implications of the results presented in this subsection and in Sec. 5 for cold-atom experiments, where the total number of particles of each species N_σ = Σ_i n_{i,σ} is fixed, will be discussed in Sec. 6. Combining the grand-canonical DMFT results with energetic arguments based on canonical VMC calculations, we show that the system is generally unstable towards domain formation.
We first consider in Fig. 7 how the ground-state properties of the 3D system evolve at fixed coupling |U|/W = 0.3125, where the system is found to be in the c-SF phase for any density. We consider only densities ranging from n = 0 to half-filling, n = 1.5. The results above half-filling can easily be obtained by exploiting a particle-hole transformation; in particular, one easily obtains the corresponding relations between the observables at densities n and 3 − n, where t and d are the average triple and double occupancies (a reconstruction is sketched below). The superfluid order parameter P increases (decreases) with the density for n < 1.5 (n > 1.5) and is maximal at half-filling. The average triple occupancy is instead a monotonic function of the density. Below half-filling, the magnetization m first grows with increasing density, then reaches a maximum and eventually decreases, vanishing at half-filling in agreement with the findings of the previous subsection. This means that in the c-SF phase, for a fixed value of the chemical potential μ, the system favors putting more particles into the paired channel than into the unpaired component. For n > 1.5 the effect is the opposite and m < 0. This behavior can be understood by considering that the equilibrium value of the magnetization results from a competition between the condensation-energy gain in the paired channel on one side and the potential-energy gain on the other. Indeed, the condensation energy as a function of the density of pairs has a maximum at half-filling; for example, in the weak-coupling BCS regime E_cond is proportional to P² [49]. The condensation-energy gain therefore increases by choosing the number of particles in the paired channel as close as possible to half-filling. On the other hand, at fixed total density n, this would reduce or increase the number of unpaired fermions and consequently the potential-energy gain, which is maximal for a non-magnetized system since U is negative. The competition between these opposite trends eventually determines the equilibrium value of the magnetization, which is finite but rather small at this value of the coupling (see inset in Fig. 7). At half-filling no condensation-energy gain can be achieved by creating a density imbalance between the superfluid pairs and the unpaired fermions, since the condensation energy is already maximal. Therefore the spontaneous symmetry breaking in the color superfluid phase does not necessarily result in a density imbalance; the imbalance is, however, triggered by a condensation-energy gain for every deviation of the density from the particle-hole symmetric point.

We now consider the same system at fixed total density n = 1 and study the ground-state properties as a function of the interaction strength |U| (see Fig. 8). For weak interactions the system is in a c-SF phase. Upon increasing |U|, the order parameter P first increases and then shows the dome shape at intermediate couplings which we already observed in the half-filled case; away from half-filling, the maximum of P is shifted to lower values of the interaction strength. The triple occupancy t, on the other hand, increases monotonically with |U|. Interestingly, the magnetization m(U) behaves non-monotonically: at weak coupling it grows with the interaction strength, reaches a maximum, and then decreases for larger |U|, indicating a non-trivial evolution driven by the competition between the condensation and potential energies for increasing attraction.
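The particle-hole relations invoked above follow from the standard transformation c_{iσ} → (−1)^i c†_{iσ}; a hedged reconstruction of the resulting mapping (ours, not necessarily the paper's exact displayed form) reads:

```latex
n \;\to\; 3 - n, \qquad
d_{\sigma\sigma'} \;\to\; 1 - n_\sigma - n_{\sigma'} + d_{\sigma\sigma'}, \qquad
t \;\to\; 1 - n + \sum_{\sigma<\sigma'} d_{\sigma\sigma'} - t ,
```

so that knowing t and d for n < 3/2 immediately yields the corresponding quantities at density 3 − n.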
The spontaneous breaking of the SU(3) symmetry is also clearly visible in the behavior of the double occupancies. Indeed, in the c-SF phase for n < 1.5 we find d_12 > d_13 = d_23. The difference d_12 − d_23 is, however, non-monotonic in the coupling and appears to vanish at |U|/W ≈ 0.35. Our interpretation is that beyond this point the SU(3) symmetry is restored and the system undergoes a transition to a Fermi-liquid trionic phase. Indeed, for |U|/W > 0.35 we did not find any converged solution within our DMFT approach, neither for a homogeneous nor for a staggered convergence criterion. This result is compatible with the presence of a macroscopically large number of degenerate trionic configurations away from half-filling. A finite kinetic energy for the trions would remove this degeneracy, leading to a trionic Fermi-liquid ground state. This contribution is, however, beyond the DMFT description of the trionic phase, in which trions are immobile objects. We can address the existence of a Fermi-liquid trionic phase at strong coupling using the VMC approach in 2D, which we discuss in the following.
As already mentioned in Sec. 3, we use different trial wavefunctions to study the behavior of the system in the weak- (|U| ≤ W/2) and strong-coupling (|U| > W/2) regimes. At weak coupling the magnetization is expected to be very small, so the results for the unpolarized system with n_1 = n_2 = n_3 can be considered a good approximation of the real system, which is in general polarized. We found indeed that for |U| ≤ W/2 the system is in the c-SF phase with a finite order parameter P. As shown in Fig. 9, P(U) has a dome shape similar to the 3D case. Unfortunately, we cannot directly address the trionic transition within this approach, since it is expected to take place at intermediate coupling, where both ansatz wavefunctions are inaccurate. We can, however, consider the system in the strong-coupling limit by using the effective trionic Hamiltonian of Eq. 28. In this way we can study the Fermi-liquid trionic phase, which we characterize by evaluating the quasiparticle weight Z averaged over the Fermi surface. Here Z_k is extracted from the jump in the momentum distribution at the Fermi surface, which we approximate by the finite difference of n(k) across one reciprocal-lattice translation, where Δk_x (Δk_y) is the translational vector along the x (y) direction in the reciprocal lattice. In Fig. 10 we plot Z as a function of the interaction strength |U|/W. By combining the DMFT and VMC results we therefore have strong evidence that the system undergoes a phase transition from a magnetized color superfluid to a trionic Fermi-liquid phase at strong coupling, when the density is far enough from the particle-hole symmetric point.
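A minimal sketch of the quasiparticle-weight extraction described above; the momentum-distribution values are hypothetical placeholders, not VMC output.

```python
def z_from_jump(n_inside, n_outside):
    """Quasiparticle weight from the jump of the momentum distribution
    across the Fermi surface, approximated by the finite difference of
    n(k) one reciprocal-lattice translation inside vs. outside k_F."""
    return n_inside - n_outside

# Hypothetical n(k) just inside/outside k_F along x and y on the
# discrete reciprocal lattice of a finite cluster:
Z_x = z_from_jump(0.82, 0.13)
Z_y = z_from_jump(0.80, 0.15)
Z = 0.5 * (Z_x + Z_y)   # crude average over Fermi-surface points
print(Z)                # Z -> 1 for free trions, Z < 1 with interactions
```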
Results: Constrained System (V = ∞)
As referred to in the introduction, actual laboratory implementations of the model under investigation using ultracold gases are often affected by three-body losses, which, unlike in the SU(2) case, are not Pauli suppressed. As discussed in Ref. [8], the three-body loss rate γ_3 depends strongly on the applied magnetic field. The results presented in the previous section therefore apply to cold gases only when three-body losses are negligible, i.e. γ_3 ≪ J, U. In the general case, modeling the system in the presence of three-body losses requires a non-equilibrium formulation in which the number of particles is not conserved. However, as shown in Ref. [40], in the regime of strong losses, γ_3 ≫ J, U, the probability of having triply occupied sites vanishes and the system can still be described by a Hamiltonian formulation with a dynamically generated three-body constraint. To take this constraint into account in our DMFT formalism, we introduce a three-body repulsion with V = ∞; within VMC we directly project triply occupied sites out of the Hilbert space. We stress that finite values of V do not correspond to real systems with moderately large γ_3, since then real losses occur and a purely Hamiltonian description no longer applies; only the limits γ_3 ≪ J, U and γ_3 ≫ J, U lend themselves to an effective Hamiltonian formulation.
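To illustrate the V = ∞ projection used in the VMC calculation, here is a toy enumeration of the constrained Hilbert space for a tiny lattice (our own illustrative construction, not the actual production code): triply occupied local states are simply discarded.

```python
from itertools import product

def constrained_configs(n_sites, n_species=3):
    """Enumerate site-occupation patterns for a tiny lattice,
    discarding any configuration with a triply occupied site
    (the dynamically generated three-body constraint)."""
    # each site hosts a tuple of 0/1 occupations, one per species
    local = [occ for occ in product((0, 1), repeat=n_species)
             if sum(occ) < n_species]        # 7 local states instead of 8
    return list(product(local, repeat=n_sites))

configs = constrained_configs(n_sites=2)
print(len(configs))   # 7**2 = 49 allowed configurations
```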
Ground State Properties
In order to address how the system approaches the constrained regime with increasing V, we first used DMFT to study the ground-state properties of the model in 3D as a function of the three-body interaction V, at fixed total density n = 0.48 and two-body attraction |U|/W = 0.3125. We found that the average number of triply occupied sites t = ⟨n_1 n_2 n_3⟩ (not shown) vanishes very quickly with increasing V. The SF order parameter P and the densities in the paired and unpaired channels reach their asymptotic values already for V ≈ 3W, i.e. V ≈ 10|U|, as shown in Fig. 11. We therefore assume that the system can safely be considered to be in the constrained regime whenever V is chosen much larger than this value. Both the densities n_σ and the superfluid order parameter P are strongly affected by the three-body interaction (see Fig. 11): for this value of the two-body interaction, P and m are strongly suppressed by the three-body repulsion, even though both eventually saturate to finite values for large enough V. As shown below, however, this suppression of the magnetization and of the SF properties is specific to the weak-coupling regime; for larger values of |U| both the SF order parameter P and the magnetization m are instead strongly enhanced in the presence of large V.
We now investigate the constrained case (setting V = 1000J ≈ 80W within the DMFT approach) where the total density is fixed as above to n = 0.48. Large values of the density imply an increase of the probability of real losses over a finite interval of time. Therefore we restrict ourselves to a relatively low density which is meant to be representative of a possible experimental setup.
We study the evolution of the ground state of the system in 2D and 3D as a function of the two-body interaction strength U. The DMFT results in Fig. 12 show that in the three-dimensional system the trionic phase at strong coupling is completely suppressed by the three-body constraint, and the ground state is always found to be a color superfluid for any value of the attraction. This remaining c-SF phase shows, however, a very peculiar behavior of the magnetization m as a function of the attraction U: the magnetization m = n_12 − n_3 (with n_12 = n_1 = n_2) steadily increases with the interaction, and n_3 ≈ 0 (m ≈ n_12 ≈ n/2) already for U ≈ 12J = W.
Our explanation is that the three-body constraint strongly affects the energetic balance within the c-SF phase. Indeed, in the absence of V the magnetization was shown to be non-monotonic and to vanish in the SU(3)-symmetric trionic phase at strong coupling; in the same limit, the fully polarized c-SF system now instead has a smaller ground-state energy at fixed total density n. This result is fully confirmed by the VMC data for the 2D square lattice. As shown in the next section, combining these results implies that a globally homogeneous phase with m = 0 is unstable in the thermodynamic limit with respect to domain formation whenever the global particle number in each species N_σ = Σ_i n_{i,σ} is conserved. By using the canonical-ensemble approach of VMC, we can indeed also address metastable phases and study the effect of a finite magnetization on the energy at fixed total density n = 0.48. In particular, we study the energy difference between the magnetized system and the unpolarized one with the same n, i.e. ΔE(m) = E(m) − E(0). The results shown in Fig. 13 indicate that at strong coupling the energy decreases with increasing magnetization, and the minimum of the ground-state energy corresponds to the fully polarized system. In the inset of Fig. 13 we show ΔE for the fully polarized c-SF as a function of the interaction strength at strong coupling; it decreases as ΔE ∼ 1/|U|. We also investigated the system in the weak-coupling regime, where our calculation shows that ΔE(m) has a minimum at very small values of the magnetization (not shown). This indicates that also in 2D the c-SF ground state at weak coupling is partially magnetized, in complete agreement with the three-dimensional results. Within DMFT, the order parameter P in the c-SF ground state shown in Fig. 12 also increases with |U| and saturates at strong coupling to a finite value, which we found to be in agreement with the asymptotic value in the atomic limit of the SU(2)-symmetric case [38]. The total number of double occupancies d is also an increasing function of |U| and saturates for very large |U| to the value n_12 = n/2, as in the strong-coupling limit of the SU(2)-symmetric system. This means that in the ground state the strong-coupling limit of the SU(3) model is indistinguishable from the SU(2) case at the same total density n and two-body interaction U. As we will show in the next subsection, this is no longer true at finite temperatures.
Similar considerations on the superfluid properties in the ground state apply to the two-dimensional case studied within the VMC technique. Since the magnetization in the weak-coupling regime is very small, we approximated it as zero and considered an unpolarized system within the weak-coupling ansatz, while at strong coupling we directly considered the system as fully polarized, i.e. containing only pairs. As visible in Fig. 14, P shows a behavior similar to the 3D case: at weak coupling both DMFT and VMC show a BCS-like exponential dependence on the coupling, while at strong coupling P converges to a constant.
Within VMC we also studied the condensation energy, as explained in Sec. 3. Fig. 14b shows that the condensation energy first increases with the interaction strength U, as expected from BCS theory, and then decreases as 1/U at strong coupling, as expected in the BEC limit of the SU(2) case [38]. Although we cannot reliably address the intermediate region, there are indications that the condensation energy has a maximum there.
Finite temperatures
We also investigated the finite-temperature properties of the three-dimensional case using DMFT. In Fig. 15 we show the evolution with temperature T of the SF order parameter P and of the magnetization m at fixed values of the interaction U. At low temperatures the system is superfluid and the magnetization is finite. As the temperature increases, both P and m decrease and then vanish simultaneously at the critical temperature T = T_c(U). This clearly reflects the close connection between superfluid properties and magnetism in the SU(3)-symmetric case, and is markedly different from the strongly asymmetric case studied in Ref. [42], where the density imbalance survives well above the critical temperature.
It is remarkable, however, that for |U| > U_m ≈ W, m(T) and P(T) in Fig. 15 clearly show a plateau at finite T, indicating that the system stays, in practice, fully polarized over a finite range of temperatures. This allows us to operatively define a second temperature T_p(U), below which the system is fully polarized; for T > T_p the magnetization instead decreases and eventually vanishes at T_c.
We summarize these results in the phase diagram in Fig. 16. Inside the region marked in orange (|U| > U_m and T < T_p) the system is fully polarized and therefore identical to the SU(2) superfluid case. As we will see in the next section, in a canonical ensemble where the total number of particles N_σ of each species is fixed, this analogy is no longer correct, and we have to invoke domain formation to reconcile these findings with global number conservation in each channel. Outside this region and below T_c (solid blue line in Fig. 16), the c-SF is partially magnetized and therefore intrinsically different from the case with only two species. This is also visible in the behavior of the critical temperature, above which the SU(3) symmetry is restored in the normal phase. Indeed, we found that the critical temperature first increases with the interaction strength |U|, similarly to the SU(2) case. Then, at |U| = U_m, the critical temperature T_c suddenly changes trend, and for larger |U| a power-law decrease T_c ∝ 1/|U| occurs, as shown in Fig. 16. In the SU(2)-symmetric case this power-law behavior appears only for very large |U| (bosonic limit) [38], while in the SU(3) case this regime sets in immediately for |U| > U_m. The smooth crossover in T_c(U) and the maximum in the critical temperature characteristic of the SU(2) case are replaced here by a cusp at |U| = U_m, which marks the abrupt transition from one regime to the other.
Domain Formation
One of the main results of this work is the close connection between superfluidity and magnetization in the c-SF phase. Indeed, we found that in the c-SF phase, away from the particle-hole symmetric point, the magnetization is always non-zero. On the other hand, ultracold-gas experiments are usually performed under conditions where the global number of particles N_σ = Σ_i n_{i,σ} in each hyperfine state is conserved, provided spin-flip processes are suppressed. The aim of this section is to show that domain formation provides a way to reconcile our findings with these circumstances. In particular, combining the DMFT and VMC findings, we will show that a globally homogeneous c-SF phase is unstable in the thermodynamic limit with respect to the formation of domains hosting different c-SF phases.
To be more specific, we consider the case in which the global numbers of particles in each species are equal, i.e. N_1 = N_2 = N_3 = N/3, at T = 0, though the discussion can easily be generalized to other cases. The simplest solution compatible with N_σ = N/3 is clearly a non-polarized c-SF phase with energy per lattice site E_hom. This phase is actually unstable and therefore not accessible in a grand-canonical approach like DMFT, where we fix the global chemical potential μ and obtain the particle densities n_σ as an output. Since, as shown in Sec. 4 and Sec. 5, the system is spontaneously magnetized in the color superfluid phase away from half-filling, there is no way to reconcile the DMFT result with the global constraint N_σ = N/3 under the assumption of a single homogeneous phase. The VMC approach, on the other hand, operates in the canonical ensemble and can be used to estimate the ground-state energy per lattice site for specific trial configurations. For the homogeneous configuration we have E_hom = E(m = 0)|_n, where n = N/M and M is the number of lattice sites.
Let us now contrast this situation with the spatially non-uniform scenario in which many color superfluid domains coexist in equilibrium. Each of these domains corresponds to one of the solutions obtained above, and this phenomenon can therefore be seen as a special form of phase separation. For two or more phases to be in thermodynamic equilibrium with each other at T = 0, they must have the same value of the grand potential per lattice site Ω = E − μn at the same given value of the chemical potential μ, while the onsite density of particles for each species n_σ can be different in the different phases. (Fig. 17 sketches a specific phase-separated configuration at weak coupling; at stronger attraction, domain formation persists in the constrained case, with the unpaired species expelled from the paired regions and pairing up in other spatial domains, in parallel to the three-component asymmetric situation [42], whereas in the unconstrained case a spatially homogeneous trionic phase emerges instead [20,21].) Possible candidate phases for the system considered in this paper are suggested by the underlying SU(3) symmetry. Indeed, c-SF solutions corresponding to different gauge fixings, i.e. with pairing in different channels, have the same total onsite density n and therefore the same energy and grand potential, since they correspond to different realizations of the spontaneously broken symmetry. If for simplicity we consider only the three solutions with pairing between the natural species sketched in Fig. 17, this mixture of phases globally has the same number of particles N_σ = N/3 in each hyperfine state whenever we choose the fraction of each phase in the mixture to be α = 1/3 and n = N/M in each domain. In fact, each domain has the same densities n_p in the paired channel and n_u for the unpaired fermions, even though different species are involved in different domains. This scenario is therefore compatible with the global number constraint N_σ = N/3, and we can compare its energy with the energy E_hom of the globally homogeneous c-SF phase. The VMC calculations reported in Fig. 13 clearly indicate that, at fixed onsite density n, the ground-state energy per lattice site is lowered by a finite magnetization, i.e. E(m)|_n < E(0)|_n, and therefore E_hom > E_phase-separated = α Σ_{i=1}^{3} E_i = E(m), where E_i = E(m) is the energy per lattice site in the i-th domain. Thus a globally homogeneous c-SF phase has a higher energy than a mixture of polarized domains with the same N_σ and is therefore unstable with respect to phase separation.
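A minimal sketch of the energy bookkeeping behind this instability argument, with a toy quadratic E(m) standing in for the VMC curve of Fig. 13 (all numbers are illustrative):

```python
def phase_separated_energy(E_of_m, m_opt, fractions=(1/3, 1/3, 1/3)):
    """Energy per site of a mixture of three equivalent c-SF domains,
    each pairing in a different channel with magnetization m_opt.
    By SU(3) symmetry all domains have the same energy, so the mixture
    energy equals E(m_opt); the equal fractions restore the global
    balance N_sigma = N/3."""
    return sum(f * E_of_m(m_opt) for f in fractions)

# Toy energy-vs-magnetization curve at fixed onsite density n:
E = lambda m: -1.0 - 0.05 * m + 0.02 * m**2
m_opt = 0.05 / (2 * 0.02)            # minimizer of the toy quadratic
E_hom = E(0.0)                       # homogeneous unpolarized c-SF
E_ps = phase_separated_energy(E, m_opt)
assert E_ps < E_hom   # the homogeneous phase loses: phase separation wins
```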
It should be noted, however, that the configuration sketched in Fig. 17 represents only the simplest possible scenario compatible with the global boundary conditions N_σ = N/3. Indeed, in the SU(3)-symmetric case we have a continuous set of equivalent solutions, since solutions obtained by continuously rotating the pairing state from the 1-2 channel to a generic linear combination of species have the same energy and are therefore equally good candidates for the state with domain formation. Moreover, it is well known that breaking a continuous symmetry is intrinsically different from the discrete case, because of the presence of Goldstone modes [17]. In large but finite systems, the surface energy at the interface between domains, which is negligible in the thermodynamic limit, becomes relevant. On one hand, a continuous broken symmetry allows the system to reduce the surface-energy cost through an arbitrarily small change of the order parameter from domain to domain, pointing toward a scenario in which a large number of domains is preferable in real systems. On the other hand, when the system is finite, increasing the number of domains decreases their extension, reducing the bulk contribution; this competition eventually determines the number and size of the domains at equilibrium. Based on our current approaches, we can address neither the actual domain configuration in a finite system nor the question of whether scenarios with microscopic modulations of the SF order parameter occur [53,54]. Similar conclusions concerning the emergence of domain formation in the c-SF phase have already been drawn in [20,21,37], and also in a very recent work [49] which addresses the same system in continuum space.
In real experiments, both finite-size effects and inhomogeneities due to the trapping potential could play an important role in the actual realization of the presented scenario. Furthermore, since the SU(3) symmetry in cold atomic systems is not fundamental but arises from fine-tuning of the interaction parameters, imperfections will also arise from slight asymmetries in these parameters. We have shown previously [42] that in the strongly asymmetric limit phase separation is a very robust phenomenon. We may therefore conjecture that interaction-parameter asymmetries favor this scenario.
The combination of the findings of the present paper on the SU(3) case with those on the strongly asymmetric case in [42] suggests that phase separation in globally balanced mixtures is a quite general feature of three-species Fermi mixtures. The phases involved, however, generally differ between setups. In the strongly asymmetric case in the presence of a three-body constraint, the color superfluid phase undergoes a spatial separation into superfluid dimers and unpaired fermions [42]; there, the presence of the constraint is crucial to the phase-separation phenomenon, as testified by its survival well above the critical temperature for the disappearance of the superfluid phase [42]. In the fully SU(3)-symmetric case, instead, the presence of the constraint only modifies the nature of the underlying color superfluid phase, favoring fully polarized domains at strong coupling. The formation of many equivalent color superfluid domains can be seen as a special case of phase separation reflecting the SU(3) symmetry. In this case the phase-separation phenomenon is strongly connected to the superfluid and magnetic properties of the color superfluid phase, and it is expected to disappear at the critical temperature T_c and at the particular particle-hole symmetric point at half-filling in the unconstrained case.
Conclusions
We have studied an SU(3)-symmetric, attractively interacting mixture of three-species fermions on a lattice, with and without a three-body constraint, using dynamical mean-field theory (D ≥ 3) and variational Monte Carlo techniques (D = 2). We have investigated both the ground-state properties of the system and the effect of finite temperature, and we find a rich phase diagram. For the unconstrained system, we found a phase transition from a color superfluid state to a trionic phase, which shows additional charge-density modulation at half-filling. Both the superfluid order and the CDW disappear with increasing temperature.
In the presence of the three-body constraint, the ground state is always superfluid, but for strong interactions |U| > U_m the system becomes fully polarized at fixed total density n. Remarkably, according to our calculations the system stays fully polarized over a range of low temperatures. At higher temperatures a transition to the non-superfluid SU(3) Fermi-liquid phase is found. The critical temperature has a cusp precisely at U_m, in contrast to the SU(2)-symmetric case, where a smooth crossover in the critical temperature takes place.
The c-SF phase shows an interesting interplay between superfluid and magnetic properties. Except in the special case of half-filling, the c-SF phase always implies a spontaneous magnetization, which leads to domain formation in a balanced three-component mixture.
The kinetic energy operator can be split into several contributions, K = K_{+1} + K_0 + K_{−1}, where the subscript indicates the change in the total number of double occupancies N_d = N_d^{12} + N_d^{23} + N_d^{13} produced by each term. A representative contribution, Eq. (A.2), has the form

−J Σ_{⟨i,j⟩,σ} (n_{i,σ′} h_{i,σ″} + h_{i,σ′} n_{i,σ″}) c†_{i,σ} c_{j,σ} (n_{j,σ′} h_{j,σ″} + h_{j,σ′} n_{j,σ″}),   (A.2)

where n_{i,σ} = c†_{i,σ} c_{i,σ}, h_{i,σ} = 1 − n_{i,σ}, and σ, σ′, σ″ are all different (the overall prefactor and summation are our hedged reconstruction of the garbled original). We note that whereas K_0 preserves the total double occupancy N_d, it contains two different types of terms: (i) terms that also preserve the double occupancy in each channel N_d^{σσ′} (the K_0^a part) and (ii) terms that change the double occupancy in two different channels such that the total double occupancy stays unchanged (the K_0^b part). Thus we can write K_0 = K_0^a + K_0^b. We can also decompose the operators that change the total number of double occupancies as K_1 = K_1^{12} + K_1^{23} + K_1^{13}, (A.6), where the superscripts indicate the type of double occupancy being created or destroyed. The canonical transformation can be written as an expansion to second order. Using the relation [V, K^{σσ′}_{±1}] = ±U_{σ,σ′} K^{σσ′}_{±1} and applying the projection P_D, we arrive at the effective Hamiltonian of Eq. (A.11).
One can easily calculate that |⟨iσ|H|t_0⟩|² = J² and E_{t_0} − E_{iσ} = U_{σσ′} + U_{σσ″}, where σ, σ′ and σ″ are all different, so that we obtain

ΔE = zJ²/(U_12 + U_13) + zJ²/(U_12 + U_23) + zJ²/(U_13 + U_23),   (A.15)

where z is the number of nearest-neighbor lattice sites. The calculation above assumes that the neighboring sites of a trion are not occupied. If one of the neighboring sites is occupied by another trion, the corresponding virtual hops are Pauli blocked and the energy gain per trion is reduced accordingly. The effective interaction between two trions on neighboring sites is therefore given by the difference of the two gains; in the SU(3)-symmetric case this expression simplifies, and the resulting nearest-neighbor interaction between trions is repulsive. The next step is to calculate the effective hopping of the trions, for which one has to use third-order perturbation theory. Here |t_0⟩ and |t_1⟩ denote local trions on lattice site 0 and on the neighboring lattice site 1, respectively; |σ⟩ denotes a state where a fermion with spin σ occupies lattice site 1 while the two other fermions occupy lattice site 0; conversely, |σσ′⟩ denotes a state where two fermions with spins σ and σ′ occupy lattice site 1, and only the fermion with spin σ″ ≠ σ, σ′ remains on lattice site 0. For any σ and σ′ the matrix elements are given by ⟨t_0|H|σ⟩ = ⟨σ|H|σσ′⟩ = ⟨σσ′|H|t_1⟩ = −J, with E_{t_0} − E_σ = U_{σσ′} + U_{σσ″} and E_{t_1} − E_{σσ′} = U_{σσ″} + U_{σ′σ″}, where σ, σ′ and σ″ are three different hyperfine spins. We thus obtain Eq. (A.20), in which σ, σ′ and σ″ run over mutually distinct values in the sum. In the SU(3)-symmetric case the expression again simplifies, and we obtain the following effective Hamiltonian [48]

H_eff = −J_eff Σ_{⟨i,j⟩} t†_i t_j + V_eff Σ_{⟨i,j⟩} n^T_i n^T_j,

where t†_i is the creation operator of a local trion at lattice site i and n^T_i = t†_i t_i is the trionic number operator (the interaction term is our hedged reconstruction, suggested by the appearance of n^T_i and the repulsive nearest-neighbor interaction derived above).
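A small numerical check of Eq. (A.15) as reconstructed above; the couplings and coordination number are illustrative.

```python
def trion_breaking_gain(z, J, U12, U13, U23):
    """Second-order energy gain per isolated trion from virtual
    single-particle hops, Eq. (A.15). For attractive couplings the
    denominators are negative, so the result lowers the energy."""
    return (z * J**2 / (U12 + U13)
            + z * J**2 / (U12 + U23)
            + z * J**2 / (U13 + U23))

# SU(3)-symmetric point on a cubic lattice (z = 6), U in units of J:
U = -2.0
dE = trion_breaking_gain(z=6, J=1.0, U12=U, U13=U, U23=U)
print(dE)   # 3 z J^2 / (2U) = -4.5: each trion gains |dE| in energy
```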
|
v3-fos-license
|
2016-05-04T20:20:58.661Z
|
2010-12-06T00:00:00.000
|
4407773
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bmcinfectdis.biomedcentral.com/track/pdf/10.1186/1471-2334-10-344",
"pdf_hash": "67427c96c07d1fe3e32c481ff70e59a45aac1f44",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41836",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "eeb972ff28421f2a6bb2fbc97b2b5b42d20500bb",
"year": 2010
}
|
pes2o/s2orc
|
Blood cultures for the diagnosis of multidrug-resistant and extensively drug-resistant tuberculosis among HIV-infected patients from rural South Africa: a cross-sectional study
Background The yield of mycobacterial blood cultures for multidrug-resistant (MDR) and extensively drug-resistant tuberculosis (XDR-TB) among drug-resistant TB suspects has not been described. Methods We performed a retrospective, cross-sectional analysis to determine the yield of mycobacterial blood cultures for MDR-TB and XDR-TB among patients suspected of drug-resistant TB from rural South Africa. Secondary outcomes included risk factors of Mycobacterium tuberculosis bacteremia and the additive yield of mycobacterial blood cultures compared to sputum culture. Results From 9/1/2006 to 12/31/2008, 130 patients suspected of drug-resistant TB were evaluated with mycobacterial blood culture. Each patient had a single mycobacterial blood culture with 41 (32%) positive for M. tuberculosis, of which 20 (49%) were XDR-TB and 8 (20%) were MDR-TB. One hundred fourteen (88%) patients were known to be HIV-infected. Patients on antiretroviral therapy were significantly less likely to have a positive blood culture for M. tuberculosis (p = 0.002). The diagnosis of MDR or XDR-TB was made by blood culture alone in 12 patients. Conclusions Mycobacterial blood cultures provided an additive yield for diagnosis of drug-resistant TB in patients with HIV from rural South Africa. The use of mycobacterial blood cultures should be considered in all patients suspected of drug-resistant TB in similar settings.
Background
Tuberculosis (TB) is the leading cause of mortality among people living with HIV worldwide [1,2]. Drug-resistant TB has emerged as an important global threat to public health. Although previously considered uncommon in high-HIV-prevalence settings, multidrug-resistant (MDR)-TB prevalence has increased 3-4 fold in southern Africa over the past decade [3,4]. In addition, extensively drug-resistant (XDR)-TB has been reported from all countries in southern Africa. MDR and XDR-TB are associated with a much higher mortality than drug-susceptible TB [5], especially among HIV co-infected persons [6].
Prompt diagnosis and treatment are essential to improve drug-resistant TB outcomes, but TB diagnosis in patients with HIV co-infection is challenging, particularly in resource-limited settings [6]. HIV-infected TB patients have higher rates of extrapulmonary disease, atypical clinical presentations, and normal chest radiographs [7][8][9][10]. With the emergence of MDR and XDR-TB in HIV-infected populations worldwide [3], it is therefore likely that there will be a consequent rise in extrapulmonary MDR and XDR-TB disease [11]. The diagnosis of drug-resistant TB requires isolation of an organism, or DNA in the case of molecular tests, thus vigorous efforts to obtain a specimen that may yield Mycobacterium tuberculosis are needed in settings with a high prevalence of drug resistance.
Mycobacteremia with drug-susceptible TB was described in reports of HIV-infected patients early in the HIV epidemic in the United States [12,13]. Yet the yield of blood cultures in detecting M. tuberculosis can vary between 2% and 64%, depending on the study population and the suspicion for extrapulmonary TB [14][15][16][17][18][19]. In addition, HIV-infected patients may be predisposed to other bacterial and fungal bloodstream infections that clinically mimic TB, leading to delays in diagnosis or overtreatment of TB. In sub-Saharan Africa, M. tuberculosis bacteremia has been documented in patients with bloodstream infections at a referral hospital in Tanzania, in patients with cough in Botswana, and as a critical etiology of sepsis among HIV-infected patients in Uganda, but drug-susceptibility testing (DST) was not performed [20][21][22]. To date, no study has reported the yield of blood cultures for the detection of MDR and XDR-TB.
Over 650 patients with MDR and XDR-TB have been identified from the rural Tugela Ferry area of KwaZulu-Natal, South Africa, where HIV co-infection rates exceed 90% [5,23]. Mycobacterial blood cultures have been used routinely in Tugela Ferry since 2006 in an attempt to improve case detection of drug-resistant TB. We sought to quantify the yield of blood cultures for MDR and XDR-TB, compare the yield to that of sputum culture, and identify risk factors for M. tuberculosis bacteremia in this population in order to guide clinical practice and public health policy.
Setting
The Church of Scotland Hospital (COSH) in Tugela Ferry is a 355-bed facility serving a population of 200,000 Zulu people. The local incidence of TB is estimated at 1,100 per 100,000 population, with approximately 80% of TB cases HIV co-infected [24]. Onsite diagnostics include smear microscopy for acid-fast bacilli in sputum and cerebrospinal fluid specimens. Sputum and other non-sputum fluid specimens requiring culture and DST are sent daily to the provincial TB referral laboratory in Durban (approximately 180 kilometers away). Since June 2005, all TB suspects presenting to COSH have been requested to give two 'spot' sputum specimens, one for onsite smear microscopy and one for mycobacterial culture and DST.
Study design
We performed a retrospective, cross-sectional study of all drug-resistant TB suspects in whom at least one mycobacterial blood culture was sent from September 1, 2006 to December 31, 2008. Clinicians defined a drug-resistant TB suspect based on the presence of TB symptoms (e.g., cough, night sweats, weight loss) with one or more of the following additional criteria: advanced HIV/AIDS, a prior history of TB treatment, or persistent symptoms despite one month or more of drug-susceptible TB treatment. If a sputum culture was available, it was included for analysis if collected within two weeks before or after the date of collection of the blood culture.
Medical chart review was performed to obtain demographic, clinical and microbiological information. Specific data extracted included age, gender, HIV status, receipt and duration of antiretroviral treatment, CD4 cell count (cells/mm³) prior to blood culture collection, history of TB treatment, TB treatment status at the time of blood culture collection, and physicians' documentation of signs of extrapulmonary TB.
Definitions and outcome measures
Patients were categorized as being extrapulmonary TB suspects if a physician had documented specific extrapulmonary organ involvement that was suggestive of TB (e.g., pericardial effusion or abdominal lymphadenopathy on ultrasound) or if the physician documented a suspicion for extrapulmonary TB in the chart. MDR-TB was defined as resistance to at least isoniazid and rifampicin, while XDR-TB was defined as resistance to at least isoniazid, rifampicin, kanamycin and ofloxacin [25]. Susceptibility testing to other second-line TB drugs was not routinely done.
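The study definitions map directly onto a small classifier; the sketch below is our own illustration (drug names as strings, resistance given as a set), not code from the study.

```python
def classify_dst(resistant):
    """Classify a drug-susceptibility test result per the study
    definitions: MDR-TB = resistance to at least isoniazid and
    rifampicin; XDR-TB = MDR plus resistance to at least kanamycin
    and ofloxacin. `resistant` is the set of drugs with resistance."""
    mdr = {"isoniazid", "rifampicin"} <= resistant
    xdr = mdr and {"kanamycin", "ofloxacin"} <= resistant
    if xdr:
        return "XDR-TB"
    if mdr:
        return "MDR-TB"
    return "not MDR/XDR"

print(classify_dst({"isoniazid", "rifampicin", "kanamycin", "ofloxacin"}))  # XDR-TB
print(classify_dst({"isoniazid", "rifampicin", "streptomycin"}))            # MDR-TB
```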
The primary outcome was the yield of blood cultures for drug-resistant TB, defined as the proportion of blood cultures that were positive for MDR or XDR-TB. Secondary outcomes included: 1) risk factors of M. tuberculosis bacteremia, and 2) comparison of blood culture to sputum culture for additive yield of blood in detection of M. tuberculosis and drug resistance.
Laboratory methods
All patients had 5 ml of blood collected and inoculated into mycobacterial blood culture bottles (BACTEC Myco/F Lytic), which were placed in a darkened storage container at room temperature prior to transport to the provincial referral laboratory. Blood culture bottles were cultured using the automated BACTEC 9240 system, in which specimens are continuously monitored for growth for up to 42 days [26]. All positive cultures were confirmed by niacin accumulation and nitrate reductase methods. DST was performed on all specimens positive for M. tuberculosis using the 1% proportional method [27] on Middlebrook 7H11 agar against isoniazid (critical concentration, 0.2 μg/ml), rifampicin (1 μg/ml), ethambutol (7.5 μg/ml), ofloxacin (2 μg/ml), kanamycin (6 μg/ml) and streptomycin (2 μg/ml). Non-tuberculous mycobacteria (NTM) were not further speciated.
Sputum samples were refrigerated before and during transport to the provincial TB referral laboratory. Upon receipt, the specimen was digested and decontaminated with the N-acetyl-L-cysteine-sodium hydroxide method and smears were prepared for auramine staining. The remainder of the deposit was transferred for liquid culture in the automated BACTEC MGIT 960 system. DST was completed on all positive specimens after secondary inoculation on to Middlebrook 7H11 agar and using the 1% proportional method for the drugs as described with blood specimens.
Statistical analysis
Yield of M. tuberculosis detected by blood and sputum was calculated using simple frequencies and proportions. Demographic and clinical characteristics were compared with the chi-square statistic or the Mann-Whitney U test for non-parametric data. Bivariate and multivariate logistic regression were employed to determine risk factors for M. tuberculosis bacteremia. All tests for significance were 2-sided, with a p-value < 0.05 considered significant. For variables with >10% missing data, tests of interaction were performed when appropriate. The multivariate model included any variable with a p-value < 0.1 in bivariate analysis and any pertinent clinical and demographic characteristics. All analyses were performed using SPSS Statistics 17.0 software.
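For illustration, the described analysis pipeline could be reproduced as follows in Python (the original analysis used SPSS; the column names and simulated data below are hypothetical placeholders):

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency, mannwhitneyu
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({                       # 130 hypothetical patients
    "mtb_bacteremia": rng.binomial(1, 0.32, 130),
    "on_art":         rng.binomial(1, 0.46, 130),
    "extrapulm":      rng.binomial(1, 0.35, 130),
    "age":            rng.normal(32, 8, 130),
})

# Chi-square test for a categorical predictor:
chi2, p_art, _, _ = chi2_contingency(pd.crosstab(df.on_art, df.mtb_bacteremia))

# Mann-Whitney U test for a non-parametric continuous comparison:
_, p_age = mannwhitneyu(df.loc[df.mtb_bacteremia == 1, "age"],
                        df.loc[df.mtb_bacteremia == 0, "age"])

# Multivariate logistic regression (bivariate p < 0.1 variables plus
# pertinent clinical covariates would enter the model):
model = smf.logit("mtb_bacteremia ~ on_art + extrapulm + age", data=df).fit()
print(np.exp(model.params))   # odds ratios
```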
Ethical considerations
The study was approved by the biomedical research ethics committees of the University of KwaZulu-Natal, Albert Einstein College of Medicine, and Yale University.
Results
One hundred thirty patients suspected of drug-resistant TB had mycobacterial blood cultures performed during the study period and were included for analysis. All patients had only one blood culture specimen. There were 73 males (56%); the median age was 31.5 years (interquartile range [IQR] 27-38), and 8 (6%) patients were less than 12 years of age (Table 1). HIV infection was confirmed in 114 (88%) patients. The CD4 cell count was available in 63 (55%) HIV-infected patients, with a median cell count of 100 cells/mm³ (IQR …). Fifty-three (46%) HIV-infected patients were on antiretroviral therapy at the time of blood culture collection. The median duration of antiretroviral therapy for patients on treatment was 15.4 weeks (IQR 4.7-31.0).
Of the 130 patients, 88 (68%) had no prior history of TB, although 89 (69%) patients were failing drug-susceptible TB treatment at the time of blood culture collection. The median duration of TB treatment for these patients was 8.0 weeks (IQR 4.0-20.0). Forty-five (35%) patients were suspected to have extrapulmonary TB.
Of the remaining positive blood cultures, NTM were found in 3 (6%) specimens and Cryptococcus species in 3 (6%) specimens. Of the six study patients with Cryptococcus species or NTM detected in the blood, five were receiving first-line TB therapy at the time of blood culture collection, yet none of these patients had culture documentation of M. tuberculosis.
Risk factors for M. tuberculosis bacteremia
Significantly more patients with M. tuberculosis bacteremia were extrapulmonary TB suspects (Table 1). Although age was not a significant risk factor, M. tuberculosis was cultured from the blood of patients as young as 8 years (MDR-TB) and as old as 62 years (XDR-TB).
Comparison of blood to sputum cultures for MDR-TB and XDR-TB yield
Of the 41 patients with M. tuberculosis bacteremia, 23 also had a sputum sample collected for comparison (Table 3). In two patients the sputum sample was negative, but the blood cultures revealed XDR-TB in one and MDR-TB in the other. DST was not completed on one patient's sputum sample, but the blood culture revealed XDR-TB. Among all patients in whom DST was completed on both the blood and the sputum sample, the DST results were identical. Considering the 21 patients in whom sputum cultures were negative, DST was incomplete, or sputum was not collected, there were 9 (43%) blood cultures that diagnosed drug-susceptible TB and 12 (57%) that diagnosed MDR or XDR-TB (Table 3). Despite extrapulmonary TB suspects being at higher risk of M. tuberculosis bacteremia, among patients with both a blood and a sputum culture positive for M. tuberculosis, only 24% were suspected of extrapulmonary TB.
Discussion
We found that among a predominantly HIV-infected population of patients suspected of drug-resistant TB, MDR-TB and XDR-TB were isolated in nearly 70% of all positive M. tuberculosis blood cultures. Importantly, among patients with MDR or XDR-TB bacteremia, in over half of those in whom a sputum culture was unavailable the blood culture was the only means of drug-resistant TB diagnosis. Bacteremia with XDR-TB was more common than with MDR-TB, reflecting community trends from sputum diagnosis in Tugela Ferry [28].
Current guidelines suggest that the use of mycobacterial blood cultures may be beneficial in suspected cases [29]. A recent comprehensive screening study of HIV-infected ambulatory persons from Southeast Asia found only a 5% incremental yield of blood cultures for TB diagnosis among those with two negative sputum smears; DST results were not provided [19]. In contrast, the results from our study population likely reflect advanced immunosuppression, prolonged TB illness prior to blood culture collection, and the high pretest suspicion of drug-resistant TB. The additive yield of blood cultures is likely to vary in other regions with differing disease epidemiology. Nonetheless, these results suggest that M. tuberculosis bacteremia is likely to be present in drug-resistant TB suspects at higher rates than clinically suspected. Thus, we feel that these results are generalizable to other populations in sub-Saharan Africa where TB/HIV co-infection rates are high and the incidence of drug-resistant TB may be increasing.

The bulk of the additive yield for MDR and XDR-TB in blood compared to sputum cultures was found in patients who did not have a sputum sample collected. The most common reason for not collecting a sputum sample in this hospital is the patient's inability to expectorate due to an absence of cough or marked physical disability; however, owing to the retrospective nature of this study, we cannot confirm the reasoning for any individual patient. Standard practice in other settings is to collect two or more sputum samples for microscopy and/or TB culture as a means of increasing yield. Further prospective study is warranted to determine how multiple sputum samples would affect the yield relative to blood culture in similar populations with advanced HIV. Blood is an easily accessible fluid and carries the additional advantage of not requiring cold storage for transport. Additionally, the cost of analyzing a mycobacterial blood culture with the National Health Services Laboratory in South Africa is no higher than that of MGIT analysis of a sputum specimen.
In this study, the majority of patients with M. tuberculosis bacteremia were not suspected to have extrapulmonary TB. Mycobacterial culture of lymph node aspirates and pleural fluid was available to clinicians during this study period [11], yet an aspirate was performed in only three patients, and all results were concordant with the blood culture. Indeed, only a minority of patients with positive blood and sputum cultures for M. tuberculosis were suspected of extrapulmonary TB. Thus, our findings suggest that many patients with pulmonary TB in this setting may also harbor unrecognized M. tuberculosis bacteremia. Detection of otherwise occult M. tuberculosis bacteremia, regardless of DST, in a patient without suspected extrapulmonary TB may prompt a more exhaustive search for an extrapulmonary focus, which could alter treatment and carry important implications for monitoring and clinical outcome.

Patients on antiretroviral medication at the time of blood culture collection were significantly less likely to have M. tuberculosis bacteremia. Earlier studies of M. tuberculosis bacteremia in similar populations in Africa were carried out prior to the widespread availability of antiretrovirals, so this association could not have been documented until now [20,21]. Our findings lend further support to the growing body of evidence for early initiation of antiretrovirals in the treatment of TB and HIV co-infected patients [30]. Notably, the median duration of antiretroviral use in our study population was 15 weeks, a reasonable timeframe to present with immune reconstitution inflammatory syndrome (IRIS). We suspect that some patients on antiretroviral therapy who were culture-negative for TB may actually have presented with IRIS, a condition which may share signs and symptoms with TB and drug-resistant TB; however, complete follow-up data were not available for confirmation. Interestingly, there was no difference in CD4 count between patients with and without M. tuberculosis bacteremia. One explanation is that, in accord with national guidelines, the CD4 count is checked only twice annually; thus the recorded CD4 count may be falsely low for patients who initiated antiretroviral therapy within the prior six months. Alternatively, in some patients early restoration of lymphocyte function may precede restoration of the total lymphocyte count.
One of the primary limitations of the study, given its retrospective design, is that the decision to collect a blood culture was dependent upon the attending clinician; additional patients suspected of drug-resistant TB may therefore not have had blood cultures sent and were not included in the study. Additional factors that influenced the decision to pursue the investigation of M. tuberculosis bacteremia may not have been captured. It is also possible that blood cultures were preferentially pursued in patients in whom a diagnosis was not readily made by sputum analysis. Therefore, only a prospective study with simultaneous blood collection and rigorous collection of multiple sputum samples in all drug-resistant TB suspects would allow determination of the true incremental yield in this setting.
Conclusions
In summary, mycobacterial blood cultures diagnosed MDR and XDR-TB in a substantial number of predominantly HIV-infected patients suspected of drug-resistant TB from rural South Africa. Bacteremia with drug-susceptible and drug-resistant TB was not restricted to patients suspected of extrapulmonary TB, as many patients with sputum-culture-confirmed pulmonary TB also had M. tuberculosis bacteremia. The adjunctive use of mycobacterial blood cultures should be considered in all patients suspected of drug-resistant TB, particularly those unable to expectorate. In many regions of Africa and the developing world, culture and DST are not routinely performed for the diagnosis of TB, despite the inferior sensitivity of routine sputum microscopy and the inability of microscopy to detect drug-resistant TB. Expanded access to culture and DST of sputum in South Africa has been projected to save 47,955 lives and avert 7,721 new MDR-TB cases over the next 10 years [31]. Our finding that a significant proportion of drug-resistant TB suspects had MDR-TB or XDR-TB bacteremia underscores the need for more widespread use of culture and DST for both sputum and blood specimens.
|
v3-fos-license
|
2021-11-28T05:25:37.009Z
|
2021-11-01T00:00:00.000
|
244518038
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2072-6643/13/11/4175/pdf",
"pdf_hash": "5121c87697df6b10c0a18f24dc125043e3fbf790",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41837",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "2153b7ae294fa53371357c41e703b9e5ee6a7997",
"year": 2021
}
|
pes2o/s2orc
|
Prevalence and Determinants of Vitamin D Deficiency in 9595 Mongolian Schoolchildren: A Cross-Sectional Study
Population-based data relating to the vitamin D status of children in Northeast Asia are lacking. We conducted a cross-sectional study to determine the prevalence and determinants of vitamin D deficiency in 9595 schoolchildren aged 6–13 years in Ulaanbaatar (UB), the capital city of Mongolia. Risk factors for vitamin D deficiency were collected by questionnaire, and serum 25-hydroxyvitamin D (25[OH]D) concentrations were measured using an enzyme-linked fluorescent assay, standardized and categorized as deficient (25[OH]D <10 ng/mL) or not. Odds ratios for associations between independent variables and risk of vitamin D deficiency were calculated using multivariate analysis with adjustment for potential confounders. The prevalence of vitamin D deficiency was 40.6% (95% CI 39.7% to 41.6%). It was independently associated with female gender (adjusted odds ratio [aOR] for girls vs. boys 1.23, 95% CI 1.11–1.35), month of sampling (aORs for December–February vs. June–November 5.28 [4.53–6.15], March–May vs. June–November 14.85 [12.46–17.74]), lower levels of parental education (P for trend <0.001), lower frequency of egg consumption (P for trend <0.001), active tuberculosis (aOR 1.40 [1.03–1.94]), household smoking (aOR 1.13 [1.02–1.25]), and shorter time outdoors (P for trend <0.001). We report a very high prevalence of vitamin D deficiency among Mongolian schoolchildren, which requires addressing as a public health priority.
Introduction
It is estimated that at least 1 billion individuals globally have sub-optimal serum 25-hydroxyvitamin D [25(OH)D] levels [1]. 25(OH)D is the major circulating metabolite of vitamin D and is widely acknowledged to be the most robust and reliable measure of vitamin D status [2]. Studies investigating vitamin D deficiency in Mongolia found a prevalence of 80.1% among Mongolian adults in winter and 80% among women of reproductive age [3,4]. Mongolians have low 25(OH)D levels due in part to Mongolia's high latitude, increasing air pollution (especially in the capital city of Ulaanbaatar), lack of sun exposure during winter and spring, and limited access to vitamin D-rich food (e.g., fish and mushrooms) [5][6][7]. To address these issues, the Mongolian government has been considering solutions to the problem of vitamin D deficiency and ways to promote the supply of micronutrients such as vitamin D to the general population.
Vitamin D supplementation has been proposed as an intervention to raise serum 25(OH)D levels. However, the most recent national nutrition survey suggests poor adherence to supplements and resistance to consuming vitamin D-rich foods [8]. Food fortification has therefore recently gained wide support in Mongolia as a means of supplying vitamin D on a national scale [9]. Although fortification can be a strong tool for alleviating micronutrient deficiency, other risk factors within the Mongolian population may also be strongly linked to vitamin D deficiency.
The present study describes a cross-sectional analysis of vitamin D status in a large sample of Mongolian schoolchildren. This population is of major interest because schoolchildren undergo rapid growth and development. Our main purpose was to evaluate relationships that may exist between modifiable or non-modifiable risk factors and the risk of vitamin D deficiency within this population, particularly household, nutritional, health, and sociodemographic determinants. Mongolia has one of the highest tuberculosis (TB) incidence rates among Asian countries, at 428 cases per 100,000 per year, of which 10% are pediatric [10], motivating the study to assess its potential relationship with vitamin D status. We used cohort information to conduct a secondary analysis and to identify potential risk factors associated with low 25(OH)D levels. Cross-sectional studies evaluating determinants of vitamin D deficiency can inform the design of health programs by identifying risk factors that are potentially amenable to intervention. These results may enhance the effectiveness of the Mongolian government's efforts to improve the population's vitamin D status and help develop targeted interventions.
Study Design and Setting
Mongolia is a land-locked country located between China and Russia with a population of 3.1 million people, of whom almost half reside in the capital city, Ulaanbaatar. We conducted a cross-sectional analysis of baseline data collected from children attending 18 public schools (located in six districts of Ulaanbaatar) who were being screened for participation in a randomized, controlled trial of vitamin D supplementation for the prevention of latent tuberculosis infection (LTBI) [11]. As recruits to a double-blind clinical trial, participating children were randomized to receive either vitamin D supplements or placebo. This study was implemented to explore independent associations between vitamin D deficiency and increased susceptibility to TB among the Mongolian population [11]. The study was approved by the Institutional Review Boards of the Mongolian Ministry of Health and the Harvard T. H. Chan School of Public Health (IRB ref. no. 14-0513) and funded by the National Institutes of Health (ClinicalTrials.gov number, NCT02276755).
Sample Size and Eligibility
Demographic data were collected from 11,475 children in 18 public schools in Ulaanbaatar who were considered potentially eligible for the primary randomized, controlled trial. Subjects were excluded from further participation because they did not attend informational meetings at the school or because initial consent was later retracted by parents or other legal guardians, leaving 9782 subjects by the end of the study. Eligible subjects were children aged 6 to 13 years at screening who attended one of the 18 participating schools and had data available (vitamin D supplements were given via school during the parent trial); children with pre-existing tuberculosis or evidence of rickets were excluded. Figure 1 shows the number of children from the original sample who were included in the final sample.
Data Collection and Measurement
Data were collected at baseline for a parent clinical trial [11]. Household and diet information was collected at baseline via questionnaire and interview (with certain foods such as meat, fish, and eggs included as primary sources of dietary vitamin D), and parents and guardians were contacted for full details when needed. Characteristics were considered modifiable if they were behaviors or other factors that can be reasonably altered by the parent or child (e.g., district, household income, eating behaviors, physical activity, BMI-for-age, smoking levels); if not, they were considered non-modifiable (e.g., age, gender, month of sampling, TB classification). Physical measurements, vitamin D measurements, and TB tests were collected by trained trial field workers. A 5-mL blood sample was obtained from each child for QuantiFERON -TB Gold (QFT-G) testing and for measurement of serum 25(OH)D levels. Children with positive QFT-G tests were referred to the Mongolian National Center for Communicable Diseases (NCCD) for clinical and radiographic screening for tuberculosis disease. Vitamin D levels were measured in Global lab using an enzyme-linked fluorescent assay (VIDAS 25OH Vitamin D total; Biomerieux, Marcy-l'Etoile, France). The assay was accredited by the Vitamin D External Quality Assessment Scheme (DEQAS). The total Coefficient of Variation (CV) was 7.9%, mean bias was 7.7%. and the limit of quantitation (LOQ) was 8.1 ng/mL.
Statistical Analysis
Serum 25(OH)D levels were standardized with the use of standards provided by the Vitamin D External Quality Assessment Scheme [12] prior to conversion to a binary variable, in which vitamin D deficiency was defined as a serum 25(OH)D level less than 10 ng/mL, supported by most commercial laboratories as the standard [13]. For continuous variables (age, height, weight, waist circumference, BMI-for-age Z-score, and fat mass), means and standard deviations are reported in Table 1. Household annual income was categorized based on quartiles. BMI (body mass index) data were generated using height and weight measurements and converted to BMI-for-age Z-scores, using World Health Organization reference data via the Canadian Pediatric Endocrine Group ShinyApps platform [14]. Complete case analysis (i.e., exclusion of subjects with missing data) was performed for any individual whose record showed missing data, leaving 9595 subjects for analysis. The potential predictors of vitamin D deficiency were chosen based on existing literature, and variance inflation factors (a metric used to detect multicollinearity by quantifying how much the variance of a coefficient is inflated) were calculated to reduce the influence of highly correlated variables. Reference levels of the measured variables were chosen as the category with the highest level of vitamin D deficiency, because odds ratios are interpreted more naturally when compared against the odds of the most deficient group. Parameter estimates and 95% confidence limits were generated by logistic regression models. Consequently, the exponentiated value of a coefficient should be interpreted as the expected change in the odds of vitamin D deficiency in response to a one-unit increase in a continuous parameter or a one-level increase in a categorical parameter, holding other parameters constant. Considering that 25(OH)D levels can vary greatly by season in Mongolia [3], a categorical variable for the month that samples were drawn was included in every model. Since summer months were not available in this dataset, months were separated into three general categories (September-November, December-February, and March-May) to capture seasonal effects. Univariate analysis was conducted for all potential predictors, and variables that yielded a p-value less than 0.1 were used in multivariable analysis. Variables in the multivariable analysis with a p-value less than 0.05 were considered likely determinants of vitamin D deficiency. For categorical variables, a likelihood ratio test was used to generate a global p-value to assess categories as a group. All analyses were done in R version 4.0 for Mac OS X Catalina, and anonymized raw data and modeling code are available on request.
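A minimal sketch of this modeling workflow in R follows; the data frame `dat` and all variable names are illustrative assumptions, not the trial's actual column names:

```r
library(car)  # for vif()

# Binary outcome: serum 25(OH)D below 10 ng/mL
dat$vitd_def <- as.integer(dat$serum_25ohd < 10)

# Month of sampling collapsed into three seasonal categories, always included
dat$season <- factor(dat$season, levels = c("Sep-Nov", "Dec-Feb", "Mar-May"))

# Multivariable logistic regression on predictors retained from
# univariate screening (p < 0.1)
fit <- glm(vitd_def ~ age + gender + season + district + parent_edu +
             egg_freq + tb_class + household_smoking + outdoor_activity,
           family = binomial, data = dat)

vif(fit)                                    # variance inflation factors
exp(cbind(aOR = coef(fit), confint(fit)))   # adjusted ORs with 95% CIs
anova(fit, test = "LRT")                    # global likelihood ratio tests
```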
Characteristics of the Study Population
Household and demographic characteristics are detailed for the overall population in Table 1 and among vitamin D-deficient individuals in Table 2. After removing all subjects with missing data, 9595 subjects remained for analysis. The overall prevalence of vitamin D deficiency (defined using the 10-ng/mL 25(OH)D threshold) in this sample was 40.6% (3900 out of 9595) (95% CI 39.7% to 41.6%). The participants' gender distribution was virtually even, and the mean age was 9.4 years. Participants came from six districts, with the fewest from an undefined "Other" region and the most from the Sukhbaatar district. Most participants lived in a house or apartment without central heating or lived in a ger (a traditional Mongolian felt-covered structure) and had a household income in the highest quartile of the study population. Most subjects consumed red meat every day or almost every day. In contrast, most subjects consumed eggs only 1-4 times per month and seldom consumed any seafood or animal liver/intestinal organs. Most subjects did not live with any household members who smoked and did not smoke themselves. Most subjects had a BMI-for-age Z-score between −2 and +2 but had less than 1 h of daily outdoor activity.
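As a quick check, the reported prevalence and its confidence interval can be reproduced in base R (a sketch; the paper does not state which interval method was used, so a score interval is assumed):

```r
# 3900 vitamin D-deficient children out of 9595 analyzed
prop.test(3900, 9595, correct = FALSE)
# point estimate ~0.406, 95% CI ~0.397 to 0.416,
# matching the reported 40.6% (39.7% to 41.6%)
```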
Predictors of Vitamin D Deficiency
Results of the multivariable regression analysis are summarized in Table 2. Following univariate analysis, the adjusted model included age, gender, month of sampling, district, highest level of parental education, frequency of egg consumption, TB classification, any smoking within the household, and daily outdoor activity. Vitamin D deficiency was independently associated with female gender, lower levels of parental education (aOR for secondary education vs. university 1.36, 95% CI 1.21 to 1.52; primary vs. university 1.32, 95% CI 1.04 to 1.69; no education vs. university 1.48, 95% CI 1.11 to 1.99), and the other predictors listed in Table 2 with p-values below 0.05. Thus, there was a relatively even split of modifiable and non-modifiable independent risk factors that met the significance threshold. In addition, variation in 25(OH)D is apparent in scatterplots against month of sampling, outdoor activity, and frequency of egg consumption (Figure 2).
Discussion
Although predictors of vitamin D deficiency are complex, their identification is important for the development of policy that seeks to improve serum 25(OH)D levels in the Mongolian population. Using data from the parent vitamin D supplementation trial, we identified several predictors that were associated with vitamin D deficiency, including gender, months sampled, district of residency, parental education, frequency of egg consumption, TB status, any smoking in the household, and frequency of daily outdoor activity. Identification of these predictors can guide policy and program changes that may be directed towards addressing vitamin D levels in Mongolia.
The risk factors identified were independent predictors of vitamin D deficiency and consisted of an even split of modifiable and non-modifiable risk factors. District of residence in Ulaanbaatar is difficult to explain concretely as a risk factor, since numerous qualities of a district may affect vitamin D status (e.g., number of markets selling vitamin D-rich foods, traffic congestion motivating individuals to stay indoors, urban planning of available outdoor space, socioeconomic status of the district). However, annual household income and type of household were not strongly associated with vitamin D deficiency, reducing the likelihood of an economic explanation. The months that serum was sampled had both positive and negative associations, reflecting the fluctuating nature of 25(OH)D levels across seasons, likely due to sunlight and food availability. Households with smokers had higher odds of vitamin D deficiency, which could be due to poorer health habits or an undefined biological relationship between secondhand smoke and 25(OH)D. This relationship has been seen before in U.S. and Korean populations and has been hypothesized to be due to nicotine-induced hypoparathyroidism that affects 25(OH)D metabolism [15,16]. A positive TB diagnosis was associated with higher odds of vitamin D deficiency, which could potentially be explained by TB interrupting healthy behavior or by an undefined biological relationship (e.g., TB may perturb vitamin D metabolism) [17]. Since these data are cross-sectional, we cannot exclude the possibility that vitamin D deficiency is a consequence of TB, which should be explored in future studies. Higher frequency of egg consumption was associated with lower odds of vitamin D deficiency; eggs have been corroborated as a good source of vitamin D in the Asian region [18], where few foods naturally containing vitamin D are produced [19]. Higher frequency of daily outdoor activity was associated with lower odds of vitamin D deficiency, as expected given the increased exposure to sunlight. Differences in the risk of vitamin D deficiency across genders could reflect gender-specific behavioral activity or nutrition that we were unable to adjust for in the multivariable analysis. In addition, since households with university-educated parents had lower odds of vitamin D deficiency, this could underscore better awareness of nutrition and nutrition practices in such households.
Since these predictors reflect several underlying causes, they are difficult to address with a single policy or program. Although some modifiable predictors can be targeted via government intervention, tangible improvement necessitates a more comprehensive plan of action. In addition, micronutrient supplementation with vitamin D, although effective in raising vitamin D levels in schoolchildren [11,20-22], is difficult to provide for those not attending school, and given the country's large nomadic population, supplements are difficult to ship and store across a wide and sparsely populated area [24]. The coronavirus pandemic has made micronutrient supplements even less accessible and more costly [23]. For these reasons, food fortification has become a major focus of the government. Fortification of staple foods such as flour is considered a more viable option since a large portion of the population purchases commercial flours throughout the year [25,26]. A projected effectiveness study found flour to be the most practical food fortification candidate (as opposed to milk and oil), given its universal consumption and centralized production [26]. Preliminary estimates in that report suggested that such a measure would increase vitamin D intake considerably. Although food fortification has been successful in addressing micronutrient deficiencies in other countries, our findings indicate that targeted interventions focused on the other identified risk factors could complement such efforts in Mongolia.
One strength of this study is the large sample of Mongolian children, a group that has rarely been studied in the past. A wide range of household and demographic characteristics was studied, although some categories may have had too few subjects for definitive analysis. Fortunately, our rate of missing data was minimal. Another strength is that the analysis of serum 25(OH)D levels took into account the month of sampling, since seasons have different impacts on food availability and sunlight exposure. For example, a study of pregnant women in Ulaanbaatar by Uush and colleagues found that serum 25(OH)D levels varied by season [27]. This seasonal variation was also found by Bromage and colleagues among Mongolian adults, with levels in the winter being especially low [3]. Further studies should be conducted in regions outside of Ulaanbaatar for a more comprehensive picture at the national level. In addition, following this same cohort could provide useful information on how vitamin D levels change through adulthood. Micronutrient deficiencies (including vitamin D) are currently ubiquitous during childhood in Mongolia and may ultimately impact growth, development, and adult productivity [28]. For this reason, addressing these deficiencies early through targeted interventions is of major importance for future generations.
Conclusions
Despite efforts to improve serum 25(OH)D levels in Mongolia, vitamin D deficiency remains a significant public health concern. Especially during times when infectious diseases are prevalent, like the COVID-19 pandemic, achieving and maintaining vitamin D sufficiency is of major importance. This study identified several important modifiable and non-modifiable determinants associated with vitamin D deficiency, including gender, months sampled, district of residency, parental education, frequency of egg consumption, TB status, any smoking in the household, and frequency of daily outdoor activity. The effectiveness of efforts aimed at improving 25(OH)D levels in Mongolia such as food fortification can be supplemented by targeted interventions that address determinants we identified in this study.
A (NiMnCo)-Metal-Organic Framework (MOF) as active material for Lithium-ion battery electrodes
A (NiMnCo)-Metal-Organic Framework and its oxidized and pyrolysed derivatives have been tested as electrode materials for lithium-ion batteries. Materials have been fully characterized by Scanning Electron Microscopy (SEM), powder X-Ray Diffraction (XRD), Thermogravimetric Analysis (TGA) and Infrared (IR) spectroscopy, and electrochemical properties have been determined in coin cells using lithium metal as the counter electrode. Studies have revealed specific capacities of 860 mAh g⁻¹ and 800 mAh g⁻¹ at 15 mA g⁻¹ for the MOF and the oxide, respectively (690 and 190 mAh g⁻¹ after 50 cycles), whereas the corresponding pyrolysed compound has shown limited electrochemical performance. Ex situ XRD has been performed to highlight the evolution of the material structure during cycling. These results show that electrochemical storage is based on a conversion reaction.
Introduction
Energy storage is one of the biggest challenges for the coming decades [1]. Energy consumption has never been so high, and predictions show a dramatic increase in energy demand due to economic growth and the expansion of populations [2]. There is a real need for alternatives to the dominant energy sources for a cleaner and more efficient development of our society. The management of renewable energy production through efficient electrochemical energy storage devices has become a priority because these energies are intermittent and need storage capacities to be massively included in large-scale power grids. To generate a new sustainable energy supply, battery properties must be improved in terms of capacity, safety and durability [3].
In recent years, Li-ion Batteries (LiBs) have been developed and successfully commercialized for portable devices such as smartphones or laptops and, more recently, for electric vehicles [4]. Those batteries are usually composed of a positive electrode containing metal oxides, a negative one made of graphite, an organic electrolyte and a separator. LiBs have high energy density and good cyclability compared to other commercial batteries. There are several generations of LiBs depending on the chemical composition of the electrode material [5]. Transition metals have been extensively used as cathode components in LiBs due to the reversible insertion/disinsertion of Li during the redox phenomena of the discharge/charge process, as demonstrated by Mizushima et al. in the early 1980s [6]. Recent developments have been devoted to finding the best metal ratio in order to maximize advantages over drawbacks depending on the final application. Batteries using layers of LiCoO2 were used first but, due to the cost and the toxicity of Co, other transition metals have been incorporated [7]. For example, LiNixMnyCozO2 batteries (NMC, with x + y + z = 1) have shown a high capacity (up to 160 mAh g⁻¹) with a high midpoint voltage (vs. Li) of 3.8 V and a good balance between stability, cyclability, cost of the material and performance. This made them one of the most successful LiBs on the market in 2016 (47 kilotons/year), just after the LiFePO4 (LFP) system (65 kilotons/year) and before the original LiCoO2 (LCO) system (39 kilotons/year). However, there are still some limitations to the use of NMC batteries, such as toxicity, the use of a non-abundant metal (Li) and the recycling processes [8]. Present recycling processes are generally based on pyrometallurgical and hydrometallurgical treatments [9].
Our team has recently obtained Metal-Organic Frameworks (MOFs) from a solution of nickel, manganese and cobalt species (ratio 1:1:1) that include all the metals inside the structure in the same proportion [10], or just Mn [11], depending on the nature of the precipitating agent. MOFs consist of inorganic nodes (clusters) bound together by organic linkers. They have gained a lot of popularity mostly due to their high surface areas, tunable structures and abundant active sites [12]. MOFs have been proposed for many applications such as gas storage, heterogeneous catalysis, or extraction and separation of metals or organics [13]. Recently, mainly due to their crystalline organisation, MOFs and their derivatives have also been used as electrode materials [14], precursors for electrode synthesis [15] or solid electrolytes [16]. Along these lines, we have recently proposed an iron-based MOF as a cheap conversion material for the negative electrode of lithium batteries. Electrochemical properties measured in coin cells revealed a high capacity of 550 mAh g⁻¹ with good cyclability over the charge/discharge cycles [17]. Electrode materials can also be prepared from MOFs after heat treatment under air to convert the material into an oxide [18]. Recently, MIL-88-Fe, of formula Fe3O(H2O)2Cl(BDC)3·nH2O (BDC for benzene dicarboxylate), was chosen as a template to prepare a porous α-Fe2O3. This material shows an interesting capacity of 911 mAh g⁻¹ and good stability after 50 cycles at a rate of 0.2 C [19]. Reduced transition metal materials, which can be obtained by thermal treatment of MOFs under an inert atmosphere [20], have also shown great potential for energy storage applications [21]. As an example, exfoliated transition metal carbides and carbonitrides (MXenes such as Ti2AlC or V2AlC) have been evaluated as Li+ intercalation electrodes in LiBs [22].
Here we propose to evaluate a MOF composed of Ni, Mn and Co coordinated to the BTC linker (BTC for trimesic acid), synthesized in our previous work, as an anode material for LiBs [10]. This material stabilizes the three metals within its structure at a 1:1:1 ratio. The corresponding oxide and reduced materials, obtained after a heat treatment under air or argon respectively, have also been synthesized. The three materials have been characterized, and electrochemical measurements in coin cells have revealed capacities of 690 and 190 mAh g⁻¹ after 50 cycles at 15 mA g⁻¹ for the MOF and the oxide, respectively. The corresponding reduced form has nevertheless shown no electrochemical capacity.
Experimental section
Synthesis
Reagents were purchased from Sigma-Aldrich and were used as received without further purification. TGA was measured on a Mettler-Toledo TG with an auto-sampler. A FEI Quanta 200 environmental scanning electron microscope, equipped with an Everhart-Thornley Detector (ETD) and a Backscattered Electron Detector (BSED), was used to record images with an acceleration voltage of 30 kV under high vacuum conditions. PXRD patterns were obtained with a Bruker D8 Advance diffractometer using a Cu anode delivering an X-ray beam of wavelength 1.542 Å. FTIR spectra were recorded with a Perkin Elmer 100 spectrometer between 380 and 4000 cm⁻¹ using an ATR crystal with 4 cm⁻¹ resolution. Ex situ XRD patterns were obtained by protecting the sample from air and moisture with a Kapton® polyimide film. More precisely, the cycled coin cells were dismounted in a glove box. Anode electrodes were then removed and washed with dimethyl carbonate (DMC) in order to remove all residual salts. The electrodes were then fixed on glass supports and protected from air with a Kapton® polyimide film. The MOF was synthesized following a reported protocol [10]. In summary, one equivalent of each metal precursor is solubilized in DMF. To this solution, three equivalents of trimesic acid were added, and the mixture was transferred to a Parr bomb. The reaction was performed under solvothermal conditions at 120 °C for 2 days in an oven to obtain a crystalline pink material. The mixture was then centrifuged, the supernatant was isolated and the precipitate was washed 3 times with DMF and 3 times with EtOH to eliminate any residual trace of DMF, and finally dried overnight in an oven at 80 °C. The corresponding oxide was obtained by a heat treatment under air at 1000 °C for 12 h, whereas the reduced form was obtained by a 1000 °C heat treatment under inert gas (argon) for 12 h.
Electrochemical characterizations
Coin cell assembly was carried out in an argon-filled MBraun glove box using metallic lithium foil as counter and reference electrode and Celgard® 2400 and Viledon® (Freudenberg) as separators, the latter soaked with a 1 M solution of LiPF6 in a 1:1 volume mixture of ethylene carbonate (EC) and dimethyl carbonate (DMC) (UBE), serving as electrolyte. Galvanostatic cycling was performed using an Arbin BT-2000. Composite electrodes for the electrochemical characterization were prepared by mixing Active Material (AM), Super P® (SP; Imerys) and poly(vinylidene difluoride) (PVdF, Solef® 5130; Solvay) in N-methyl-2-pyrrolidone (NMP; Sigma-Aldrich). The overall weight ratio was 40/40/20 (AM/SP/PVdF). The slurry was coated on copper foil, serving as a current collector, and after pre-drying, disk-shaped electrodes were punched, having an average active material mass loading of about 1 mg cm⁻². The electrodes were subsequently pressed at 10 tons for 10 s, and then all the electrodes were dried at 80 °C for 48 h under vacuum. Since lithium foil served as counter and reference electrode, all potential values given herein refer to the Li+/Li0 redox couple.
Characterizations
The reaction between the trimesic acid and the 3 metals (Co, Ni and Mn in the ratio 1:1:1) in DMF leads to a pink crystalline material after 2 days at 120 °C.
The material is then washed thoroughly and dried. The metal ratio was confirmed by ICP-MS after digestion of the MOF and by EDX analysis of the material [10]. Previous analyses suggest the incorporation of all the metals inside the structure of the MOF and a structure-directing role of cobalt in the morphology of the final product [10]. The IR spectra of this material and of the BTC ligand are shown in Figure S1 and are consistent with the coordination of the metals to the ligand, as reported by Gan et al. [23]. The characteristic bands of the protonated carboxyl groups of BTC (νOH = 3066 cm⁻¹, νC=O = 1690 cm⁻¹) are absent in the IR spectrum of the MOF, and new bands attributed to the vibration of COO⁻ are observed (1608, 1438 and 1369 cm⁻¹). Heat treatment under air or argon converted the MOF into its oxidized and pyrolysed derivatives, respectively. These materials were then characterized by PXRD, TGA and SEM analyses prior to being tested as electrode materials in coin cells.
The PXRD pattern of the MOF NiMnCo revealed a few sharp peaks, consistent with the formation of a material that combines the three metals in the structure [10]. The corresponding oxide also exhibits sharp peaks in its PXRD pattern between 15 and 60 degrees, typical of a crystalline oxide. This pattern corresponds to the formation of NiMnCoO4 with the space group Fd-3m (Fig. 1 top, left). Peak duplication can be observed in this pattern and is explained by the presence of impurities due to an excessive amount of cobalt in the host structure, as previously reported [24]. The TGA profile is consistent with the formation of this oxide material by thermal decomposition of the rehydrated MOF NiMnCo with the formula NiMnCo(BTC)2·6H2O (Fig. 1 top, right). The first two decomposition steps correspond to the evaporation of free and adsorbed water molecules up to 250 °C (20% weight loss). From 250 °C to 420 °C, the degradation of the organic framework occurs in two subsequent processes (45% weight loss), which leads to a residual and stable inorganic oxide compound (35% of the total weight). These results are in good accordance with the degradation of NiMnCo(BTC)2·6H2O to NiMnCoO4. The thermal treatment of the MOF under an inert atmosphere shows a similar behaviour, with the elimination of water up to 250 °C (30% weight loss) followed by the decomposition of the framework up to 550 °C (40% weight loss). Then a slow pyrolysis up to 1000 °C (5% weight loss) is observed, probably due to the carbonization of the organic ligand associated with an increase of the C/O ratio, or to the reduction of the oxide material to the reduced form under argon. Here, a higher amount of water is observed inside the material compared to the previous analysis under air, showing that the material is very hygroscopic. Starting from NiMnCo(BTC)2·9H2O, the MOF-derived material can be noted as CoNiMnCx (with x ranging from 10 to 6 from 550 °C to 1000 °C). PXRD analysis indicates that no trace of the NiMnCoO4 oxide appears in this material.
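As a rough consistency check of the ~35% residual mass, the expected mass fraction for the decomposition NiMnCo(BTC)2·6H2O → NiMnCoO4 can be estimated from standard atomic weights (a sketch with approximate molar masses):

```r
# Approximate molar masses (g/mol)
M_Ni <- 58.69; M_Mn <- 54.94; M_Co <- 58.93
M_BTC <- 207.1          # trimesate anion, C9H3O6(3-)
M_H2O <- 18.02; M_O <- 16.00

M_mof   <- M_Ni + M_Mn + M_Co + 2 * M_BTC + 6 * M_H2O  # ~694.9 g/mol
M_oxide <- M_Ni + M_Mn + M_Co + 4 * M_O                # ~236.6 g/mol

100 * M_oxide / M_mof   # ~34% residual mass, close to the reported ~35%
```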
The structure of the three materials was analysed by SEM, and the corresponding images are depicted in Figure 1-bottom. The SEM image of the MOF NiMnCo indicates the presence of highly crystalline rod-shaped structures with dimensions of around 10 µm (Fig. 1A). The corresponding oxide (Fig. 1B) exhibits small and regular particles of around 1 µm, suggesting that the material keeps some organisation during the thermal conversion to form NiMnCoO4. In contrast, the SEM image of the corresponding pyrolysed material does not show any specific shape (Fig. 1C); only very small and irregular particles are observed. Transition metal carbides are indeed known to become polydispersed during the carbonization process [19].
Electrochemical performance
The electrochemical performances of the three materials were evaluated in half coin cells. Galvanostatic charge-discharge curves of the electrodes at a current density of 15 mA g⁻¹ are shown in Figure 2-left. The first delithiation of the pyrolysed material corresponds to a rather low specific capacity (~250 mAh g⁻¹), which could be mainly attributed to the contribution of the carbon additive in this voltage window (the contribution of the Super P® carbon is 150 mAh g⁻¹ after 15 charge/discharge cycles; Fig. S3). On the contrary, the other materials show a high capacity of 800 mAh g⁻¹ during their delithiation.
The MOF NiMnCo has a first lithiation capacity of 1690 mAh g⁻¹ with an average voltage of 0.5 V. The main part of the capacity is observed between 1 V and 0.1 V with a large hysteresis. The specific capacity in delithiation is 860 mAh g⁻¹ with an average voltage of 1.2 V that increases almost linearly. A very similar trend is observed with the NiMnCo oxide material, even if the average voltage is slightly higher (0.7 V in lithiation and 1.4 V in delithiation). Figure 2-right represents the capacity of each material as a function of the cycle number. The electrochemical performance of the MOF NiMnCo is very stable, with 685 mAh g⁻¹ remaining after 50 cycles. However, the capacity of the NiMnCo oxide decays quickly from 780 mAh g⁻¹ to 185 mAh g⁻¹ after 50 cycles. The shape of the galvanostatic curves suggests that conversion mechanisms are involved. Moreover, as previously reported, the structure of NiMnCoO4 presents structural impurities due to the presence of Co ions in the host structure that affect lithium ion intercalation/de-intercalation [24].
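For context, a rough theoretical conversion capacity for NiMnCoO4 can be estimated (a sketch assuming complete reduction of all cations to the metallic state, i.e., eight electrons per formula unit; this assumption is ours, not the paper's):

```r
F_const <- 96485   # Faraday constant, C/mol
n_e     <- 8       # electrons per formula unit (assumed full conversion)
M_oxide <- 236.6   # g/mol, NiMnCoO4 (approximate)

# Theoretical specific capacity in mAh/g: n * F / (3.6 * M)
n_e * F_const / (3.6 * M_oxide)   # ~906 mAh/g
```

This estimate is of the same magnitude as the observed first-delithiation capacities (860 and 800 mAh g⁻¹), consistent with a conversion-type storage mechanism.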
Ex situ XRD was performed in order to follow the structural evolution during cycling (Fig. S2). Figure 3-left shows the XRD patterns of the MOF NiMnCo at different stages of cycling. Three different coin cells were assembled with MOF NiMnCo-based electrodes: one was stopped after the first lithiation, a second after a full cycle and the last after 50 cycles. To allow easier comparison between XRD patterns and to take the Kapton® film signal into account in the interpretation, a pristine electrode was also characterized. On the pristine electrode, the asterisks index the original diffraction peaks of the material (the strong peak at 43° is due to the copper current collector). It is important to note that the MOF NiMnCo could be identified even though the Kapton® film signal is quite intense in the 12°-25° area. Another interesting point is that the MOF structure was maintained during the cell manufacturing steps, especially after the calendering step at 7 tons cm⁻². After cycling, the cell was dismounted and the electrode was recovered, cleaned and characterized. No XRD peak corresponding to the initial material could be identified on any cycled electrode. It is clear that the MOF structure is not maintained at the end of the first lithiation and the material is not recovered at the end of the first cycle. A similar experiment was performed with the NiMnCo oxide (Fig. 3-right). The same conclusion could be drawn: the initial XRD peaks could not be found after the first discharge.
The ex situ XRD patterns are in good agreement with the hypothesis of a conversion mechanism for the two materials. In such a mechanism, lithiation would lead to the reduction of the active material to metallic nanoparticles.
However, even if both materials are reduced through the same phenomenon, the cycle life of the reduced metals generated by the MOF is clearly better than that of the oxide. In fact, because both materials have the same metal ratio and the same initial capacity (no organic ligand contribution), their electrochemical performances would be expected to be similar. However, this is not the case, and the most plausible interpretation is that the morphology of the reduced materials is different. We assume that the particle size of the reduced MOF at the end of the reduction is smaller than that of the oxide. In the case of the MOF NiMnCo, all metal clusters are separated by organic ligands, which can lead to very small and isolated metallic particles at the end of the reduction process [25]. Conversely, in the case of the oxide, all atoms are very close to each other due to the oxygen bonds. SEM pictures of electrodes (MOF and oxide) before and after 1 cycle confirm this hypothesis: MOF particles can no longer be observed at the end of the first cycle (Fig. S4). Due to the better dispersion into smaller particles obtained by the MOF reduction, the electrochemical performances, and notably the cycle life, appear to be improved.
Conclusion
A MOF and its oxidized and reduced derivatives have been developed and have demonstrated interesting electrochemical performances. This hybrid material integrates three metals (Ni, Mn and Co) in its structure with a ratio of 1:1:1. Thermal treatment under air led to the formation of a crystalline oxide, while under inert conditions a crystalline reduced material was obtained. Only the MOF and the oxide have shown significant electrochemical activity, but both materials exhibit a conversion mechanism during cycling and present high charge and discharge capacities. Generally, oxide materials are known to allow high capacity due to the conversion mechanism. However, our MOF has revealed a higher capacity and, more importantly, better stability than the oxide during the charge-discharge cycling experiments. We highlight that the well-dispersed and smaller metallic particles resulting from the reduction of the MOF metal clusters allow a better cycle life to be achieved.
Both materials have high capacities and prove that there is a real interest in conducting research on MOF materials for Li-ion applications. In the future, the real challenge for the use of MOFs in batteries will be to find a framework that can operate at a lower potential with capacities as high as those observed with the NiMnCo MOF or the NiMnCo oxide.
Fig. 1. (Top left) PXRD analyses of the MOF, oxide and reduced forms; (top right) TGA of the MOF under air and argon. (Bottom) SEM images of the MOF (A), oxide (B) and reduced MOF (C).
Fig. 2. First charge and discharge (left) and cycle life (right) of the three materials.
Interactions of the Algicidal Bacterium Kordia algicida with Diatoms: Regulated Protease Excretion for Specific Algal Lysis
Interactions of planktonic bacteria with primary producers such as diatoms have great impact on plankton population dynamics. Several studies have described the detrimental effect of certain bacteria on diatoms, but the biochemical nature of the active compounds and the mechanisms regulating their production have often remained elusive. Here, we investigated the interactions of the algicidal bacterium Kordia algicida with the marine diatoms Skeletonema costatum, Thalassiosira weissflogii, Phaeodactylum tricornutum, and Chaetoceros didymus. Algicidal activity was observed only towards the first three of the tested diatom species, while C. didymus proved not to be susceptible. The cell-free filtrate and the >30 kDa fraction of stationary K. algicida cultures are fully active, suggesting a secreted algicidal principle. The active supernatant from bacterial cultures exhibited high protease activity, and inhibition experiments proved that these enzymes are involved in the observed algicidal action of the bacteria. Protease-mediated interactions are not controlled by the presence of the alga but depend on the cell density of the K. algicida culture. We show that protease release is triggered by cell-free bacterial filtrates, suggesting a quorum sensing-dependent excretion mechanism of the algicidal protein. K. algicida/algae interactions in the plankton are thus host specific and under the control of previously unidentified factors.
Introduction
Diatoms (Bacillariophyceae) are very abundant unicellular microalgae in marine and freshwater ecosystems and are highly ecologically relevant because of their position at the bottom of the marine food web [1]. Different diatom species can occur in dense blooms and dominate the phytoplankton community during short or prolonged periods. Because of their ecological importance, understanding the factors that limit diatom growth and proliferation is crucial. These can include abiotic factors such as extreme light or temperature conditions or nutrient limitation [2]. But also biotic factors such as grazing by zooplankton [3,4], allelopathic effects of other phytoplankton species [5], or viral infections can have a negative impact on diatoms [6,7]. It is also documented that bacteria can even control bloom termination processes [8,9].
In terms of cell numbers marine bacteria are even more abundant than diatoms and by utilization of organic matter they also play a key role in plankton communities [10]. Interactions between phytoplankton and bacteria have gained increasing attention as the relevance of the microbial loop for plankton communities becomes more evident [11,12,13]. Bacteria can act synergistically with diatoms and symbiotic interactions have been reported from several systems [11,14,15]. But bacteria can also control algal populations e.g. by inhibiting growth of diatoms and other phytoplankton members or by active lysis of algal cells [16,17,18]. Bacterial inhibition of algal growth either requires direct cell contact [19] or can be mediated by excreted extracellular substances [18,20]. Inhibitory interactions between bacteria and phytoplankton are mostly investigated with the goal of finding a biological control for harmful algal blooms [21,22]. In contrast, only few ecological studies on the bloom termination of non-harmful plankton species exist [12,18]. Besides few exceptions the identity of the compounds or enzymes responsible for the algicidal effect remains unknown. Lee et al. [20] demonstrated that Pseudoalteromonas sp. produces a high molecular weight extracellular protease which is able to inhibit the growth of the diatom Skeletonema costatum. But lower molecular weight algicidal compounds, such as rhamnolipid biosurfactants from Pseudomonas aeruginosa or the pigment prodigiosin from the bacterium Hahella chejuensis have also been identified [23,24].
The regulation of the production of such inhibitory compounds is mostly unknown. An exception is the report on genes potentially involved in prodigiosin biosynthesis [25]. Generally, bacterial production of inhibitory substances can be regulated by external factors which might also be a relevant mechanism for planktonic species. Examples from the terrestrial environment include mechanisms where secretion of active metabolites occurs only in the presence of the host or where the release of active compounds is dependent on the cell density of the bacteria [26]. The latter process is known as quorum sensing (QS). QS is a process governed by small molecules such as acyl homoserine lactones or peptides that are excreted from bacteria. Reception of such metabolites allows bacteria to determine the local density of their population and to regulate gene expression. These changes in gene expression can result in a variety of physiological changes like the onset of bioluminescence, antibiotic synthesis or extracellular enzyme production [26].
In a screening of algicidal bacteria the aerobic, Gram-negative, non-motile Kordia algicida was isolated during a bloom of the cosmopolitan diatom Skeletonema costatum. The bacterium was able to kill S. costatum and also exhibited algicidal activity against other microalgae in co-culture experiments [27]. The genome sequencing of K. algicida is underway and interestingly, several genes coding for proteases have been identified and deposited in the databases. We decided to investigate K. algicida/diatom interactions in more detail. We reasoned that for any bacterium in the dilute matrix of the plankton, secretion of secondary metabolites or proteins that mediate lysis of diatoms is costly and thus we proposed the hypothesis that algicidal activity is controlled by biotic signals in the K. algicida/S. costatum system.
In this study we show that the algicidal bacterium K. algicida relies on diffusible enzymes >30 kDa to interfere with algal growth. We show that the activity is specific for certain diatoms, while others are not susceptible. Furthermore, we show that the excretion of active proteases is not regulated by the presence of a co-cultured diatom species but rather depends on the bacterial cell density in a process that bears the hallmarks of quorum sensing.
Algal and bacteria culturing
The Gram-negative marine bacterium Kordia algicida strain OT-1 was originally isolated from a Skeletonema costatum bloom [27] and was obtained from the NITE Biological Resource Center (NBRC 100336). Cultures were grown at 15 °C under constant shaking (90-100 rpm) in autoclaved ZoBell medium (5 g bacto peptone, 1 g yeast extract, 10 mg FePO4, 34 g of Instant Ocean in 1 L bidistilled water) [28]. Dense cultures were used to prepare glycerol stock cultures (20 vol. %). Before each set of experiments a new culture was started from the glycerol stock.
Non-axenic S. costatum (RCC75) and Thalassiosira weissflogii (RCC76) were obtained from the Roscoff Culture Collection, France. Phaeodactylum tricornutum (UTEX 646) was obtained from the Culture Collection of Algae in Austin, TX, USA. Chaetoceros didymus (CH5) was isolated by S. Poulet, Station Biologique, Roscoff, France and is maintained in our culture collection. The strains were cultivated under a 14/10 h light/dark cycle at 40-45 µmol photons s⁻¹ m⁻² and 15 °C in artificial seawater prepared according to Maier and Calenberg at a pH of 7.8 [29]. The nutrient concentrations were 620 µM nitrate, 14.5 µM phosphate and 320 µM silicate.
Estimating bacterial and algal growth
The optical density (OD) of K. algicida cultures was measured with a Specord M42 UV-vis spectrophotometer by Carl Zeiss (Jena, Germany) at a wavelength of 550 nm. Bacterial growth rate was estimated graphically by plotting measured OD values on a logarithmic scale. Time points showing a linear increase were used to perform an exponential regression with OD2 = OD1·e^(µt), where µ represents the bacterial growth rate and OD1 and OD2 represent the optical densities at time points 1 and 2, respectively.
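A minimal sketch of this estimation in R (the OD readings are illustrative, not measured data; the slope of the log-linear fit is the growth rate µ):

```r
# Hypothetical OD550 readings (time in h) from the linear region of a
# log-scale plot
t  <- c(29, 33, 37, 41, 45, 48)
od <- c(0.02, 0.035, 0.062, 0.11, 0.19, 0.29)

# ln(OD2) - ln(OD1) = mu * (t2 - t1), so regress log(OD) on time
fit <- lm(log(od) ~ t)
mu  <- unname(coef(fit)["t"])   # growth rate, per hour
mu                              # ~0.14 h^-1 for these illustrative values
```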
Algal growth was determined by measuring the in vivo chlorophyll a fluorescence using 300 µL of each culture in 96-well plates or 1.5 mL in 24-well plates. The fluorescence was measured with a Mithras LB 940 plate reader by Berthold Technologies (Bad Wildbad, Germany). Cell density was determined using a Fuchs-Rosenthal hematocytometer with an upright microscope (Leica DM 200, Leica, Germany).
Generation of cell free bacterial and co-culture filtrates
Exponentially growing K. algicida was inoculated into a 10:1 (vol. %) mixture of artificial seawater and ZoBell medium. After the culture reached an optical density (OD) > 0.32, one mL was diluted into 50 mL of seawater. After 24 h the cultures were gently filtered through a 0.22 µm sterile polyethersulfone (PES) filter (Carl Roth; Karlsruhe, Germany). To obtain a cell-free filtrate of S. costatum/K. algicida co-cultures, 1 mL of bacterial culture in 10:1 seawater:ZoBell medium (OD > 0.32) was inoculated with 50 mL of an exponentially growing S. costatum culture (ca. 1.5 × 10⁶ cells mL⁻¹) and grown for 24 h before filtration as described above.
Monitoring algicidal activity
We inoculated 1.125 mL of the filtrate (see above) with 375 µL of the respective exponentially growing diatom culture in the wells of 24-well plates. For controls, aliquots of 375 µL of the same starting cultures were diluted with 1.125 mL of artificial seawater. Plates were cultured under the previously mentioned conditions and measurements were performed at regular time intervals. The in vivo chlorophyll a fluorescence of all cultures was measured as an indicator of algal growth.
Size fractionation
Size fractionation experiments were performed with a filtrate of a co-culture of S. costatum and K. algicida as well as with filtrates of mono-cultures of these species (see above). A volume of 15 mL of the respective filtrates was fractionated using Amicon Ultra centrifugal filter units with a molecular weight cut-off of 30 kDa (Millipore, Billerica, MA, USA) as described in the manufacturer's instructions. The high molecular weight fraction was diluted to 1.5 mL with artificial seawater. The biological activity of the filtrates was monitored in 96-well plates by inoculating 240 µL of raw or fractionated filtrates with 60 µL of exponentially growing S. costatum.
Heat inactivation of filtrates
Active cell-free filtrates of S. costatum, K. algicida, and co-cultures were incubated at 80 °C for 10 min. The algicidal activity was monitored after inoculating 375 µL of S. costatum culture into 1.125 mL of untreated or heat-treated filtrate in 24-well plates.
Conditioning of active filtrates
Replicates each containing 1.125 mL of active filtrate were inoculated with 375 µL of S. costatum, C. didymus or seawater in 24-well plates and incubated under the previously described culturing conditions. After 24 h each treatment was filtered through a 0.22 µm PES filter and the replicates within one treatment were combined. Aliquots of 1.125 mL of the combined filtrates were used to incubate 375 µL of exponentially growing S. costatum in 24-well plates. Other aliquots of the cell-free filtrates were heat-deactivated as described previously and inoculated with S. costatum in the same way to serve as controls.
Protease inhibition experiment
Cell-free bacterial filtrates were harvested as described above, and the irreversible serine protease inhibitor phenylmethanesulphonyl fluoride (PMSF; Sigma, Munich, Germany) was tested for its ability to reduce algicidal activity against S. costatum. A working stock solution (1 M in isopropanol) was used to add a final concentration of 1 mM to active K. algicida filtrates and to artificial seawater, which served as a positive control. After incubation for 30 min in the dark at 15 °C, the filtrate was applied to S. costatum as described above and algal growth was monitored as in vivo chlorophyll a fluorescence.
Protease activity
The measurement of protease activity in bacterial filtrates was based on the conversion of BODIPY FL (E 6638) to a fluorescent product [30]. The dye was purchased from Invitrogen (Carlsbad, CA, USA) and the assay was performed following the manufacturer's instructions. Briefly, 10 µL of cell-free filtrate of K. algicida cultures were diluted in 100 µL digestion buffer (Invitrogen) and 100 µL of the dye at a concentration of 10 µg mL⁻¹ were added. After incubation at room temperature for 1 h with exclusion of light, the fluorescence was measured with a Mithras LB plate reader with an excitation filter of 470 ± 5 nm and an emission filter of 510 ± 20 nm. Linearity was ensured in independent calibrations.
Calculation of protease release rate
The protease release rate (PRR) was calculated according to PRR = Δ(protease fluorescence)/(OD(av) × Δt), where Δ(protease fluorescence) represents the difference between the measured fluorescence at two time points, OD(av) represents the average OD at these time points, and Δt the time in hours between these time points. PRR values not significantly different from 0 are not displayed in the figures.
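A minimal sketch of this calculation in R (the time series is illustrative and the column names are assumptions):

```r
# Hypothetical time series: hours, OD550, and protease fluorescence (a.u.)
d <- data.frame(t    = c(44, 48, 52),
                od   = c(0.20, 0.35, 0.50),
                fluo = c(120, 480, 900))

# PRR between consecutive time points:
# delta(fluorescence) / (mean OD over the interval * delta(t))
od_av <- (head(d$od, -1) + tail(d$od, -1)) / 2
prr   <- diff(d$fluo) / (od_av * diff(d$t))
prr   # release rate per OD unit per hour
```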
Induction of protease release by conditioned bacterial filtrates
K. algicida was inoculated into 100 mL of a 10:1 mixture of artificial seawater and ZoBell medium in three replicates. Growth and protease release rate were monitored regularly until the first significant protease release was measured. Afterwards the cultures were sterile filtered, the cell-free filtrate was pooled, and proteases as well as other high molecular weight constituents were removed using Amicon Ultra centrifugal filter units. A volume of 10 mL of filtrate was added to i) 10 mL of freshly inoculated K. algicida in the 10:1 mixture medium and ii) 10 mL of K. algicida cultures inoculated 16 h before the addition of the conditioned filtrate. Protease activity in these inoculations was monitored as described above.
Extraction of homoserine lactones
Extraction of acyl-homoserine lactones was attempted from the cell-free supernatant of dense bacterial cultures. The supernatant was extracted with CH2Cl2 according to an established protocol [31] and samples were run on a Perkin Elmer AutoSystem XL gas chromatograph (GC) equipped with a SPB-5 column (40 m, 0.32 mm internal diameter, 0.25 µm film thickness). He 5.0 was used as carrier gas with a constant pressure of 160 kPa. The GC was coupled with a Perkin Elmer TurboMass mass spectrometer (Waltham, MA, USA).
Statistical analysis
Tests for statistically significant differences at different time points over the course of an experiment were conducted using a two-way repeated measures analysis of variance (RM-ANOVA) with SigmaPlot 11. Post hoc tests of significance were performed using the Tukey method implemented in SigmaPlot 11. A Student's t-test was performed to exclude PRR values that were not significantly different from 0. The significance level was generally set to P < 0.05 for all analyses.
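For readers working in R rather than SigmaPlot, a hypothetical equivalent of this analysis might look as follows (data frame and column names are illustrative assumptions, not the study's actual data):

```r
# Two-way repeated-measures ANOVA: treatment x time, with culture wells
# as the repeated-measures unit (treatment, time and well as factors)
fit <- aov(fluorescence ~ treatment * time + Error(well / time), data = d)
summary(fit)

# Tukey post hoc comparison of treatments at a single time point
TukeyHSD(aov(fluorescence ~ treatment, data = subset(d, time == "39h")))
```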
Effect of K. algicida and cell free filtrates on different diatom species
In an initial experiment we prepared a co-culture of S. costatum and K. algicida and monitored the cell growth of S. costatum. We observed a significant reduction of the diatom's cell density after 7 h (P = 0.04). After 25 h the cell density of the co-cultured diatoms was only 12.1% of the corresponding control (data not shown). All further experiments were performed with cultures of this active K. algicida. We tested the effect of a cell-free filtrate of K. algicida on the diatoms S. costatum, C. didymus, P. tricornutum and T. weissflogii over a period of 64 h. Fig. 1 shows the in vivo chlorophyll a readings 39 h after inoculation. The cell growth of S. costatum, P. tricornutum and T. weissflogii was significantly inhibited (P < 0.001 for all species). A two-way repeated measures ANOVA revealed significant differences between treatment and the corresponding controls for all data points recorded 24 h after inoculation or later (P < 0.001 for all species, data not shown). At the end of the experiment (t = 64 h) the in vivo chlorophyll a fluorescence in the treatments was only 8.7%, 8.7% and 19.4% of the respective controls for S. costatum, P. tricornutum, and T. weissflogii, respectively (data not shown). In contrast, C. didymus growth was not affected by K. algicida filtrate (P ≥ 0.553 at all sampling points; RM-ANOVA). At the end of the experiment the in vivo chlorophyll a signal in the treatment was 99.7% of the corresponding control.
No regulation of algicidal activity in the presence of the host
The filtrate of active K. algicida cultures as well as the filtrate of K. algicida/S. costatum co-cultures both caused a significant decrease in the cell growth of S. costatum in comparison to S. costatum filtrate as control (P < 0.001 for both) (Fig. 2). No up-regulation of algicidal activity was observed in the presence of S. costatum, since the effects of the filtrates of K. algicida and of K. algicida/S. costatum co-cultures did not differ significantly (P = 0.821).
Size fractionation of the bioactive filtrate
The filtrates of K. algicida and of co-cultures of K. algicida and S. costatum containing only compounds with a molecular weight below 30 kDa had no inhibitory effect compared to the corresponding control using a <30 kDa fraction of medium from a S. costatum culture (P = 1 for both) (Fig. 2). None of these treatments was significantly different from a treatment with an unfractionated filtrate of a S. costatum culture (P ≥ 0.899 in all cases). In contrast, treatments with the high molecular weight fraction >30 kDa of the K. algicida and co-culture filtrates resulted in a significant inhibition of algal growth in comparison to the control (P < 0.001 for both). The inhibition caused by the high molecular weight filtrate of the K. algicida culture was not significantly different from the inhibition by the high molecular weight filtrate of the co-culture (P = 0.832). The effects of the high molecular weight filtrates of both K. algicida cultures and co-cultures were not significantly different from the effects of the corresponding unfractionated filtrates (P > 0.321 for all comparisons) (Fig. 2).
Heat deactivated filtrates
The filtrates of K. algicida and of co-cultures of K. algicida and S. costatum significantly inhibited the growth of S. costatum, while aliquots of the same filtrates that were heated at 80 °C for 10 min prior to the assay had no negative effect at any monitored time point (data not shown) (P < 0.001 for a comparison of the effect of filtrates versus heat-treated filtrates from t = 38 h onwards). Filtrates of K. algicida and of the co-cultures again showed no significant difference in their activity over the complete time course of this experiment (P > 0.982).
Protease as the inhibiting enzyme
Aiming to identify the inhibiting activity of K. algicida, we performed experiments adding commercially available protease from Streptomyces griseus to S. costatum and C. didymus in a concentration range of 1.7 U mg⁻¹ to 0.2 U mg⁻¹. While C. didymus was not affected by any of these protease additions, S. costatum growth was inhibited by the external proteases (data not shown). Further evidence for the involvement of proteases in the interaction was gained by protease inhibition experiments. The addition of the serine protease inhibitor phenylmethanesulphonyl fluoride (PMSF) significantly decreased the inhibition of algal growth by K. algicida medium in comparison to controls without PMSF (P < 0.038). However, the protease inhibitor did not fully re-establish algal growth and resulted in significantly less in vivo chlorophyll a fluorescence compared to a seawater control (Fig. 3) (P < 0.001).
Test for detoxification of the K. algicida activity by C. didymus
An active filtrate of K. algicida was incubated for 24 h with a C. didymus culture to test whether C. didymus could deactivate the algicidal activity. As controls, aliquots of the same K. algicida filtrate were used without further treatment or after incubation with a S. costatum culture. After a second filtration to remove the diatoms, the respective filtrates were used in incubations with S. costatum. Neither incubation of the active filtrate with C. didymus nor with S. costatum resulted in decreased activity compared to the control (Fig. 4) (P ≥ 0.956 and P ≥ 0.585, respectively, over the entire time of the experiment). To test if this effect was due to a general loss of activity, all three filtrates were heat-inactivated, resulting in significantly reduced activity in all cases (P < 0.005 for all comparisons from 54 h onwards).
Protease release by K. algicida
Exponential bacterial growth started after a lag period of 29 h. During this period there was no detectable protease activity in the K. algicida medium (Fig. 5A). After 29 h the culture started to grow exponentially and reached a growth rate of µ = 0.142 ± 0.004 h⁻¹. Exponential growth lasted from t = 29 h until t = 48 h. At the beginning of the exponential growth phase there was no protease release. A significant release of proteases started after 44 h, in the late exponential phase. This release proceeded for 18 h and stopped after 62 h. In later stationary growth we could not observe any protease release. To exclude an underestimation of the protease release due to potential instability of the enzyme, we verified the protease stability in seawater over a period of 9 h. After this time period no detectable decrease of the protease activity could be observed (P = 0.866 in Student's t-test, data not shown).
Induction of protease release
In order to test whether chemical communication regulates bacterial activity, as known from quorum sensing, we examined the effect of cell-free bacterial filtrate on the excretion of protease from freshly inoculated K. algicida cultures and from cultures that had been incubated for 16 h (Fig. 5B & C). The addition of K. algicida-conditioned cell-free filtrate to freshly inoculated bacterial cultures accelerated the protease release. These cultures already exhibited a significant protease release rate after 14 hours of cultivation (Fig. 5B), which was approximately 5 times higher than the release rate observed under standard growth conditions (Fig. 5A). Under standard cultivation conditions an optical density of >0.1 was needed before protease release occurred. In contrast, cultures to which conditioned cell-free medium was applied started to excrete significant amounts of protease already at an optical density <0.01 (Fig. 5B). This protease release stopped after 26 h and resumed after 32 h, when an optical density of >0.05 was reached.
When the same cell-free filtrate was added to K. algicida cultures 16 h after inoculation, we observed a significant release of bacterial proteases already after 8 h, confirming an induction of enzyme release by a bacterial cell-free filtrate (Fig. 5C).
Discussion
The marine bacterium K. algicida has a strong algicidal effect on the diatom S. costatum. In a direct contact situation, a significant inhibition of diatom proliferation can be observed after 7 h if a dense bacterial culture is employed for incubation. This is consistent with previous findings that S. costatum cells were killed quantitatively after 3 days in a co-culture with K. algicida [27]. The negative effect of the bacterium is not exclusively transmitted through contact with the diatom but can also be mediated via diffusible compounds. This is clearly demonstrated by the fact that the activity of the K. algicida medium remains after removal of the cells by sterile filtration. Inhibition of growth relative to a control is observable within the first 24 hours of incubation, indicating rapid action of the algicidal compounds. Compared to reports from other systems, where a >24 h delay of effects on the algal cells was observed after algae were treated with algicidal bacteria, both the direct interaction and the action of the filtrate reported here are quick [8,32]. Diffusible substances mediating algicidal activity have been previously observed from bacteria and can include both small molecular weight metabolites and proteins [33,34]. The use of dissolved substances to inhibit the growth of algae is common in bacteria belonging to the phylum of γ-proteobacteria, which includes the genera Alteromonas [8], Pseudoalteromonas [22,35] and Vibrio [36]. However, K. algicida belongs to the Cytophaga-Flavobacterium-Bacteroides (CFB) phylum. Genera within this group usually require direct cell contact to kill their prey [16,35], although exceptions have been reported [18]. K. algicida is thus a rare example of a CFB bacterium that does not require cell contact with its prey to inhibit algal growth, but instead releases diffusible active enzymes.
The release of active substances by K. algicida allowed us to further explore the nature of the active principles. A first survey revealed that K. algicida filtrate is also active against other diatom species (Fig. 1). The activity against the pennate diatom P. tricornutum, as well as that against the centric diatom T. weissflogii, was comparable to that observed against S. costatum. In contrast, another centric diatom, C. didymus, was not susceptible to the diffusible factors released by K. algicida. This lack of susceptibility is apparently not due to an active detoxification by C. didymus, since medium from a C. didymus/K. algicida co-culture is still active against S. costatum (Fig. 4). The physiological properties that mediate C. didymus resistance cannot, however, be concluded from our experiments. Selectivity of algicidal activity is important for understanding ecological interactions within the planktonic community. Additionally, proposals to apply bacteria in order to control red tides should seriously consider the selectivity of algicidal activity [35,37]. Different levels of specificity have been observed in algicidal bacteria: selective activity against one algal species and universal activity against all tested species in a given taxon have been reported, as well as all intermediate forms of specificity such as the one shown here [16]. From an ecological perspective it is obvious that resistance mechanisms of algae have the potential to provide selective advantages. When other diatom species that are potential competitors for resources are inhibited, the unsusceptible alga can proliferate. Thereby the bacteria can directly influence plankton species successions.
Basic characterization of the released algicide showed that it bears all the hallmarks of an enzyme: it has a molecular weight >30 kDa (Fig. 2) and its activity can be abolished by heat treatment. A survey of the literature suggests that dissolved proteases are prime candidates for algicidal enzymes. Lee et al. were the first to demonstrate the activity of proteases in the interaction of the bacterium Pseudoalteromonas sp. and the diatom Skeletonema costatum [20]. After indirect evidence from bioassays, they isolated a 50-kDa serine protease with algicidal activity. Several subsequent studies supported the role of enzymes from algicidal bacteria in the lysis of algae [17]. Using fluorescence-based assays, we were able to show that active medium from K. algicida and from K. algicida/S. costatum co-cultures exhibited substantial protease activity. Indeed, S. costatum was susceptible to protease treatment: when the protease from the bacterium Streptomyces griseus was applied, diatom growth was inhibited compared to a control. In agreement, application of the protease inhibitor PMSF to active K. algicida medium resulted in significantly higher growth of S. costatum compared to uninhibited controls. The growth of S. costatum was, however, not fully restored after the application of the protease inhibitor. Similarly, PMSF did not fully neutralize the motility reduction of the dinoflagellate Lingulodinium polyedrum caused by bacterial proteases [38]. The inhibitor experiment nevertheless demonstrates the involvement of a protease in the interaction, although additional activities might contribute to the observed effects. Alternatively, the algicidal protease might not be very sensitive to the inhibitor PMSF, and the applied concentration might not have been sufficient for quantitative inhibition.
It has been argued that the release of a freely diffusible algicide is unlikely to be energetically efficient for killing algal cells suspended in seawater [16]. Since the ratio of the volume of bacterial cells to the volume of seawater they inhabit is ca. 10⁻⁷ in an average dilute situation in the plankton [39], an uncontrolled release of any active principle would most likely not reach concentrations sufficient for algicidal activity, or would do so only at high cost. However, a release of active metabolites could provide a selective advantage if it is under the control of a metabolic switch that is triggered only under environmental conditions where the production of algicides is beneficial. We tested the hypothesis that algicidal activity is only induced in the presence of susceptible algae or in the presence of signals from these algae. No evidence was found for such an induced mechanism, since algicidal activity did not increase in the presence of diatoms (Fig. 2).
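As a rough illustration of this dilution argument, consider spreading an intracellular enzyme stock over the inhabited water volume. The 10⁻⁷ volume ratio is the figure cited above; the intracellular concentration is a hypothetical round number, not a measured value.

```python
# Back-of-the-envelope dilution estimate behind the 1e-7 volume-ratio argument.
# All values except the quoted volume ratio are hypothetical round numbers.
volume_ratio = 1e-7        # bacterial cell volume : inhabited seawater volume [39]
intracellular_conc = 1e-3  # hypothetical protease concentration in the cell (mol/L)

# If the entire intracellular stock were released and diluted uniformly:
bulk_conc = intracellular_conc * volume_ratio
print(f"bulk concentration after uncontrolled release: {bulk_conc:.1e} mol/L")
# ~1e-10 mol/L: an uncontrolled release is strongly diluted, which illustrates
# why release may only pay off for dense, coordinated bacterial assemblages.
```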
Another possibility to increase the success of released active compounds would be a metabolic switch dependent on the density of the bacterial population. Based on the finding that the algicidal activity observed in our study was caused by a protease, we monitored protease activity as a function of K. algicida culture density. We indeed observed a synchronized release of a protease, which could be explained by a quorum sensing-like mechanism in K. algicida (Fig. 5) [40]. We found support for such a phenomenon using experiments with conditioned cell-free supernatant of K. algicida: after adding such filtrates to freshly inoculated K. algicida cultures, the protease release was remarkably accelerated (Fig. 5). These results fit known quorum-sensing-dependent secretion mechanisms of other bacterial species, such as the human pathogen Pseudomonas aeruginosa, where the excretion of exoenzymes that determine virulence is controlled by bacterial density [41,42]. Quorum sensing in Gram-negative bacteria is often mediated by acyl homoserine lactones (AHL), as can be observed, for example, in P. aeruginosa pathogenicity [41,42]. We were, however, not able to detect any AHL in dichloromethane extracts of protease-releasing cultures using sensitive GC/MS methods. Several other QS molecules that have been previously described for Gram-negative bacteria can be considered as alternative candidates, and further tests will have to be performed in the search for the regulative principle in plankton assemblages [43,44]. Gram-negative bacteria found in all kinds of habitats often rely on quorum sensing signals to trigger metabolic events. In planktonic bacteria the alternative induction pathway (AI-2) for quorum sensing-type regulation has been detected, although it could not be directly linked to algicidal activity [18]. Evidence also exists for the QS regulation of the production of the algicidal pigment PG-L-1 in a marine γ-proteobacterium [45]. While these studies give rather indirect evidence, we can show here clearly that release rates of active principles are regulated. Comparable regulative mechanisms have also been suggested in a study of algicidal Pseudoalteromonas sp.: Mitsutani et al. [17] showed in gel electrophoretic experiments that the production of several enzymes was only observed during the stationary phase and that the bacteria only exhibited algicidal activity during this phase.
In the plankton, such a density-dependent release of proteases might provide an advantage if a sufficient bacterial density is required for efficient lysis of algae. Diffusible substances aiding algal lysis might provide a benefit for locally dense bacterial assemblages. Bacteria could jointly overcome the defense systems of the alga in cases where active principles from single bacteria would not be effective. Lysis of algal cells could increase available nutrient concentrations in the vicinity of the bacterial assemblage, and such a control could be an efficient means for a concerted mobilization of resources.
Our results on the specificity of the algicidal activity as well as on the density-dependent regulation of the release of an active protease by an algicidal bacterium support the view that a multitude of chemical signals can regulate plankton interactions on all levels.
‘Grandmas’ in debate: A first-person story told in Taiwan’s presidential debate as a rhetorical device and public reactions to its credibility
This study examines data from a 2016 presidential debate in Taiwan to explore the use of first-person narrative in political discourse as a rhetorical device, and how public reactions to its credibility are influenced by the narrative's context. While previous studies of political debate discourse (e.g. Kuo 2001) investigate, for example, the use of "constructed dialogue" (Tannen 2007), there is a lack of studies focusing on first-person narrative in political debates. Using three-level positioning as outlined by Bamberg (1997), I analyze a narrative featuring a grandma character told by presidential candidate Eric Chu, also comparing it to another candidate James Soong's "grandma narrative." I argue that the context places constraints on the effects of their narratives. Whereas Chu's narrative, a traditional Labovian first-person story, is widely ridiculed with memes for its lack of credibility, Soong's narrative, a habitual narrative, receives little attention. The analysis shows how Chu's narrative serves his rhetorical purposes and suggests why the public doubts its credibility. At level 1 (characters positioned vis-à-vis one another), Chu presents himself as non-agentive with constructed dialogue, thereby excusing an earlier decision he made: failing to keep his promise to finish his term as mayor. At level 2 (speaker positioned to audience), he switches from Mandarin to Taiwanese, a local dialect, which can be seen as an appeal to his current audience. At level 3 (identity claims locally instantiated), the grandma character draws on the archetype of elderly women in Taiwanese culture, fundamental to national economic growth, while his description of praying at a temple casts him against the local tradition of religious practices in Taiwan. The study helps fill the knowledge gap regarding first-person narrative in political discourse, while highlighting the context in which political narratives are embedded and contributing to understanding positioning in Taiwanese public discourse.
Introduction.
Narrative is a powerful linguistic device that communicates meaning, and it serves a multitude of functions in human interaction, ranging from identity construction (e.g., Schiffrin 1996) to argumentation (e.g., Carranza 1999). It is, therefore, not uncommon to see narratives in political speech, even in political debates, where each politician's time to speak is strictly allotted. Few studies on political discourse, however, have closely examined narratives in political debates. Past research on political discourse has, for the most part, taken a critical perspective, or Critical Discourse Analysis (CDA), that focuses on issues such as ideology, power relations, and dominance (e.g., Yang 2013). Scholars have also taken various approaches to political discourse, including politicians' uses of direct quotation (e.g., Kuo 2001) and pronouns (e.g., Kuo 2002), and how they talk about their families (e.g., Sclafani 2015). The present paper adds to the works on political discourse that adopt narrative analysis (e.g., Duranti 2006, Shenhav 2005a) by investigating the use of narratives in the first of the two 2016 presidential debates in Taiwan and how aspects of the context influenced public perception of the two narratives. I argue that narrative is a discourse strategy employed by the candidates in an attempt to enhance what Tannen (2007) calls "conversational involvement" with the public viewers, to increase the credibility of their claims, and to justify their positions during political campaigns. Also, I underscore how the context in which narratives are situated can undermine their effects. To substantiate, I borrow the theoretical construct of three levels of positioning proposed by Bamberg (1997) to delineate how the two presidential candidates present themselves as relatable and, at the same time, establish what Duranti (2006) terms "existential coherence." Specifically, I examine positioning in the context of the narrative's structural parts (as outlined by Labov and Waletzky 1967) and by drawing on Tannen's (2007) theorizing on involvement in discourse. While Labovian structural elements and involvement features are present, the narrative is not viewed as persuasive or credible, and my positioning analysis demonstrates how and suggests why this might be.
Two of the three 2016 presidential candidates in Taiwan, Eric Chu and James Soong, incorporated narratives into their speeches during the first debate on December 27, 2015; while both narratives featured a grandma character, only one of them later became the subject of numerous parodies. The two grandmas are both the center of the narratives, but public perceptions vary to a great extent due primarily to the contexts in which the narratives are situated. Although narrative, as a discourse strategy, is supposedly a powerful rhetorical device employed to increase authenticity, in Chu's case, not only did it not work to his advantage in vindicating his action of running for presidency but his narrative was publicly mocked. This paper addresses this issue by focusing on the context where the two narratives are embedded, including the immediate context of the debate and the larger social context in Taiwan. In exploring positioning and its relationship to situational and sociocultural context, this study contributes to the research of narrative analysis in political debates. It also highlights linguistic strategies that help accomplish positioning in narrative, among them constructed dialogue, repetition, and code-switching. In what follows, I first review previous studies of discourse analysis in political speech and suggest how narrative analysis can add to the existing literature. Then I provide some background for the data before presenting my analysis of the two candidates' positioning at different levels in their narratives. Finally, I conclude with the implications and significance of bringing narrative analysis into research of political discourse.
POLITICAL DISCOURSE ANALYSIS
In defining political discourse, van Dijk (1997) outlines some ways of delimiting the object of study. First and foremost, many studies identify political discourse by its political actors or authors, that is, politicians. Hodges' (2011) work, for example, analyzes presidential speeches by George W. Bush in the aftermath of 9/11 to illustrate how the Bush administration creates a "grand narrative" (a dominant discourse about certain social issues through which they are framed and interpreted) of the war on terror propagated as America's response to terrorism. Second, viewing politics as a series of communicative events, van Dijk considers the recipients, such as the public, to be a focus of political discourse analysis. Third, political discourse analysis studies text and talk in certain activities and practices. Political practices carry political functions and implications, such as legislating, debating, and protesting. In this sense, social movements are included as part of the political discourse. Last but not least, van Dijk sees context as a deciding element for defining what sort of discourse is political in that "politicians talk politically also (or only) if they and their talk are contextualized in such communicative events such as cabinet meetings, parliamentary sessions, election campaigns," and so on (p.14). In this paper, I show that my data meet these four fundamental criteria for being considered political discourse; I emphasize especially the context of these narratives in that they are told by the presidential candidates to accomplish certain political aims.
APPROACHES TO POLITICAL DISCOURSE
Research on political discourse comes largely from the framework of Critical Discourse Analysis (CDA). CDA primarily deals with social problems, which include political issues such as dominance, discrimination, and inequality (e.g., Miller & Fox 1995, Gamson 1992). Theorists whose studies follow the main principles of CDA maintain that power relations are discursive, that is, social problems like power abuse are represented in discourse (see Fairclough & Wodak 1997). Critical discourse analysts aim to explain, for example, "social inequality as it is expressed, signalled, constituted, legitimized and so on by language use (or in discourse)" (Wodak 2001:2). Within CDA, political discourse is also described as "fundamentally argumentative" and it "primarily involves practice argumentation" (Fairclough and Fairclough, 2012:1). Studies adopting a CDA model include Yang's (2013) analysis of Taiwan's 2010 national debate on the Economic Cooperation Framework Agreement (ECFA). Building on Oddo's (2011) study on discourse representation of "Us versus Them" in U.S. presidential addresses, Yang examines how then-President Ying-jeou Ma creates dominance over his opponent, then-Chairperson Ing-wen Tsai (DPP), by positively evaluating Us while negatively describing Them in the discourse of the debate.
In addition to CDA, scholars have drawn on multiple other approaches to investigate various aspects of political discourse. Sclafani (2015), for example, borrows the concept of framing, or a definition of the situation (Goffman 1974), and demonstrates how Republican candidates in a U.S. presidential primary present their family identities at home (e.g., as husband or father) and use these family references to frame their political identities by blending together the audience's understanding of the family and the state, family safety and national security. Kuo (2001) investigates videotaped data from two Taipei mayoral debates in 1998, examining the debaters' use of reported speech. She builds on Tannen's ([1989] 2007) idea that reported speech is in fact "constructed dialogue," in which speakers situate words in a new context and therefore create new meanings. Kuo, in this vein, argues that the debaters incorporate direct quotations in their speeches to create involvement with the audience in the sense-making process, dramatize key elements in narrative for rhetorical purposes, and bring in the voice of authoritative and nonpartisan figures for self-promotion. Kuo (2002) analyzes the same data set to illuminate the discourse functions of the second-person singular pronoun nǐ in Mandarin Chinese. She identifies a change in how the debaters use nǐ, from referring to the audience/voters to create solidarity in the first debate to addressing the opponents to express antagonism in the second debate, therefore suggesting a shift of interactive goal between the two debates.
NARRATIVE ANALYSIS IN POLITICAL DISCOURSE
Discussing narrative's relevance to political theory, Bottici (2010) puts it that "narratives are ways to connect events in a nonrandom way, and therefore they are a powerful means to provide meaning to the political world we live in" (p.920). In this sense, narrative serves as a way in which we observe the process of world making and sense making in politics and the construction of our political identities. As she notes, scholarly attention to narrative in the study of politics and political theory is rather recent (e.g., Duranti 2006, Shenhav 2005a, Schubert 2010, Souto-Manning 2014). Shenhav (2005a) proposes a new framework for defining narrative in political discourse analysis by following the structuralist tradition, that is, the scholarship that sees narrative as consisting of structural units (e.g., Labov & Waletzky 1967), and diverging from other approaches such as narrative dimensions, or the extent to which a narrative is "tellable," "embedded," "linear," and so on (e.g., Ochs & Capps 2001). He suggests two levels on which narrative analysis operates: the "thin" level analysis that looks into the components of narrative, such as the organization of events, and the "thick" level analysis that includes contextual viewpoints relating to the storytelling process (p.87). Extending Linde's (1993) concept of creating coherence in life stories, Duranti (2006) argues that politicians have the awareness that their emotional stance is articulated through talk and aim to project a coherent image of themselves, or what he terms "existential coherence," as people "whose past, present, and future actions, beliefs, and evaluations follow some clear basic principles, none of which contradicts another" (p.469). In his study, he investigates former Representative Walter Capps' construction of a "political self" in his campaign from 1995 to 1996. Given my interest in how the candidates in Taiwan's 2016 presidential election present themselves in the debate, I focus on the narratives they told, and use positioning theory to do so.
THREE LEVELS OF POSITIONING
Positioning theorists hold that an individual's identity is constructed through discursive practices in social interaction. Davies and Harré (1990) define positioning as "the discursive process whereby selves are located in conversations as observably and subjectively coherent participants in jointly produced storylines" (p.37). This take on self-presentation highlights not only the relationships between the speaker and the talk but also the relationships between speaker and audience. Bamberg (1997) elaborates the concept of positioning and puts forward a three-level model of positioning in narrative, mapping out the dimensions of self-presentation in storytelling activity. Level 1 pertains to how the story characters are positioned in relation to one another; level 2 stresses the interaction in the storytelling world, that is, how the storyteller positions himself or herself to the audience; level 3 addresses self and identity, or how the narrators "position themselves to themselves" (p.337). De Fina (2013) illustrates that Bamberg's three-level positioning bridges interactions on a micro level and dominant discourses on a macro level. Analyzing a narrative about an ethnic conflict that reveals language ideology in the life of an immigrant, she demonstrates that level 3 positioning "allows for an analysis of connections between local identity claims and negotiations and macro-level social processes" (p.47).
Data and method
I analyze narratives drawn from the first of Taiwan's two presidential debates leading up to the January 16, 2016 election. The debate was 155 minutes in length, and was televised and live-streamed on YouTube by Taiwan Public Television. The debate was in the format of four stages: opening statements, questions by journalists, cross-examination, and closing statements. Three presidential candidates, Eric Chu (Li-luan Chu), James Soong (Chu-yu Soong), and Ing-wen Tsai participated in the debate, each representing a political party. Tsai, the chairperson of the Democratic Progressive Party (DPP), won the election. The present analysis is based on the narratives told by Chu and Soong; Chu's was told during cross-examination while Soong's was part of his own closing statement. Segments of their speeches were transcribed in Mandarin Chinese, and then translated into English for analysis. Chu's first-person narrative is the focus of my analysis; Soong's habitual narrative is used as a point of contrast and comparison.
Before Chu announced his decision to run for presidency, he had been the chairperson of Kuomintang (KMT) and, at the same time, the mayor of New Taipei City since 2014, while another candidate, Hsiu-chu Hung, was the candidate officially nominated by the party on June 14, 2015. Three months later, at KMT's congress convention on October 17, 2015, Hung's candidacy was revoked by the delegates' vote and Chu was selected to replace Hung as the presidential candidate. Chu's decision to run not only contradicted his frequent promise in mass-media interviews, made on 24 different occasions in total, that he would commit to serving his second term as mayor, that he would "do the job well, and serve the full term" (做好做滿, zuò hǎo zuò mǎn), but also added to his record of leaving one position for another, including resigning as a legislator to run in the Taoyuan County Magistrate election in 2001, and resigning from his second term of magistracy after being appointed Vice Premier in 2009. The replacement of Hung with Chu was also controversial.
Soong, the chairperson of the People First Party (PFP), has a long history of political engagement in Taiwan, including running for presidency. In 1994, Soong, a long-time member of KMT, was elected as the Governor of Taiwan Province, a position that was later eliminated in 1998 following the decision made by the National Development Council to resolve a contradiction in administrative territory. After losing the KMT presidential nomination, he decided to run independently in the 2000 presidential election. After losing the election by a close margin to DPP candidate Shui-bian Chen, Soong left KMT and founded PFP, and in 2004, partnered with the KMT candidate Chan Lien to run in the presidential election. In 2006, he ran in the Taipei City mayoral election, garnering only 4% of the cast ballots. In 2012, he registered his candidacy for presidency and was defeated by the KMT candidate Ying-jeou Ma. In 2016, he again ran in the presidential election as the PFP candidate and finished third of all three candidates.
Standing out in both Chu's and Soong's narratives are the grandma characters, which provides a unique opportunity for comparison, especially given the different reactions to the stories. In Taiwanese, a Chinese language widely spoken in Taiwan, "Ama" not only means one's grandma but can be used to refer to any old lady as an intimate term. It denotes a tender, caring, and family-oriented personality, a female figure in the household and, at the same time, an economically substantial role because, in the late 90s after World War II, women of that generation contributed to the economic growth in Taiwan by being in the manufacturing industry and other labor-intensive workplaces. Nowadays, their image is also tied to traditions and authenticity, which is often used as a marketing strategy in advertising. We can say that the candidates alluded to some of these characteristics of Ama in their narratives.
CHU'S NARRATIVE
Eric Chu, the KMT candidate, provided the following narrative in the second round of cross-examination, when his publicly criticized motive and action of leaving his current position as the mayor of New Taipei City to run in the presidential election was questioned by the DPP candidate Ing-wen Tsai. The confrontation at this stage characterizes the debate as one of the "essential instances of persuasive attack and defense" (Benoit & Wells 1996). The narrative was told by Chu with a clear political aim in a political context: to answer his opponent's question, deny the accusation, and reestablish his credibility. However, the narrative was not perceived as credible by audiences (as later mocking of it showed); in what follows I identify how examining the three levels of positioning in Chu's narrative illustrates the rhetorical nature of this first-person narrative in the debate and Chu's attempt to counter the accusation he faced and to justify his decision to join the presidential election. However, because of the argumentative nature of the immediate context of the debate and the public scrutiny to which Chu's reputation was subject in the larger context of society, his narrative was perceived as lacking in credibility after the debate. Knowing Chu's past record, Tsai's debate tactic focused on pointing out the lack of credibility in his character by drawing the audience/voters' attention to both his attitude change from initial denial to eventual tenacity when it came to joining the election, and the way he replaced the original presidential candidate by allegedly undemocratic means. In response to her question on his inability to keep his previous promise, namely, an attack on his credibility as a politician, either as the present mayor or as a presidential candidate, Chu first acknowledged that he was caught in a dilemma. On the one hand, he recognized his responsibility and the promise he had made to finish his mayoral term, and on the other, he reckoned that had he not chosen to run in rivalry with Tsai, the result would be that "DPP takes power and leads Taiwan into a very horrible state" (lines 2-3). Twice he responded with the questions "what decision should I make?" (line 6) and "what should we do?" (line 8), paving the way for his narrative in which he justifies his course of action on the presupposition that Taiwan would be in tribulations if the opposing party DPP won the election. In this way, he implied that, judging from the circumstances, running for presidency would be the wise thing to do for the greater good. The narrative is a strategy he employed to vindicate his choice and to restore the "existential coherence" of his political self. In the narrative, he referred to an epiphanic encounter with an Ama that prompted him to make this final decision. It begins with a Labovian orientation of place and situation (Labov and Waletzky 1967), followed by a short passage of constructed dialogue, and ends with his evaluation. Note that the data originally occurred in Mandarin Chinese; material spoken in Taiwanese appears in italics (see Appendix I for full transcript with Pinyin and glosses).
(1)
7. I remember once when I went praying in Tamsui,
8. An Ama told me, "Mayor, you have to run for presidency."
9. I said, "I have promised all citizens of New Taipei City."
10. The Ama said, "If you don't, not even the gods will forgive you."
11. "You have to do this for Taiwan, not for yourself."
12. These words touched me greatly.
By "casting thoughts and speech in dialogue," according to Tannen (2007), the storyteller "creates particular scenes and characters" (p.107).Instead of directly rejecting the accusation by his debate rival Tsai by explicitly expressing his thoughts, Chu frames the process in which he changes his attitude and makes the final decision in a conversation between him and the Ama, transforming a statement into dialogue to create involvement by "establishing and building on a sense of identification" between himself and the audience/voters (Tannen 2007:107).This way, even though Chu's narrative is situated in a context where his credibility is being examined and his "existential coherence" being questioned, the narrative and the dialogue in it are "constructed in service of some immediate interactional goal" (Tannen 2007:108), that is, vindicating his action; direct quotations from the Ama in the narrative involve the audience through participation in sense making and, therefore, diverge their attention from questioning his motive.Further, Chu establishes the Ama and himself as holding opposite positions by presenting dialogue consisting of three turns, with each turn indicating the progression of his attitude change.At level 1, the Ama is positioned in relation to Chu as someone who imposes upon him the idea that urges him to run for presidency (An Ama told me, "Mayor, you have to run for presidency," line 8).This juxtaposition persists through the constructed dialogue, positioning the Ama specifically as the agent performing the action and holding a certain belief.When it comes to his turn to speak, Chu regains agency in the narrative by displaying his awareness of his promise and his responsibility as the mayor (I said, "I have promised all citizens of New Taipei City," line 9).This is supposed to present him as a person who stands by his earlier duty and promises to serve that duty.When the conversation takes the final turn, Chu once again positions himself relationally as the person being addressed and who passively accepts the Ama's proposition (You have to do this for Taiwan, not for yourself, line 11).The direct quotation, then, serves as "an evaluative device which dramatizes the key elements in narrative" in order for candidates to "indirectly and implicitly invoke and project some positive characteristics of themselves" (Kuo 2001:195).The evaluative clause in line 12 suggests that Chu has agreed to decline his responsibility as a mayor and has accepted the plea to join the election (These words touched me greatly).This way, Chu tactfully answers Tsai's question by providing a narrative that accounts for his decision and tacitly justifies his failure to keep his promise considering that he was committed to the greater good of all people in Taiwan at the Ama's request.
At level 2, Chu employs two strategies, providing detailed imagery and code-switching, to reinforce for the audience the authenticity of this encounter in the narrative and to rationalize and justify his action. According to Tannen (2007), details in narrative "provide a sense of authenticity, both by testifying that the speaker recalls them and by naming recognizable people, places, and activities," contributing to the speaker's presentation of self (p.138). In the story's orientation, when Chu sets up the background of the story, he presents himself as a pious person who goes to a temple to pray (I remember once when I went praying in Tamsui, line 7). Praying at a temple is a common practice that politicians in Taiwan do in front of the camera. Here he reinforces that image and paves the way for what is coming up next. He names the location, Tamsui, creating a sense of authenticity by arousing the audience's perception of location credibility. This allows people to associate the place with the activity, especially when Tamsui is known for many temples. The majority of the prayers at the temples are elderly people, making it more convincing that Chu met the Ama when he was there.
Also, starting in line 8, Chu switches from Mandarin Chinese to Taiwanese when telling the story. With over 60% of the population using it on a daily basis, Taiwanese is associated with a strong sense of national identity as well as a large group of working-class people and people of older generations, because younger generations in Taiwan have less access to Taiwanese. As Lo (1999) argues, "the act of codeswitching itself makes salient the indexical links between a language, categories of ethnic identity, and speech community membership" (p.462). This juncture of code-switching in the debate can be seen as a move on Chu's part to address directly the speech community of Taiwanese, evoking a sense of familiarity and taking an accepting and supportive stance toward the working class in Taiwan. At the same time, it is part of the authentication. Telling the story with an Ama in Taiwanese calls for corroborative recognition from the audience of the story itself and the storyteller. It again makes the Ama character more real and the politician more believable.
When Chu presents the grandma character and positions himself vis-à-vis the Ama, he alludes to these characteristics associated with Ama at level 3. Here the Ama can be interpreted as a symbol of the working-class or middle-class citizens. Animating her voice in direct quotes as an external source makes the narrative more persuasive as a rhetorical device. When an Ama tells him to run for office, Chu implies that most people in Taiwan support his decision, and it helps the candidate connect to a large part of the audience. Simultaneously, he shows that his decision is for the greater good, once again justifying his motive of running for presidency as opposed to remaining a city mayor, which would be for his own good ("You have to do this for Taiwan, not for yourself," line 11). Lastly, he references the local religion in Taiwan and the concept of retribution, that one will be punished by the gods for not doing the right thing, which consequently prompted him to make "the right decision" (The Ama said, "If you don't, not even the gods will forgive you," line 10). This reference to traditional Taoist religious belief ties back to the location; the event took place at a temple. By telling the story, it is as if the conflict in Chu's political self is resolved when the story reaches its resolution.
Chu's attempt to decline responsibility is evidenced by his positioning in the narrative. However, the traditional Labovian first-person narrative that was meant to enhance authenticity and thus restore Chu's credibility was not corroborated by the public after the debate. For one thing, in the immediate context of the debate, the political aim of Chu's narrative is conspicuously defensive and argumentative, as shown in the speech in which the narrative is embedded. The narrative emerges in his speech in response to a question, a context where his credibility as a candidate is put to the test. Prior to and subsequent to the narrative, Chu foments the opposition between DPP and KMT, Tsai and himself, to reinforce the image that the political circumstances called for his justifiable action of joining the election. This explains why the narrative, designed as a rhetorical device, is perceived as less convincing: the embedding of Chu's narrative turned out to accentuate his ulterior motive. The technique is in line with what Holly (1989) calls "non-communication," that is, "concealing intentions and conveying meanings at the same time" (p.123). Chu attempts to authenticate the narrated event that justifies his decision to run for presidency and vindicate his action of breaking his promise and failing to fulfill his responsibility as mayor. Consequently, people came to view Chu's encounter with the grandma as a feeble attempt to justify his motive, corresponding to Holly's description that "trustworthiness of politicians is connected with the way they take or refuse responsibility for meaning components which are intended and conveyed" (p.122-123).
"Tamsui Ama" (淡水阿嬤, dànshuǐ āmā), a term that quickly went trending following the day of the debate, was turned into the subject of a wide variety of parodies, ranging from graphic illustrations, song adaption, idiom coinage, to traditional Taiwanese puppet show, exactly because the character portrayal left considerable room for people to play with their imagination.The rapid and pervasive emergence of all sorts of parodies arguably points to the lack of credibility both in his narrative and in his reputation.The public in general finds it impossible to identify the person in Chu's narrative and, therefore, casts doubts on the authenticity of it, thinking that the Ama in Chu's narrative is no more than a fictional character invented to embody the Taiwanese working and middle class.
SOONG'S NARRATIVE
In the same debate, James Soong, the PFP candidate, provided a narrative that featured another Ama in Taiwan. There are, however, a few major differences between the two narratives, which I address primarily to explain the relatively negative reception of Chu's. First, compared to Soong's story of an Ama, Chu's is situated in a context of justification and defense. Soong's narrative, on the other hand, is embedded in his concluding statements, where he provided another narrative about his father in addition to this one. His narrative is thematically coherent in the way that it undergirds his political visions, whereas Chu's narrative comes off as rather abrupt. Soong's narrative operates differently than Chu's, as his is not a first-person story; Soong did not know or interact with the Ama character in person, and he does not claim to have done so. He showed a photo of the Ama that he had brought with him while telling the narrative as a visual aid to frame the situation, drawing an analogy between himself and the Ama character to highlight a sense of persistence. (Original data in Mandarin Chinese. See Appendix II for full transcript with Pinyin and glosses.)

(2)
1. I have a photo with me.
2. I believe Ms. Chu Chen must know this person.
3. This is Zhuan-Chu Yu-nü, wife of a laborer.
4. Every day she sold 10-dollared meals,
5. So that everyone is fed.
6. But I can tell everyone.
7. For 50 years she only took 10 dollars.
8. This tells us,
9. This day in Taiwan, many people still live in hardships.
In telling the narrative, Soong provides a nonverbal "contextualization cue" (Gumperz 1998) to signal a change in reference and participants' activity and to draw attention to the Ama character. This is synonymous with providing details. He not only gives the name of the Ama but also shows a photo of her as evidence (This is Zhuan-Chu Yu-nü, wife of a laborer, line 3), which contributes to his absolute credibility. Contrary to Chu's story, however, Soong's narrative is not about his personal experience; it is also not a traditional Labovian narrative in that it is habitual. Even though he is not in the story as a character, Soong draws a parallel between himself and the Ama. At level 1, the Ama character is portrayed as serving others selflessly over a long period of time (For 50 years she only took 10 dollars, line 7). This helps project the image that Soong is also a public servant by analogously positioning himself next to this Ama. The time span also corresponds to his display of self as a man of persistence, as Soong had by then run in the presidential election four times: in 2000, 2004, 2012, and 2016. At level 2, in addition to showing the photo to the audience, Soong uses eye contact to recruit one audience member, Chu Chen, the mayor of Kaohsiung City and a famous politician of the DPP. This move serves several purposes. First, Soong finds himself a witness to corroborate his narrative (I believe Ms. Chu Chen must know this person, line 2), enhancing the credibility of the story character. As Bucholtz and Hall (2005) describe, "the detailing of the chain of narration whereby the teller heard the tale also provides evidence for his right to tell it, thus authenticating both the narrative and his interactional identity as its narrator" (p.602). Second, Soong creates an alliance with someone from another political party in telling his narrative. This accentuates his relatively more neutral and bi-partisan candidacy as the third party, not involved in the long-standing opposition between KMT and DPP in Taiwan's politics. Third, he seems to share the tellership with Chen while knowing that he holds the floor because of the debate format, making his story more credible because of its co-constructed nature. The recruitment of Chen also has to do with her being the mayor of Kaohsiung City, where the majority of residents are working class and speak Taiwanese. This is again brought up when Soong mentions that the Ama is the wife of a laborer, not only alluding to the characteristics associated with Ama mentioned above but also addressing directly the working class in Taiwan.
Soong then directs the audience's/voters' attention from the past to a hypothetical future, positioning himself as someone who has witnessed the hardships in Taiwan while promising a better future. The pronominal choice is also a focus here, as Soong uses the first-person plural wǒmen in Mandarin Chinese to refer to and to address people in Taiwan, indicating that we are all in this together and creating a sense of unity and inclusiveness. The extensive use of repetition makes salient the key concepts that Soong wants to convey. (Original data in Mandarin Chinese. Bold font and arrows used to highlight lines of particular analytic interest. See Appendix II for full transcript with Pinyin and glosses.)

(3)
10. Our next generation will not go on living in hardships like this.
11. After I am elected,
12. I will have our Bank of Taiwan make a coin.
13. On it is the photo of this grandma.
14. To make people in Taiwan remember forever that people in Taiwan are kind,
15. People in Taiwan traveled the past of poverty.
16. People in Taiwan cannot allow the next generation to live in hardships anymore.
17. We have to work harder!
18. We have to make the freedom and democracy,
19. We have to keep people in Taiwan alive. <sobs>

Tannen (2007) claims that repetition creates coherence in discourse and interpersonal involvement as it "not only ties parts of discourse to other parts, but it bonds participants to the discourse and to each other, linking individual speakers in a conversation and in relationships" (p.61). At this point, Soong's narration at level 2, or in the interaction, has moved from recounting a story character and her actions to making it relevant to the here and now and to the recipients. With the use of repetition, both lexical and syntactic (people in Taiwan, lines 14-16, and we have to, lines 17-19), Soong continues his narration in a rhythmic way that involves the audience/voters by creating closeness and a sense of unity.
At level 3, the repetition even brings together several different discourses, including the kind-heartedness of Taiwanese people (To make people in Taiwan remember forever that people in Taiwan are kind, line 14), the economic hardships in the history of Taiwan (People in Taiwan traveled the past of poverty, line 15), his political belief in democracy (We have to make the freedom and democracy, line 18), and the security of people's livelihood (We have to keep people in Taiwan alive, line 19). The conglomeration projects Soong's political visions for the future as a coherent whole. Thus, Soong outlines his political visions and promotes a coherent image of himself by linking the past, present, and future with an allusion to the social meaning of Ama. In the previous extract, Soong establishes a connection between himself and the Ama character based on the association of Ama with her labor and her contribution to the economy and people's lives. Here, Soong positions the Ama character as a metaphor of resilience in times of economic hardship. He has witnessed the struggles of Taiwanese people in the past, and he is running for presidency now with a promise of a better future for the next generation. This is in the same vein as what Shenhav (2005b) terms "concise narrative," or "segments (a few paragraphs) of a political text (e.g. a speech, an interview, a political discussion) that contain its entire chronological range" (p.316). In this way, Soong positions himself in the narrative as having not only a persistent character but also a coherent persona over time. His narrative helps establish the "existential coherence" of his political self in Duranti's (2006) terms.
Soong's narrative, in contrast to Chu's, received little attention and media coverage, despite the shared focus on an Ama character. While Chu's narrative hit the headlines and became widely satirized on the Internet, Soong's narrative appeared only briefly in the news. The immediate context of the debate in which the narrative is embedded, namely Soong's closing statements, is not as provocative and charged with tension as the cross-examination in Chu's example. Instead, the Ama character was regarded lightly as mere factual information to support Soong's claim, while the narration was treated as a part of his performance in the debate. Unlike the Ama in Chu's narrative, which was deemed by the public to be fictitious, Soong's Ama character was so real (including a photograph) that there was no room for questioning. This corresponds to what Labov (1982) notes as the inverse relationship between reportability and credibility: "the more reportable an event is, the less credible it is" (p.228). In the social context, while the Ama was used to paint Soong in a positive light, people were also well aware of his past political engagement, so much so that the narrative did not evoke strong reactions from the public. It was recognized as an exemplar of his well-known political identity, and is thus not as "reportable" (Labov and Waletzky 1967) as Chu's.
Conclusion
Bottici (2010:920) asks, "who tells the relevant stories?" and "which forces determine the crucial narrative plots?" In this paper, I have demonstrated how narrative was employed by two candidates in Taiwan's 2016 presidential debate to present themselves as credible through storytelling, and how they made their narratives relevant to the debate context to achieve their political aims. In a sense, both narratives are fundamentally argumentative, as Fairclough and Fairclough (2012) state; they were told to make an argument and to convince the audience of a certain political claim. However, Chu's narrative came to be perceived as a rhetorical device for vindication because it was told during cross-examination, when he was questioned by Tsai about his motive, in order for him to justify his decision and deflect the accusation. This is made salient, as we have seen, by how he positions himself in the story, declining his responsibility as the city mayor in a way that puts the onus for this decision on an unnamed Ama who urges him to do so for the good of all of Taiwan. On the other hand, Soong's narrative is seen as an exemplar of his political visions, as he analogously positions himself parallel to the Ama against the backdrop of economic hardships. In the analysis, I have shown that narrative in political discourse can be used as a rhetorical device, the intersection of storytelling activity and argumentation. Further, I have illustrated how the two candidates positioned themselves at different levels by alluding to the common associations with Ama to construct a political self that is existentially coherent, from past to present, and from the local context of the debate to the hypothetical future. Finally, I showed that their narratives were perceived differently because of narrative context. Although neither of them won the election, Chu and Soong drew on narrative as a powerful strategy; for Chu, the narrative was compromised by context, both its immediate context in the debate and the larger social context in Taiwan.

(Appendix II, a line-by-line transcript of Soong's narrative with Pinyin and glosses, is not reproduced here.)
Unification of gravitation and electromagnetic force (First Portion)
Since the electron was discovered, it has been assumed to carry one unit of electric charge, with the nucleus carrying the opposite charge. Based on the new finding reported here that moving photons create force, I have calculated the atomic number Z of the elements and found that both Z and the atomic weight can be calculated from the frequency of X-rays. This method of calculation points to the essence of electric charge and the essence of gravitational mass, and in this way provides, for the first time, a route toward the unification of gravitation and the electromagnetic force.
INTRODUCTION
Since Newton discovered universal gravitation [1], the origin of gravitation has been attributed solely to gravitational mass, with gravitational mass defined through the expression for the gravitational force and no other origin investigated. In the course of investigating the origin of gravitation, my experiments showed that moving photons produce gravitation. This discovery points to the origin of gravitation. I also found that the atomic weight can be calculated from the frequency of X-rays [2], and, furthermore, that the atomic number Z of the elements can be calculated from the frequency of X-rays as well. These experiments and calculations indicate that the electromagnetic force and gravitation are generated from the same origin; their essence is the same. In this way we can understand the meaning of the gravitational mass defined by Newton and Einstein and the meaning of the charge defined by Coulomb and Franklin. This points to the unification of gravitation and the electromagnetic force.
Chapter 1. Moving photons generate force
The experimental devices are shown in panels a, b and c of Figure 1. The process of the experiment is as follows: first, the light beam L is separated into two parts by a ring positioned as in panels c and d of Figure 1. In comparison with experiment 1, the light beam O keeps its circular shape unchanged when there are no other light beams moving forward. Since the deformation of beam O appears only when an interaction force, indicating gravitation, is brought about by another moving beam, we find that moving photons create gravitation.
Chapter 2. The quantitative experiment
In the earlier article, 'Gravitation origin' [2], the formula for this force was derived. For the description of this effect, see 'Gravitation origin' [2]; it is not repeated here.
Chapter 3. The essence of electric charge
According to the above discoveries and formula, we can obtain a formula for calculating the nuclear charge of an element, in which the quantities involved are the nuclear charge of the element, the atomic weight of the element, the atomic number, the first and second wavelengths of the element's X-ray emission, and the mean of these two X-ray emission wavelengths.
For lithium, for example, the formula yields Z = 3.
The atomic numbers, or nuclear charges, of other elements can also be calculated by this formula; see Table 1 above. This formula testifies that the nuclear charge of an element can be calculated from the wavelengths of the element's X-ray emission; thus, the formula indicates the essence of the nuclear charge of an element, namely, it reveals the essence of electric charge.
Chapter 4. Unification of gravity and electromagnetic force
From the formulas of [2], we can examine the interaction between two lithium nuclei: the magnitude of their Coulomb force and the magnitude of their gravitational force. The difference in value between them arises only because the Coulomb force is the first interaction force produced in this way and does not include the gravitational force, whereas the gravitational force includes the effect of the Coulomb interaction:
Now, for the first time, we have a preliminary understanding of the unification of the electromagnetic force and the gravitational force [5][6].
Chapter 5. Conclusion
We have come to know the essence of electric charge and the essence of gravitational mass in the atom; meanwhile, by applying the new discoveries, we have also come to know the unification of gravitation and the electromagnetic force. For more content and more evidence concerning this unification, please see the forthcoming article.
|
v3-fos-license
|
2018-12-17T17:33:37.795Z
|
2009-09-01T00:00:00.000
|
59023931
|
{
"extfieldsofstudy": [
"Physics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1088/1367-2630/11/9/093024",
"pdf_hash": "b850cc6cdd0fe24ff471025b016c4ec344079085",
"pdf_src": "IOP",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41852",
"s2fieldsofstudy": [
"Physics"
],
"sha1": "0a6c1e0e106ef081fcc612b14d65e643f8e4e848",
"year": 2009
}
|
pes2o/s2orc
|
Accelerated fluctuation analysis by graphic cards and complex pattern formation in financial markets
The compute unified device architecture is an almost conventional programming approach for managing computations on a graphics processing unit (GPU) as a data-parallel computing device. With a maximum number of 240 cores in combination with a high memory bandwidth, a recent GPU offers resources for computational physics. We apply this technology to methods of fluctuation analysis, which includes determination of the scaling behavior of a stochastic process and the equilibrium autocorrelation function. Additionally, the recently introduced pattern formation conformity (Preis T et al 2008 Europhys. Lett. 82 68005), which quantifies pattern-based complex short-time correlations of a time series, is calculated on a GPU and analyzed in detail. Results are obtained up to 84 times faster than on a current central processing unit core. When we apply this method to high-frequency time series of the German BUND future, we find significant pattern-based correlations on short time scales. Furthermore, an anti-persistent behavior can be found on short time scales. Additionally, we compare the recent GPU generation, which provides a theoretical peak performance of up to roughly 10¹² floating point operations per second, with the previous one.
Introduction
In computer science applications and diverse interdisciplinary science fields such as computational physics or quantitative finance, the computational power requirements increase monotonically in time. In particular, the history of time series analysis mirrors the needs of computational power and simultaneously the opportunities arising from the use of it. Up to the present day, it is an often made simplistic assumption that price dynamics in financial time series obey random walk statistics in order to simplify analytic calculations in econophysics and in financial applications. However, such approximations, which are, e.g. used in the famous options pricing model introduced by Black and Scholes [1] in 1973, neglect the real nature of financial market observables, and a large number of empirical deviations between financial market time series and models presuming only a random walk behavior have been observed in recent decades [2]- [6]. Already Mandelbrot [7,8] discovered in the 1960s that commodity market time series obey fat-tailed price change distributions [9]. His analysis was based on historical cotton times and sales records dating back to the beginning of the 20th century. In accordance with the technological improvements in computing resources, trading processes were adapted in order to create full-electronic market places. Thus, the available amount of historical price data increased impressively. As a consequence, the empirical properties found by Mandelbrot were confirmed. However, the amount of transaction records available today in time units of milliseconds also requires increased computing resources for its analysis. From such analyses, scaling behavior, short-time anti-correlated price changes and volatility clustering [10,11] of financial markets are well established and can be reproduced, e.g. by a model of the continuous double auction [12,13] or by various agent-based models of financial markets [14]- [22]. Furthermore, the price formation process and cross correlations [23,24] between equities and equity indices have been studied with the clear intention to optimize asset allocation and portfolios. However, in contrast to such calculations, which can be done with conventional computing facilities, a large computational power demand is driven by the quantitative hedge fund industry and also by modern market making, which requires mostly real time analytics. A market maker usually provides quotes for buying or selling a given asset. In the competitive environment of electronic financial markets this cannot be done by a human market maker alone, especially if a large number of assets is quoted concurrently. The rise of the hedge fund industry in recent years and their interest in taking advantage of short time correlations boosted the real-time analysis of market fluctuations and the market micro-structure analysis in general, which is the study of the process of exchanging assets under explicit trading rules [25], and which is studied intensely by the financial community [26]- [29].
Such computing requirements, which can be found in various interdisciplinary computer sciences like computational physics including, e.g. Monte Carlo and molecular dynamics simulations [30]- [32] or stochastic optimization [33], make use of the high-performance computing resources necessary. This includes recent multi-core computing solutions based on a shared memory architecture, which are accessible by OpenMP [34] or MPI [35] and can be found in recent personal computers as a standard configuration. Furthermore, distributed computing clusters with homogeneous or heterogeneous node structures are available in order to parallelize a given algorithm by separating it into various sub-algorithms.
However, a recent trend in computer science and related fields is general purpose computing on graphics processing units (GPUs), which can yield impressive performance, i.e. the required processing times can be reduced to a great extent. Some applications have already been realized in computational physics [31], [36]-[43]. Recently, the Monte Carlo simulation of the two-dimensional and three-dimensional ferromagnetic Ising model could be accelerated up to 60 times [44] using a graphic card architecture. With multiple cores connected by high memory bandwidth, today's GPUs offer resources for non-graphics processing. In the beginning, GPU programs used C-like programming environments for kernel execution such as OpenGL shading language [45] or C for graphics (Cg) [46]. The compute unified device architecture (CUDA) [47] is an almost conventional programming approach making use of the unified shader design of recent GPUs from NVIDIA corporation. The programming interface allows one to implement an algorithm using standard C language without any knowledge of the native programming environments. A comparable concept 'Close To the Metal' (CTM) [48] was introduced by Advanced Micro Devices Inc. for ATI graphics cards. One has to state that the computational power of consumer graphics cards roughly exceeds that of a central processing unit (CPU) by one to two orders of magnitude. A conventional CPU nowadays provides a peak performance of roughly 20 × 10⁹ floating point operations per second (FLOPS) [31]. The consumer graphics card NVIDIA GeForce GTX 280 reaches a theoretical peak performance of 933 × 10⁹ FLOPS. If one tried to realize the computational power of one GPU with a cluster of several CPUs, a much larger amount of electric power would be required. A GTX 280 graphics card exhibits a maximum power consumption of 236 W [49], while a recent Intel CPU consumes roughly 100 W.
We apply this general-purpose graphics processing unit (GPGPU) technology to methods of time series analysis, which includes determination of the Hurst exponent and equilibrium autocorrelation function. Additionally, the recently introduced pattern conformity observable [50], which is able to quantify pattern-based complex short-time correlations of a time series, is calculated on a GPU. Furthermore, we compare the recent GPU generation with the previous one. All methods are applied to a high-frequency data set of the Euro-Bund futures contract traded at the electronic derivatives exchange Eurex.
The paper is organized as follows. In section 2, a brief overview of key facts and properties of the GPU architecture is provided in order to clarify implementation constraint details for the following sections. A GPU-accelerated Hurst exponent estimation can be found in section 3. In section 4, the equilibrium autocorrelation function is implemented on a GPU and in section 5, the pattern conformity is analyzed on a GPU in detail. In each of these sections, the performance of the GPU code as a function of parameters is first evaluated for a synthetic time series and compared to the performance on a CPU. Then the time series methods are applied to a financial market time series and a discussion of numerical errors is presented. Finally, our conclusions are summarized in section 6.
GPU device architecture
In order to provide and discuss information about implementation details on a GPU for time series analysis methods, key facts of the GPU device architecture are briefly summarized in this section. As mentioned in the introduction, we use the compute unified device architecture (CUDA), which allows the implementation of algorithms using standard C language with CUDA specific extensions. Thus, CUDA issues and manages computations on a GPU as a data-parallel computing device.
The graphics card architecture, which is used in recent GPU generations, is built around a scalable array of streaming MPs [47]. One such MP contains, amongst others, eight scalar processor cores, a multi-threaded instruction unit, and shared memory, which is located on-chip. When a C program using CUDA extensions and running on the CPU invokes a GPU kernel, which is a synonym for a GPU function, many copies of this kernel—known as threads—are enumerated and distributed to the available MPs, where their execution starts. For such an enumeration and distribution, a kernel grid is subdivided into blocks and each block is subdivided into various threads, as illustrated in figure 1 for a two-dimensional thread and block structure. The threads of a thread block are executed concurrently on the available MPs. In order to manage a large number of threads, a single-instruction multiple-thread (SIMT) unit is used. An MP maps each thread to one scalar processor core and each scalar thread executes independently. Threads are created, managed, scheduled and executed by this SIMT unit in groups of 32 threads. Such a group of 32 threads forms a warp, which is executed on the same MP. If the threads of a given warp diverge via a data-induced conditional branch, each branch of the warp is executed serially and the processing time of this warp consists of the sum of the branches' processing times.
As shown in figure 2, each MP of the GPU device contains several local 32-bit registers per processor and memory that is shared by all scalar processor cores of an MP. Furthermore, constant and texture caches are available, which are also shared on an MP. In order to allow the results of the involved MPs to be combined, the slower global memory can be used, which is shared among all MPs and is also accessible by the C function running on the CPU. Note that the GPU's global memory is still roughly 10 times faster than the current main memory of personal computers. Detailed facts of the consumer graphics cards 8800 GT and GTX 280 used by us can be found in table 1. Furthermore, note that a GPU device only supports single-precision floating-point operations, with the exception of the most modern graphic cards starting with the GTX 200 series. However, the IEEE-754 standard for single-precision numbers is not completely realized. Deviations can be found especially for rounding operations. In contrast, the GTX 200 series also supports double-precision floating-point numbers. However, each MP features only one double-precision processing core and so, the theoretical peak performance is dramatically reduced for double-precision operations. Further information about the GPU device properties and CUDA can be found in [47].
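To make the grid, block and thread hierarchy concrete, the following minimal CUDA example (a generic illustration, not code from this paper; all identifiers are chosen freely) maps one thread to one array element:

#include <cuda_runtime.h>
#include <vector>

__global__ void scale_kernel(const float *in, float *out, int n, float c)
{
    // Global thread index from the built-in block/thread variables.
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < n)                  // the last block may be only partly used
        out[idx] = c * in[idx];   // data-independent branch: no warp divergence
}

int main()
{
    const int n = 1 << 20;
    std::vector<float> h(n, 1.0f);
    float *d_in, *d_out;
    cudaMalloc(&d_in, n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));
    cudaMemcpy(d_in, h.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    int block = 256;                      // a multiple of the warp size (32)
    int grid = (n + block - 1) / block;   // enough blocks to cover all elements
    scale_kernel<<<grid, block>>>(d_in, d_out, n, 2.0f);
    cudaDeviceSynchronize();              // returns when all blocks have executed

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}

The guard on idx is the usual idiom for a partially filled last block; because the branch is data-independent, no warp in this kernel diverges.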
Hurst exponent
The Hurst exponent H [51] provides information on the relative tendency of a stochastic process. A Hurst exponent H < 0.5 indicates an anti-persistent behavior of the analyzed process, which means that the process is dominated by a mean reversion tendency. H > 0.5 mirrors a super-diffusive behavior of the underlying process: extreme values tend to be followed by extreme values. If the deviations from the mean values of the time series are independent, which corresponds to a random walk behavior, a Hurst exponent of H = 0.5 is obtained. The Hurst exponent H was originally introduced by Harold Edwin Hurst [52], a British government administrator. He studied records of the Nile river's volatile rain and drought conditions and noticed interesting coherences for flood periods. Hurst observed in the eight centuries of records that there was a tendency for a year with good flood conditions to be followed by another year with good flood conditions. Nowadays, the Hurst exponent as a scaling exponent is well studied in the context of financial markets [50], [53]-[56]. Typically, an anti-persistent behavior can be found on short timescales due to the nonzero gap between offer and demand. On medium timescales, a super-diffusive behavior can be detected [54]. On long timescales, a diffusive regime is reached, due to the law of large numbers.
For a time series p(t) with t ∈ {1, 2, . . . , T}, the time lag-dependent Hurst exponent H_q(Δt) can be determined by the general relationship

⟨|p(t + Δt) − p(t)|^q⟩ ∝ Δt^{qH_q(Δt)},   (1)

with the time lag Δt ≪ T and Δt ∈ ℕ. The brackets ⟨. . .⟩ denote the expectation value. Apart from (1), there are also other calculation methods, e.g. the rescaled range analysis [51]. We present in the following the Hurst exponent determination implementation on a GPU for q = 1 and use H(Δt) ≡ H_{q=1}(Δt). The process to be analyzed is a synthetic anti-correlated random walk, which was introduced in [50]. This process emerges from the superposition of two random walk processes with different timescale characteristics. Thus, a parameter-dependent anti-correlation at time lag one can be realized. As a first step, one has to allocate memory in the GPU device's global memory for the time series, intermediate results and final results. In a first approach, the time lag-dependent Hurst exponent is calculated up to Δt_max = 256. In order to simplify the reduction process of the partial results, the overall number of time steps T has to satisfy the condition T = (2^α + 1) × Δt_max, with α being an integer number called the length parameter of the time series. The number of threads per block—known as block size—is equivalent to Δt_max. The array for intermediate results possesses length T too, whereas the array for the final results contains Δt_max entries. After allocation, the time series data have to be transferred from the main memory to the GPU's global memory. If this step is completed, the main calculation part can start. As illustrated in figure 3 for block 0, each block, which contains Δt_max threads, loads Δt_max data points of the time series from global memory to shared memory. In order to realize such a high-performance loading process, each thread loads one value and stores this value in the array located in the shared memory, which can be accessed by all threads of a block. Analogously, each block also loads the next Δt_max entries. In the main processing step, each thread is in charge of one specific time lag. Thus, each thread is responsible for a specific value of Δt and summarizes the terms |p(t + Δt) − p(t)| in the block sub-segment of the time series. As the maximum time lag is equivalent to the maximum number of threads and as the maximum time lag is also equivalent to half the data points loaded per block, all threads have to summarize the same number of addends and so, a uniform workload of the graphics card is realized. However, as it is only possible to synchronize threads within a block and a native block synchronization does not exist, partial results of each block have to be stored in block-dependent areas of the array for intermediate results, as shown in figure 3. The termination of the GPU kernel function ensures that all blocks were executed. In a post-processing step, the partial arrays have to be reduced. This is realized by a binary tree structure, as indicated in figure 3. After this reduction, the resulting values can be found in the first Δt_max entries of the intermediate array and a final processing kernel is responsible for normalization and gradient calculation. The source code of these GPU kernel functions can be found in the appendix.
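The appendix mentioned above is not part of this excerpt. As a minimal sketch of the summation kernel just described (one thread per time lag, block size equal to Δt_max, block-dependent partial results; all identifiers are chosen for illustration), one could write:

// One block processes one sub-segment of the series. The block size equals
// dtMax, and every thread sums the same number of addends |p(t+dt) - p(t)|
// for its own time lag dt, which yields a uniform workload.
__global__ void hurst_partial_sums(const float *p, float *partial, int dtMax)
{
    extern __shared__ float s[];             // 2*dtMax floats per block
    int tid = threadIdx.x;                   // one thread per time lag
    int base = blockIdx.x * dtMax;           // start of this block's segment

    s[tid] = p[base + tid];                  // first dtMax points
    s[tid + dtMax] = p[base + tid + dtMax];  // next dtMax points (look-ahead)
    __syncthreads();                         // shared memory is now filled

    int dt = tid + 1;                        // this thread's time lag
    float sum = 0.0f;
    for (int t = 0; t < dtMax; ++t)          // same addend count for all threads
        sum += fabsf(s[t + dt] - s[t]);

    // No grid-wide synchronization exists inside a kernel, so each block
    // writes its partial result to a block-dependent slot in global memory.
    partial[blockIdx.x * dtMax + tid] = sum;
}

A launch such as hurst_partial_sums<<<1 << alpha, dtMax, 2 * dtMax * sizeof(float)>>>(d_p, d_partial, dtMax) matches the constraint T = (2^α + 1) × Δt_max; the binary-tree reduction and the final normalization and gradient kernels then operate on the 2^α partial rows.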
For the comparison between CPU and GPU implementations, we use an Intel Core 2 Quad CPU (Q6700) with 2.66 GHz and 4096 kB cache size, of which only one core is used. A smaller speed-up factor can be measured for small values of α, as the relative fraction of allocation time and time for memory transfer is larger in comparison to the time needed for the calculation steps. The corresponding analysis for the GTX 280 yields a larger acceleration factor β of roughly 70. If we increase the maximum time lag Δt_max to 512, which is only possible for the GTX 280, a maximum speed-up factor of roughly 80 can be achieved, as shown in figure 5. This indicates that Δt_max = 512 leads to a higher workload on the GTX 280. At this point, we can also compare the ratio between the performances of the 8800 GT and the GTX 280 for our application to the ratio of theoretical peak performances. The latter is given as the number of cores multiplied by the clock rate, which amounts to roughly 1.84. If we compare the total processing times on these GPUs for α = 15 and Δt_max = 256, we obtain an empirical performance ratio of 1.7. If we use the acceleration factors for Δt_max = 256 on the 8800 GT and for Δt_max = 512 on the GTX 280 for comparison, we obtain a value of 2. After this performance analysis, we apply the GPU implementation to real financial market data in order to determine the Hurst exponent of the Euro-Bund futures contract traded at the European exchange (Eurex). In this context, we will also gauge the accuracy of the GPU calculations by quantifying deviations from the calculation on a CPU. The Euro-Bund futures contract (FGBL) is a financial derivative. A futures contract is a standardized contract to buy or sell a specific underlying instrument at a proposed date in the future, which is called the expiration time of the futures contract, at a specified price. The underlying instruments of the FGBL contract are long-term debt instruments issued by the Federal Republic of Germany with remaining terms of 8.5 to 10.5 years. In all presented calculations of the FGBL time series on the GPU, α is fixed to 11. Thus, the data set is limited to the first T = 1 049 088 trades in order to fit the data set length to the constraints of the specific GPU implementation. In figure 7, the time lag-dependent Hurst exponent H(Δt) is presented. On short timescales, the well-known anti-persistent behavior can be detected. On medium timescales, small evidence is given that the price process reaches a super-diffusive regime. For long timescales, the price dynamics tend to random walk behavior (H = 0.5), which is also shown for comparison. The relative error shown in the inset of figure 7 is smaller than one-tenth of a per cent.

Figure 7 (caption): Time lag-dependent Hurst exponent of the FGBL time series. Evidence is given that the process reaches a slightly super-diffusive region (H ≈ 0.525) on medium timescales (2⁴ time ticks < Δt < 2⁷ time ticks). On long timescales, an asymptotic random walk behavior can be found. In order to quantify deviations from calculations on a CPU, the relative error (see main text) is presented for each time lag Δt in the inset. It is typically smaller than 10⁻³.
Equilibrium autocorrelation
The autocorrelation function is a widely used concept in order to determine dependencies within a time series. The autocorrelation function is given by the correlation between the time series and the time series shifted by the time lag Δt through

ρ(Δt) = (⟨p(t)p(t + Δt)⟩ − ⟨p(t)⟩⟨p(t + Δt)⟩) / √((⟨p(t)²⟩ − ⟨p(t)⟩²)(⟨p(t + Δt)²⟩ − ⟨p(t + Δt)⟩²)).   (4)

For a stationary time series, (4) reduces to

ρ(Δt) = (⟨p(t)p(t + Δt)⟩ − ⟨p(t)⟩²) / (⟨p(t)²⟩ − ⟨p(t)⟩²),   (5)

as the mean value and the variance stay constant, i.e. ⟨p(t)⟩ = ⟨p(t + Δt)⟩ and ⟨p(t)²⟩ = ⟨p(t + Δt)²⟩.
Applied to financial markets, it can be observed that the autocorrelation function of price changes exhibits a significant negative value at a time lag of one tick, whereas it vanishes for time lags Δt > 1. Furthermore, the autocorrelation of absolute price changes or squared price changes, which is related to the volatility of the price process, decays slowly [15]. In order to implement (5) on a GPU architecture, similar steps as in section 3 are necessary. The calculation of the time lag-dependent part ⟨p(t) · p(t + Δt)⟩ is analogous to the determination of the Hurst exponent on the GPU. The input time series, which is transferred to the GPU's main memory, does not contain prices but price changes. However, in addition one needs the results for ⟨p(t)⟩ and ⟨p(t)²⟩. For this purpose, an additional array of length T is allocated, in which a GPU kernel function stores the squared values of the time series. Then, time series and squared time series are reduced with the same binary tree reduction process as in section 3. However, as this procedure produces arrays of length Δt_max, one has to summarize these values in order to obtain ⟨p(t)⟩ and ⟨p(t)²⟩. The processing times for determining the autocorrelation function for Δt_max = 256 on the CPU and the 8800 GT can be found in figure 8. Here, we find that allocation and memory transfer dominate the total processing time on the GPU for small values of α and thus, only a fraction of the maximum acceleration factor β ≈ 33, which is shown as an inset, can be reached. Using the consumer graphics card GTX 280, we obtain a maximum speed-up factor of roughly 55 for Δt_max = 256 and 68 for Δt_max = 512, as shown in figure 9. In figure 10, the autocorrelation function of the FGBL time series is shown. At time lag one, the time series exhibits a large negative autocorrelation, ρ(Δt = 1) = −0.43. In order to quantify deviations between GPU and CPU calculations, the relative error is presented in the inset of figure 10. Note that small absolute errors can cause relative errors of up to three per cent because the values ρ(Δt > 1) are close to zero.
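As a sketch of the additional steps just described (not the authors' appendix code; all helper names are assumptions), the squared series can be produced by a trivial element-wise kernel, and after the binary-tree reductions the host combines the sums according to (5):

// Auxiliary kernel: store squared values so that <p(t)^2> can be obtained
// with the same binary-tree reduction as the other sums.
__global__ void square_kernel(const float *p, float *p2, int n)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < n)
        p2[idx] = p[idx] * p[idx];
}

// Host side: combine the reduced sums according to eq. (5). sumLagDt is the
// reduced sum of p(t)*p(t+dt) for one time lag dt; sumP and sumP2 are the
// reduced sums of p(t) and p(t)^2; nTerms is the number of addends per sum.
float autocorrelation(float sumLagDt, float sumP, float sumP2, float nTerms)
{
    float mean = sumP / nTerms;
    float var = sumP2 / nTerms - mean * mean;   // stationary variance
    return (sumLagDt / nTerms - mean * mean) / var;
}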
For some applications, it is interesting to study larger maximum time lags of the autocorrelation function. Based on our GPU implementation, one has to modify the program code in the following way. So far, each thread was responsible for a specific time lag Δt. In a modified ansatz, each thread is responsible for more than one time lag in order to realize a maximum time lag that is a multiple of the maximal 512 threads per block. This way, one obtains a maximum speed-up factor of, e.g., roughly 84 for Δt_max = 1024 using the GTX 280.

Figure 10 (caption): Autocorrelation function of the FGBL time series. In order to quantify deviations from calculations on a CPU, the relative error is presented for each time lag Δt in the inset. The relative error is always smaller than 3 × 10⁻².
Fluctuation pattern conformity
As a third method of time series analysis, the recently introduced fluctuation pattern conformity (PC) determination [50] was migrated to a GPU architecture. The PC quantifies pattern-based complex short-time correlations of a time series. In the context of financial market time series, the existence of complex correlations implies that reactions of market participants to a given time series pattern are related to comparable patterns in the past. On medium and long timescales, one can state that no significant complex correlations can be measured because the price process exhibits random walk statistics. However, if one investigates the trading process on a tick-by-tick basis, evidence is given for recurring events. In the course of these considerations, a general pattern conformity observable is defined in [50], which is not limited to the application to financial market time series. In general, the aim is to compare a current pattern of time interval length Δt⁻ with all possible previous patterns of the time series p(t). The current observation time shall be denoted by t̃. Then, the current pattern's time interval measured in time ticks is given by [t̃ − Δt⁻; t̃). The evolution after this current pattern interval—the distance to t̃ is expressed by Δt⁺ (see below)—is compared with the prediction derived from all historical patterns. However, as the standard deviation of the price process is not constant in time, all comparison patterns have to be normalized with respect to the current pattern. For this purpose, the true range is used—the difference between high and low within each interval. Let p_h(t̃, Δt⁻) be the maximum value of a pattern of length Δt⁻ at time t̃ and analogously p_l(t̃, Δt⁻) be the minimum value. Thus, we can create a modified time series, which is true range adapted in the appropriate time interval, through

p̃_{t̃}^{Δt⁻}(t) = (p(t) − p_l(t̃, Δt⁻)) / (p_h(t̃, Δt⁻) − p_l(t̃, Δt⁻)),   (6)

as illustrated in figure 11.

Figure 11. Schematic visualization of the pattern conformity estimation mechanism. The current pattern p̃_{t̃}^{Δt⁻}(t) and the comparison pattern p̃_{t̃−τ}^{Δt⁻}(t − τ) have the maximum value 1 and the minimum value 0 in [t̃ − Δt⁻; t̃), as shown by the filled rectangle. For the pattern conformity calculation, we need to analyze for each time difference Δt⁺ whether the current pattern value and the comparison pattern value at t̃ + Δt⁺ are above or below the last value of the current pattern p̃_{t̃}^{Δt⁻}(t̃ − 1). If both are above or below this last value, then +1 is added to the non-normalized pattern conformity ξ_χ(Δt⁺, Δt⁻). If one is above and the other below, then −1 is added.
In order to assess the match of a pattern with a comparison pattern, the fit quality Q_{t̃}^{Δt⁻}(τ) between the current pattern sequence p̃_{t̃}^{Δt⁻}(t) and a comparison pattern p̃_{t̃−τ}^{Δt⁻}(t − τ), with t ∈ [t̃ − Δt⁻; t̃), has to be determined by the summation of the squared variations through

Q_{t̃}^{Δt⁻}(τ) = (1/Δt⁻) Σ_{i=1}^{Δt⁻} [p̃_{t̃}^{Δt⁻}(t̃ − i) − p̃_{t̃−τ}^{Δt⁻}(t̃ − τ − i)]².   (7)

Note that Q_{t̃}^{Δt⁻}(τ) takes values in the interval [0, 1] as a result of the true range adaption. With these elements, one can define a pre-stage of the PC, which is not yet normalized, as motivated in figure 11, by

ξ_χ(Δt⁺, Δt⁻) = Σ_{t̃} Σ_{τ=τ*}^{τ̄} e^{−χQ_{t̃}^{Δt⁻}(τ)} σ_{t̃,τ}(Δt⁺),   (8)

with τ* = t̃ − τ if t̃ − τ − Δt⁻ ≥ 0 and τ* = Δt⁻ else. In general, we limit the evaluation for each pattern to maximally τ̄ historical patterns. Furthermore, for the sign function, we use the standard definition sgn(x) = 1 for x > 0, sgn(x) = 0 for x = 0, and sgn(x) = −1 for x < 0. In (8), the parameter χ weighs pattern terms according to their qualities Q_{t̃}^{Δt⁻}(τ).

Figure 12 (caption): Processing times for the calculation of the pattern conformity on GPU and CPU for Δt⁻_max = Δt⁺_max = 20. The GTX 280 is used as GPU device. The total processing time on the GPU is broken into allocation time, time for memory transfer, and time for main processing. The acceleration factor β is shown as an inset. A maximum acceleration factor of roughly 19 can be obtained.
The sign term σ_{t̃,τ}(Δt⁺), which compares the tendencies of current and comparison pattern sequences after t̃ for a proposed Δt⁺ relative to p̃_{t̃}^{Δt⁻}(t̃ − 1), is given by

σ_{t̃,τ}(Δt⁺) = sgn(p̃_{t̃}^{Δt⁻}(t̃ + Δt⁺) − p̃_{t̃}^{Δt⁻}(t̃ − 1)) · sgn(p̃_{t̃−τ}^{Δt⁻}(t̃ − τ + Δt⁺) − p̃_{t̃}^{Δt⁻}(t̃ − 1)).   (9)

By normalizing (8) through its altered version, in which the sign function is replaced by its absolute value, the pattern conformity can be written as

Ξ_χ(Δt⁺, Δt⁻) = ξ_χ(Δt⁺, Δt⁻) / ξ_χ^{|sgn|}(Δt⁺, Δt⁻).   (10)

We repeat that the pattern conformity is the most accurate measure to characterize the short-term correlations of a general time series. It is essentially given by the comparison of subsequences of the time series. Subsequences of various lengths are compared with historical sequences in order to extract similar reactions to similar patterns.
In order to realize a GPU implementation of the pattern conformity provided in (10), one has to allocate memory as for the Hurst exponent and for the autocorrelation function determination in sections 3 and 4, respectively. The allocation is needed for the array containing the time series, which has to be transferred to the global memory of the GPU, and for further processing arrays. The main processing GPU function is invoked with a proposed Δt⁻ and a given t̃. In the kernel function, shared memory arrays for comparison and current pattern sequences are allocated and loaded from the global memory of the GPU. In the main calculation, each thread handles one specific comparison pattern, i.e. each thread is responsible for one value of τ, and so τ̄ = γ × σ is applied, with γ denoting the scan interval parameter and σ denoting the number of threads per block. Thus, γ corresponds to the number of blocks. The partial results of ξ_χ(Δt⁺, Δt⁻) are stored in a global memory based array of dimension τ̄ × Δt⁺_max. These partial results have to be reduced in a further processing step, which uses the same binary tree structure as applied in section 3 for the Hurst exponent determination.

Figure 13 (caption excerpt): relative error (in per cent) between the calculation on the GPU and the CPU (Ξ^{CPU}_{χ=100}(Δt⁻, Δt⁺), with the same parameter settings). The processing time on the GPU was 5.8 h; the results on the CPU were obtained after 137.2 h, which corresponds to roughly 5.7 days. Thus, for these parameters an acceleration factor of roughly 24 is obtained.
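Under the equations reconstructed above, a compressed sketch of the main kernel (one comparison pattern, i.e. one value of τ, per thread; all identifiers are illustrative, and the shared-memory staging of the production kernel is omitted for brevity) could look like this:

// One thread per comparison pattern offset tau. cur[] holds the true-range
// adapted current pattern and its future values: cur[i] = p~(t~ - dtMinus + i)
// for i = 0 ... dtMinus + dtPlusMax. xi[] has dimension tauBar x dtPlusMax.
__global__ void pc_kernel(const float *p, const float *cur, float *xi,
                          int tCur, int dtMinus, int dtPlusMax, float chi)
{
    int tau = 1 + blockIdx.x * blockDim.x + threadIdx.x;  // gridDim.x = gamma
    int t0 = tCur - tau;                  // anchor of the comparison pattern
    if (t0 - dtMinus < 0) return;         // comparison window must fit the data

    // True-range adaption of the comparison pattern, eq. (6).
    float lo = 1e30f, hi = -1e30f;
    for (int i = 1; i <= dtMinus; ++i) {
        float v = p[t0 - i];
        lo = fminf(lo, v);
        hi = fmaxf(hi, v);
    }
    if (hi <= lo) return;                 // skip degenerate, flat patterns
    float inv = 1.0f / (hi - lo);

    // Fit quality Q, eq. (7): mean squared deviation of the two patterns.
    float q = 0.0f;
    for (int i = 1; i <= dtMinus; ++i) {
        float d = cur[dtMinus - i] - (p[t0 - i] - lo) * inv;
        q += d * d;
    }
    q /= dtMinus;
    float w = expf(-chi * q);             // quality weight from eq. (8)

    float ref = cur[dtMinus - 1];         // last value of the current pattern
    for (int dtp = 1; dtp <= dtPlusMax; ++dtp) {
        float a = cur[dtMinus + dtp] - ref;        // current evolution, eq. (9)
        float b = (p[t0 + dtp] - lo) * inv - ref;  // comparison evolution
        float sa = (a > 0.0f) ? 1.0f : (a < 0.0f ? -1.0f : 0.0f);
        float sb = (b > 0.0f) ? 1.0f : (b < 0.0f ? -1.0f : 0.0f);
        xi[(tau - 1) * dtPlusMax + (dtp - 1)] = w * sa * sb;  // partial result
    }
}

The τ̄ × Δt⁺_max array of partial results is then collapsed by the binary-tree reduction, once for the signed terms of (8) and once for their absolute values, and the ratio (10) is formed on the host.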
The pattern conformity for a random walk time series, which exhibits no correlations by construction, is 0. The pattern conformity for a perfectly correlated time series is 1 [50]. A maximum speed-up factor of roughly 10 can be obtained for the calculation of the pattern conformity on the GPU versus the CPU for Δt⁻_max = Δt⁺_max = 20, T = 25 000, χ = 100 and σ = 256 using the 8800 GT. In figure 12, corresponding results for the GTX 280 are shown as a function of the scan interval parameter γ. Here, a maximum acceleration factor of roughly 19 can be realized.
With this method, which is able to detect complex correlations of a time series, it is also possible to search for pattern conformity based complex correlations in financial market data, as shown in figure 13 for the FGBL time series. In figure 13(a), the results for the pattern conformity Ξ^{GPU}_{χ=100}(Δt⁻, Δt⁺) are presented with τ̄ = 16 384, calculated on the GTX 280. One can clearly see that for small values of Δt⁻ and Δt⁺ large values of Ξ^{GPU}_{χ=100} are obtained, with a maximum value of roughly 0.8. For the results shown in figure 13(b), the calculation of the pattern conformity is executed on the CPU, and in figure 13(c), the relative absolute error is shown, which is smaller than two-tenths of a per cent. This small error arises because the GPU device only sums a large number of the weighted values +1 and −1. Thus, the limitation to single precision has no significant negative effect on the result.
This raw pattern conformity profile is dominated by trivial pattern correlation parts caused by the jumps of the price process between the best bid and best ask price—the best bid price is given by the highest limit order price of all buy orders in an order book and, analogously, the best ask price is given by the lowest limit order price of all sell orders in an order book. As performed in [50], there are possibilities for reducing these trivial pattern conformity parts. For example, it is possible to add such jumps around the spread synthetically to a random walk. Let p*_φ be the time series of the synthetically created anti-correlated random walk (ACRW), generated in a Monte Carlo simulation through p*_φ(t) = a_φ(t) + b(t), which was used in sections 3-5 as the synthetic time series. With probability φ ∈ [0; 0.5] an increment a_φ(t + 1) − a_φ(t) = +1 is applied and with probability φ a decrement a_φ(t + 1) − a_φ(t) = −1 occurs. With probability 1 − 2φ, the expression a_φ(t + 1) = a_φ(t) is used. The stochastic variable b(t) models the bid-ask spread and can take the value 0 or 1 in each time step, each with probability 0.5. Thus, by changing φ, the characteristic timescale of process a_φ in comparison to process b can be modified.
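The ACRW defined above is straightforward to generate on the host; a minimal sketch (function and parameter names are ours) is:

#include <random>
#include <vector>

// Anti-correlated random walk p*_phi(t) = a_phi(t) + b(t): process a moves
// +1 or -1 with probability phi each (and stays put otherwise), while b
// models the bid-ask spread by taking the value 0 or 1, each with
// probability 0.5, in every time step.
std::vector<int> make_acrw(int T, double phi, unsigned seed = 42)
{
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    std::bernoulli_distribution spread(0.5);

    std::vector<int> p(T);
    int a = 0;
    for (int t = 0; t < T; ++t) {
        double r = u(rng);
        if (r < phi)          a += 1;     // increment with probability phi
        else if (r < 2 * phi) a -= 1;     // decrement with probability phi
                                          // unchanged with probability 1 - 2*phi
        p[t] = a + (spread(rng) ? 1 : 0); // add the bid-ask jump b(t)
    }
    return p;
}

With φ = 0.044, this process reproduces the anti-correlation of the FGBL series at time lag Δt = 1, as used for the correction below.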
Parts of the pattern-based correlations in figure 13 stem from this trivial negative autocorrelation at Δt = 1. In order to try to correct for this, in figure 14 (an animated visualization can be found in the multimedia enhancements of this publication), the pattern conformity of the ACRW with φ = 0.044, which reproduces the anti-correlation of the FGBL time series at time lag Δt = 1, is subtracted from the data of figure 13(a). Obviously, the autocorrelation at time lag Δt = 1, which is understood from the order book structure, is not the sole reason for the pattern formation conformity shown in figure 13(a). Thus, evidence is obtained that financial market time series show pattern correlation on very short timescales beyond the simple anti-persistence which is due to the gap between bid and ask prices.
Conclusion and outlook
In this paper, we applied the compute unified device architecture—a programming approach for issuing and managing computations on a GPU as a data-parallel computing device—to methods of fluctuation analysis. Firstly, the Hurst exponent calculation performed on a GPU was presented. These results of the scaling behavior of a stochastic process can be obtained up to 80 times faster than on a current CPU core, and the relative absolute error of the results obtained from the CPU and GPU is smaller than 10⁻³. The calculation of the equilibrium autocorrelation function was also migrated to a GPU device successfully and applied to a financial market time series. In this case, acceleration factors up to roughly 84 were realized. In a further part, the pattern formation conformity algorithm, which quantifies pattern-based complex short-time correlations of a time series, was determined on a GPU. For this application, the GPU was up to 24 times faster than the CPU, and the values provided by the GPU and CPU differ only by a relative error of maximally two-tenths of a per cent. Furthermore, we could verify that the current GPU generation is roughly two times faster than the previous one. The presented methods were applied to an FGBL time series of the Eurex, which exhibits an anti-persistent regime on short timescales. Evidence was found that a super-diffusive regime is reached on medium timescales. On long timescales, the FGBL time series complies with random walk statistics. Furthermore, the anti-correlation at time lag one—an empirical stylized fact of financial market time series—was verified. The pattern conformity which is used is the most accurate measure to characterize the short-term correlations of a general time series. It is essentially given by the comparison of subsequences of the time series. Subsequences of various lengths are compared with historical sequences in order to extract similar reactions to similar patterns. The pattern conformity of the FGBL contract exhibits large values of up to 0.8. However, these values also include the trivial autocorrelation property at time lag one, which can be removed by the pattern conformity of a synthetic anti-correlated random walk. Significant pattern-based correlations are still exhibited after this correction. Thus, evidence is obtained that financial market time series show pattern correlation on very short timescales beyond the simple anti-persistence, which is due to the gap between bid and ask prices. Further applications of the GPU-accelerated techniques in the context of Monte Carlo simulations and agent-based modeling of financial markets are certainly well worth pursuing. As already mentioned in the introduction, the main advantage of general-purpose computations on GPUs is that one does not need special-purpose computers. Although GPU computing opens a large variety of possibilities, the recent development of using graphic cards for scientific computing will perhaps also revive special-purpose computing, as GPU implementations are not appropriate for every problem.
|
v3-fos-license
|
2023-07-12T05:06:55.457Z
|
2023-07-03T00:00:00.000
|
259655717
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://downloads.hindawi.com/journals/crid/2023/5597996.pdf",
"pdf_hash": "6ed71b6b1553d324d66bccb0774da7a47a6a71fb",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41854",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "6ed71b6b1553d324d66bccb0774da7a47a6a71fb",
"year": 2023
}
|
pes2o/s2orc
|
Mineral Trioxide Aggregate as an Apexogenesis Agent for Complicated Crown Fractures in Young Permanent Incisor
Traumatic dental injuries are extremely common in children, and trauma to developing permanent teeth can disrupt root maturation; vital pulp therapy is an appropriate treatment for these teeth. This case report describes a 9-year-old boy who suffered dental trauma while playing football, resulting in an enamel–dentin fracture with pulp exposure in the left central incisor with an open apex (Cvek's stage 3) and an enamel–dentin fracture in the right central incisor with an open apex (Cvek's stage 3). Apexogenesis with mineral trioxide aggregate was performed to preserve the neurovascular bundle, allowing normal radicular formation in the left central incisor. During a 2-year follow-up, the tooth showed no signs or symptoms, and radiographic examinations revealed no evidence of radiolucent lesions in the periapical region. This case suggests that mineral trioxide aggregate can be effective in treating traumatic crown fractures accompanied by pulp exposure.
Introduction
A complicated crown fracture refers to a dental injury where there is damage to both the enamel and dentin of a tooth, leading to exposure of the pulp. The prevalence of such fractures can vary between 2% and 13% of all dental injuries [1]. Typically, these injuries occur in newly erupted or young permanent teeth with immature roots [2]. Trauma to teeth with vital pulps and open apices can result in pulpal and periapical diseases. Preserving pulp vitality while allowing for root formation and apical closure is a significant challenge in these cases. Treatment planning is influenced by factors, such as the extent of pulp exposure to the oral environment, the stage of root development (Cvek's stage; Figure 1), and the time between the injury and examination [3].
When dealing with traumatized immature teeth with open apices and pulp exposure, vital pulp therapy is the preferred treatment option. Apexogenesis and other vital pulp therapies are gaining popularity due to several advantages, including shorter appointments and less technique sensitivity. Apexogenesis encourages natural root development, leading to apical closure and strengthening of the root structure [4]. In this method, the coronal portion of the pulp is partially or completely removed, usually up to the canal orifices. The remaining pulp is then capped with a suitable medicament to stimulate hard tissue formation and create a potential seal [5].
For many years, calcium hydroxide has been the primary pulp-capping agent. However, it has certain disadvantages, such as the formation of defects in the dentin bridge beneath the calcium hydroxide layer and increasing the risk of failure [6,7]. In recent years, regenerative endodontic materials have gained popularity. Mineral trioxide aggregate (MTA) has emerged as an effective capping agent for pulp tissue healing [1,8]. It possesses excellent sealing ability, biocompatibility, low cytotoxicity, and the capacity to induce odontoblast-like cells, which contribute to the formation of a hard barrier [9,10]. These cells originate from the differentiation of progenitor cells that are stimulated and accumulate at the site of MTA application [8][9][10].
The objective of this clinical case report is to describe a partial pulpotomy procedure using MTA in a complicated crown fracture of an immature permanent incisor tooth on the left side. Additionally, the report highlights the accelerated apical closure observed in the affected tooth with an open apex during a two-year follow-up period.
Case Presentation
A nine-year-old boy presented to the Department of Pediatric and Preventive Dentistry with a chief complaint of dental fractures. The patient had an Ellis III fracture in the maxillary left central incisor (tooth #21) and an Ellis II fracture in the maxillary right central incisor (tooth #11) due to a traumatic injury sustained while playing football ( Figure 2). The parents promptly brought the child to the department, approximately six hours after the incident occurred. The patient did not experience any spontaneous pain at the time of presentation.
During the intraoral examination, pulp exposure was observed in the maxillary left central incisor (tooth #21) with mild pain upon percussion. No periodontal pockets were detected, and the affected teeth showed class I mobility.
Pulp vitality testing revealed a positive response to cold stimulation. The extent of pulp exposure in tooth #21 was measured to be approximately 2-3 mm. The radiographic examination did not reveal any root fractures or periradicular radiolucency in the regions of teeth #11 and #21, but it did confirm an enamel-dentin fracture in the maxillary right central incisor (tooth #11; Figure 3). According to Cvek's classification, both incisors were determined to be at stage 3 of root development, with two-thirds of the root length present (Figures 1 and 3).
Oral prophylaxis was performed, and the sandwich technique followed by composite buildup was carried out for tooth #11 (Figure 4). For tooth #21, the treatment plan involved a partial pulpotomy procedure, which was explained to the patient and parents. Local anesthesia was administered through the infiltration of 2% lidocaine HCl with 1:100,000 epinephrine. Using a sterile high-speed diamond bur and water irrigation to prevent thermal damage, approximately 2-3 mm of visibly inflamed pulp and adjacent dentin were removed from tooth #21. The access cavity was then rinsed with normal saline, and the coronal pulp tissue was removed until adequate hemostasis was achieved. A moistened sterile cotton pellet was placed over the remaining pulp for 5 minutes. White MTA powder (ProRoot MTA, DENTSPLY, Tulsa, OK, USA) mixed with distilled water was applied to the exposed pulp without pressure. A moistened cotton pellet was gently placed over the MTA to facilitate its setting. After 10 minutes, the MTA was covered with glass ionomer restorative cement (GC Gold Label II, GC Fuji II, Tokyo, Japan), and the patient was discharged (Figures 4 and 5). During the two-week follow-up, the patient remained asymptomatic, with no pain, periodontal pockets, mobility, or sensitivity upon cold testing. The glass ionomer restoration was partially replaced with a direct bonded composite restoration (Figure 6). Clinical and radiographic evaluations were performed at one month, three months, six months, and one year post-treatment, with no symptoms observed. Radiographs showed increased root lengths, accelerated apical closure, complete root growth, increased thickness of the root wall, and the formation of a calcified bridge above the vital pulp (Figure 7). The periodontal ligament space appeared normal in thickness, and the continuity of the lamina dura was observed. No radiolucent lesions were detected during the six-month follow-up or in subsequent examinations conducted over one to two years (Figure 8).
Discussion
Guidelines should support dental health professionals in making decisions and providing the best possible care to their patients. After reviewing several dental articles on traumatic tooth injury, the International Association of Dental Traumatology and the American Academy of Pediatric Dentistry published a guideline that recommended partial pulpotomy in this clinical scenario [12,13]. This recommendation is based on the idea that clinicians should make every effort to preserve pulp vitality in developing teeth to maintain physiologic root development and strengthen tooth resistance [12,13]. According to research, neither the duration of the injury nor the size of the pulpal exposure (<4 mm) affects the outcome of partial pulpotomy with calcium hydroxide dressing [13,14]. The patient in this case is young, with immature roots and open apices, which would lead to a better prognosis. The patient's age can influence the outcome of pulp treatments, as pulp in older patients tends to be more fibrotic and less capable of recovering [15][16][17].
For many years, vital pulpotomy has used calcium hydroxide to induce coagulation necrosis, a low-grade irritation that causes undifferentiated pulp cells to undergo differentiation. These cells produce predentine, which is then mineralized, whereas the coagulated tissue is calcified [18,19]. MTA has been suggested as the material of choice for use in vital partial pulpotomy treatment, similar to calcium hydroxide, because it produces significantly more dentinal bridging in a shorter period of time with significantly less inflammation, and also provides a hard-setting, non-resorbable surface without the presence of tunnels in the dentin barrier [19][20][21]. Furthermore, in the current clinical context, the partial pulpotomy with MTA performed on tooth #21 resulted in faster apical closure, thickening, and root strengthening than in tooth #11. MTA is more efficient at inducing reparative dentinogenesis. One of the reasons for this is that MTA acts as a "calcium hydroxide-releasing material". It is known to stimulate the natural wound-healing process of exposed pulps, which can result in reparative dentin formation [22,23]. In addition to the calcium hydroxide release, in vitro studies have shown that MTA has dentinogenic mechanisms specific to itself [9,23]. MTA can stimulate the cells responsible for hard tissue formation, promoting matrix deposition and mineralization. MTA also possesses several beneficial physical properties over calcium hydroxide. It exhibits good sealing ability, meaning it can effectively seal the exposed pulp from the oral environment. MTA has a lower degree of dissolution, meaning it does not degrade or dissolve as easily as calcium hydroxide. This higher structural stability ensures that MTA can provide a longer-lasting effect. Another noteworthy property of MTA is its ability to interact with phosphate-containing fluids, leading to the spontaneous formation of apatite precipitates [9,22]. This not only explains its biocompatibility and bioactivity but also contributes to its sealing ability [22]. The formation of apatite precipitates helps create a local environment that supports the inherent wound-healing capacity of the pulp. Overall, the capacity of MTA to induce hard tissue repair in exposed pulps is influenced by its ability to maintain an environment conducive to the natural wound-healing process while providing the necessary stimulation for dentin formation. The unique properties of MTA, including calcium hydroxide release, dentinogenic mechanisms, sealing ability, structural stability, and apatite formation, contribute to its effectiveness in promoting reparative dentinogenesis [22]. One of the most frequently mentioned disadvantages of MTA is discoloration. Furthermore, it appears that the primary cause of discoloration is the penetration of blood constituents into porosities within MTA, rather than the type of MTA (grey or white) [23][24][25][26]. MTA powder ingredients, such as ferric oxide, bismuth oxide, and magnesium oxide, may also be responsible for tooth discoloration [25,26]. In the present case, a complicated crown fracture treated with MTA partial pulpotomy demonstrated successful clinical and radiographic outcomes during the follow-up period, owing to MTA's superior physical and biological properties (Figure 8). This outcome can be attributed to the outstanding sealing ability of MTA, which effectively prevents the microleakage of bacteria. In a study conducted by Kararia et al.
[27], a comparison was made between the sealing ability of MTA and retroplast. The researchers concluded that MTA demonstrated superior performance when compared with retroplast.
Conclusion
MTA has been scientifically proven effective in various endodontic procedures. In this study, MTA apexogenesis treatment was completed over 24 months, resulting in a calcified barrier and no need for further treatment. This suggests MTA's suitability as a pulp-capping material. However, it is crucial to note that this conclusion is based on a single case, and longer clinical studies are recommended to obtain better long-term effectiveness and safety data. Healthcare professionals should consider patient factors and case-specific requirements, and consult current research and guidelines when making treatment decisions.
Data Availability
Data supporting this research article are available from the corresponding author or first author upon reasonable request.
Consent
The written informed consent of the patient was obtained and it is mentioned in the article.
Conflicts of Interest
The author(s) declare(s) that they have no conflicts of interest.
|
v3-fos-license
|
2023-05-18T15:18:00.464Z
|
2022-12-22T00:00:00.000
|
258747594
|
{
"extfieldsofstudy": [],
"oa_license": "CCBYSA",
"oa_status": "HYBRID",
"oa_url": "http://jurnal.radenfatah.ac.id/index.php/yonetim/article/download/15200/5253",
"pdf_hash": "63b7b8480a13b6813615c4db7866dfa2344a3a03",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41856",
"s2fieldsofstudy": [
"Business"
],
"sha1": "9a8f44ff48f60070c9d561bfffa356f4990f4bea",
"year": 2022
}
|
pes2o/s2orc
|
MANAGING WORK PRODUCTIVITY THROUGH GLOBAL COMPETENCE
This research was conducted to determine the role of global competence, as practised in the industrial world, in work productivity, with motivation as a moderating variable. It is an applied study using the quantitative, exploratory-predictive Partial Least Squares Structural Equation Modeling (PLS-SEM) method with SmartPLS software. The results show that Global Competence has a significant effect on Work Productivity, while motivation, as a moderator of the effect of Global Competence on work productivity, has only a very small effect (0.817) and no effect at the 0.005 significance level.
INTRODUCTION
Productivity is the key to maintaining competitiveness, both at the organizational and the country level, and to ensuring sustainable socio-economic development; in other words, productivity is the relationship between the quantity of output (goods and services produced) and the quantity of input (i.e., resources such as labor, materials, machinery and energy) used in production ("Handbook on Productivity," n.d.). Productivity is critical to the long-term competitiveness and profitability of an organization (SPRING Singapore, 2011). Labor productivity is a component of economic indicators because it offers a dynamic measure of competitiveness, economic growth and living standards in an economy. It is the measurement of labor productivity that can help explain the main economic underpinnings needed for both social development and economic growth (Myronenko, n.d.). In this sense, productivity is also reflected in work itself: in the attainment of work objectives, placement, treatment, and a good working environment for employees. Productivity is a person's attitude towards his work that reflects pleasant and unpleasant experiences in his work and his hopes for future experiences, which are manifested in emotional attitudes and in work results that are efficient, effective and productive (Bahri, 2016).
Identified motivation occurs when employees accept the values that underlie organizational goals as their own values through identification with them (Marikyan et al., 2022). Morrison et al. (2007) emphasize that not everyone is motivated in the same way, and what motivates one person may not necessarily motivate another, especially if there are differences in cultural or socioeconomic factors (Al-Abbadi and Agyekum-Mensah, 2022).
Global or international competence is often used to describe how a person's attitudes, knowledge, and skills interact with others (international) and across all backgrounds (global) to achieve a shared understanding (Corrales et al., 2021). Global competence is an important skill that must be named and developed as rigorously as academic content (Rensink, 2020a). According to the OECD (Organisation for Economic Co-operation and Development), global competence is the capacity to examine local, global and intercultural issues, to understand and appreciate the perspectives and worldviews of others, to engage in open, appropriate and effective interactions with people from different cultures, and to act for collective well-being and sustainable development. Global competence is defined by the Center for Global Education at the Asia Society as a combination of four domains (Richard Lee Colvin and Edward, 2018).
Source: OECD 2030
In many studies, global competence is widely examined in the education industry, but it has rarely been studied in other industries; among Scopus-indexed journals, from 2022 to date only seven such articles can be found ("scopusresults GC.pdf," n.d.). The other two variables, work productivity and motivation, have been discussed by many previous researchers in various industrial fields such as manufacturing, construction and banking, but for the building material distributor industry specifically, it can be said that this topic has not yet been discussed.
The conceptual framework of this research, with the hypothesis paths H1 and H2, can be seen in the chart below:
Image 1 Research Framework
The hypotheses studied in this research are:
• H1: Global Competence affects Work Productivity.
• H2: The relationship between Global Competence and Work Productivity is strengthened when motivation is good.
2.1. Research Approach
This study uses quantitative methods.
2.2. Types of Research and Data Sources

a. Type of research

This study uses applied research, which aims to find a practical solution to a particular problem. It does not try to develop ideas or theories, but tries to apply research to everyday life; its most distinctive features are a low level of abstraction and effects or results that can be felt immediately ("Applied research," 2019). This applied research also uses an exploratory approach.

b. Data sources

1) Primary data: data obtained from first-hand, individual sources. The primary data of this study were given directly by the respondents through a questionnaire.
2) Secondary data: primary data that have been further processed. The secondary data for this research were collected through websites and/or social media.
c. Research sites
This research was carried out at PT. X, Regional Sumatra, Indonesia, a national-scale distributor of building materials.
2.3. Data collection techniques
This research uses a survey as its data collection method, supplemented by deeper information-gathering techniques, as described below.
1) Questionnaire
This questionnaire is intended to obtain primary data on Global Competence, Motivation and Work Productivity. Alternative answers are scored as follows: 5 for strongly agree, 4 for agree, 3 for neutral, 2 for disagree, and 1 for strongly disagree.
2) Interview
An interview is an information-gathering technique in which researchers put questions to respondents believed to be able to provide valid data. The interviews in this research were conducted face-to-face with several participants to strengthen the primary data. The questions asked were generally unstructured and served only to support the data collection process in detail.
3) Population and Sample
The population in this study was drawn from the sales & operations division of PT. X Regional Sumatra, which consists of nine branches (Palembang, Jambi, Bengkulu, Lampung, Padang, Pekanbaru, Medan, Pangkal Pinang and Belitung), totalling 115 people whose roles range from branch heads and sales managers to sales supervisors and sales teams. The number of samples was determined using the Slovin formula (Hidayat, 2017), giving a sample of 89 people selected by random sampling.
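As a quick arithmetic check on the reported sample size, the sketch below applies Slovin's formula, n = N / (1 + N·e²). The 5% margin of error is our assumption, since the paper does not state the value of e used.

```python
def slovin(population: int, e: float = 0.05) -> float:
    """Slovin's formula: n = N / (1 + N * e**2)."""
    return population / (1 + population * e ** 2)

# Population of 115 sales & operations staff across the nine branches
n = slovin(115)
print(n)         # ~89.32
print(round(n))  # 89, matching the sample size reported in the study
```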
4) Data processing method
The data in this study were analyzed with structural equation modeling (SEM), which can be described as an analysis that combines factor analysis, structural models and path analysis (Hamid and M Anwar, 2019). This research uses the partial least squares (PLS) approach to quantitative analysis. PLS is a variance-based SEM technique designed to handle multiple regression under specific data problems such as small sample sizes, missing values and multicollinearity (Harahap and Pd, n.d.). The questionnaire answers, distributed and collected via Google Forms, were processed using data analysis techniques comprising the outer model test (reliability and validity) followed by the inner model test (R-square, t-values, path coefficients, and goodness of fit, GoF), using SmartPLS 3.2.9.
5) Data analysis

5.1. Indicator test / outer model
The measurement model describes the relationship between latent variables (constructs) and their indicators (Juliandi, 2018). To obtain valid results, SmartPLS 3.2.9 was used in two passes, eliminating indicators with loading factors below 0.5 (Bafadal, 2012). When the first outer loading test produced invalid values (below 0.5), the next step was to eliminate the indicators below 0.5 while retaining a minimum of three indicators per construct, after which the Average Variance Extracted (AVE) was recomputed. As a final step, reliability was evaluated from the composite reliability column of the output: values above 0.7 indicate that the data already have good reliability (Hamid and M Anwar, 2019).
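The pruning-and-AVE logic described above can be sketched as follows. This is a minimal illustration, not the study's actual computation: the indicator names and loading values are invented, since the real loadings come from the SmartPLS 3.2.9 output.

```python
import numpy as np

# Hypothetical outer loadings for one construct (values for illustration only)
loadings = {"GC1": 0.82, "GC2": 0.44, "GC3": 0.71, "GC4": 0.66}

def prune_and_ave(block: dict, threshold: float = 0.5, min_items: int = 3):
    """Drop indicators whose loading is below the threshold, then compute
    Average Variance Extracted (AVE) as the mean of the squared loadings."""
    kept = {k: v for k, v in block.items() if v >= threshold}
    if len(kept) < min_items:
        raise ValueError("fewer than the minimum number of retained indicators")
    ave = float(np.mean([v ** 2 for v in kept.values()]))
    return kept, ave

kept, ave = prune_and_ave(loadings)
print(kept)                # GC2 (0.44) is eliminated
print(f"AVE = {ave:.3f}")  # AVE >= 0.5 is the conventional validity cut-off
```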
Structural Test/ Inner Model
After testing the outer model, a structural test (inner model) is carried out, in which the R-square value obtained from processing the existing data is examined.
RESULTS
From the data processing with SmartPLS 3.2.9, the path coefficients were calculated and the hypotheses tested. Of the two hypotheses proposed by the researcher at a significance level of 0.05, the Global Competence × Motivation interaction was not significant (p = 0.817 > 0.05); thus, at the 0.05 level, Motivation did not moderate the effect of Global Competence on Work Productivity, although the interaction retained a small, non-significant influence. Meanwhile, Global Competence had a significant influence on Work Productivity. The results of hypothesis testing can be seen in the path coefficient table.
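For readers who want to reproduce the moderation test outside SmartPLS, the sketch below runs the same interaction logic as an ordinary least squares regression on synthetic data. The data and coefficients are invented, so only the modeling pattern (testing the Global Competence × Motivation term) mirrors the study; the study itself used PLS, not OLS.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 89                                    # sample size from the study
gc = rng.normal(size=n)                   # Global Competence (standardized)
mot = rng.normal(size=n)                  # Motivation (standardized)
# Synthetic outcome: a direct GC effect, no real moderation
wp = 0.6 * gc + 0.1 * mot + rng.normal(scale=0.8, size=n)

X = sm.add_constant(np.column_stack([gc, mot, gc * mot]))
fit = sm.OLS(wp, X).fit()
# The p-value on the last (gc * mot) coefficient is the moderation test (H2)
print(fit.pvalues)
```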
DISCUSSION
In practice, the term global competence is used mostly in education to develop students' soft skills, yet global competence is also needed for employability in the digital economy (Rensink, 2020b). With the development of the digital era, global competence can be expected to penetrate industries beyond education.
At its root, competence itself strongly influences work productivity (Bahri, 2016). In this study, however, motivation was found to have no effect as a moderating variable between global competence and work productivity. Because this is applied research that only tests predictions without resting on a strong theory, the hypothesis raised should be treated as an initial argument, with evidence accumulating after this research is conducted.
It is hoped that Global Competence will become a variable receiving special attention in further research, with both theoretical and practical impact, so that existing problems relating to global competence, motivation and work productivity in various industrial fields can be addressed.
CONCLUSION
From the results of the study, it can be concluded that global competence is a variable deserving more in-depth study. With a shared understanding of it, parties engaged in various industries can put global competence forward as a focal step to increase work productivity, so that the company's revenue targets, the ultimate hope and the reason a business is founded, can be fulfilled. This is in line with a business blog which states that in the business world the main goal is to make a profit: to maximize profits, entrepreneurs or companies must be able to develop a good marketing strategy and to provide fast and precise information in financial records, neglecting no detail, so as to avoid losses ("Revenue is," n.d.). Hopefully, global competence can be one indicator of increasing work productivity and ultimately have an impact on company revenue.
As for motivation as a moderating variable between global competence and work productivity: in this study it did not reach significance at the 0.05 level, and although it still had an influence, that influence was very small and could not support the hypothesis. Hopefully this can serve as material for further research by other researchers.
Post-translational hydroxylation by 2OG/Fe(II)-dependent oxygenases as a novel regulatory mechanism in bacteria
Protein hydroxylation has been well-studied in eukaryotic systems. The structural importance of hydroxylation of specific proline and lysine residues during collagen biosynthesis is well established. Recently, key roles for post-translational hydroxylation in signaling and degradation pathways have been discovered. The function of hydroxylation in signaling is highlighted by its role in the hypoxic response of eukaryotic cells, where oxygen-dependent hydroxylation of the hypoxia inducible transcription factor both targets it for degradation and blocks its activation. In contrast, the role of protein hydroxylation has been largely understudied in prokaryotes. Recently, an evolutionarily conserved class of ribosomal oxygenases (ROX) that catalyze the hydroxylation of specific residues in the ribosome has been identified in bacteria. ROX activity has been linked to cell growth, and has been found to have a direct impact on bulk protein translation. This discovery of ribosomal protein hydroxylation in bacteria could lead to new therapeutic targets for regulating bacterial growth, as well as shed light on new prokaryotic hydroxylation signaling pathways. In this review, recent structural and functional studies will be highlighted and discussed, underscoring the regulatory potential of post-translational hydroxylation in bacteria.
INTRODUCTION
Of the commonly observed post-translational modifications, post-translational hydroxylation represents the smallest change. However, despite its diminutive nature, this modification may have significant effects on protein production, and we are just beginning to discover its potential as a regulatory mechanism. Since the discovery of enzyme-catalyzed hydroxylation of prolyl residues during collagen biosynthesis, the importance of post-translational hydroxylation of proteins has been well established (Stetten, 1949; Hutton et al., 1966). More recently, roles for protein hydroxylation in cell signaling and degradation pathways have been identified, expanding the significance of this post-translational modification. While important roles for protein hydroxylation have been observed, its full extent, in comparison with other post-translational modifications such as phosphorylation, has yet to be determined (Loenarz and Schofield, 2011). Although not likely to be as widespread as other modifications, new functions of protein hydroxylation are being discovered that indicate it may have a larger role than previously thought.
The post-translational hydroxylation of collagen, one of the most abundant structural proteins in animals, has been extensively studied. The discovery that the hydroxylation of prolyl and lysyl residues in collagen occurs as a result of an oxygenase-catalyzed modification provided the first evidence of the importance of enzyme-catalyzed post-translational hydroxylation (Stetten, 1949; Hutton et al., 1966; Jenkins and Raines, 2002). Three different post-translational hydroxylations are present in collagen: 4R-hydroxy-L-proline, 3S-hydroxy-L-proline and 5R-hydroxy-L-lysine, with 4R-hydroxy-L-proline being the most commonly observed (Myllyharju and Kivirikko, 2001). These hydroxylations are vital for the structure of collagen, which is comprised uniquely of three left-handed helices wound together around a central axis to form a triple-stranded right-handed tertiary structure, with multiple collagen molecules cross-linked to form connective tissues. The stability and strength of this structure depend upon a Gly-Xaa-Yaa repeating motif, where Xaa is typically L-proline and Yaa is typically 4R-hydroxy-L-proline (Engel and Bächinger, 2005). The hydroxylation of proline residues in collagen is catalyzed by procollagen prolyl 3- and 4-hydroxylases (P3H and P4H) to form 3S-hydroxy-L-proline and 4R-hydroxy-L-proline, respectively (Figure 1A; Myllyharju, 2003; Vranka et al., 2004). The hydroxylated 4R-hydroxy-L-proline in the Yaa position is critical for stabilizing the structure, partly through essential hydrogen bonds and partly by the stereoelectric gauche effect (Prockop and Kivirikko, 1995; Myllyharju and Kivirikko, 2004). In contrast to the stabilizing effect of 4R-hydroxy-L-proline, the rare 3S-hydroxy-L-proline has a less defined role, with initial evidence indicating a slight destabilizing effect (Jenkins et al., 2003). Evidence has now been found that this modification mediates inter-helical interactions and helps the assembly of the collagen triple helix (Weis et al., 2010). Mature collagen molecules are assembled by tissue-specific cross-linking of domains flanking the triple-stranded helical domain. This cross-linking is mediated by the presence of hydroxylysine, the formation of which is catalyzed by procollagen lysyl 5-hydroxylase (PLOD; Figure 1B; Yamauchi and Sricholpech, 2012). Defects in all three types of collagen hydroxylation have been linked to a number of diseases, highlighting the importance of this post-translational modification.
FIGURE 1 | 2-oxoglutarate oxygenase catalyzed hydroxylation and its biochemical effects. (A) Hydroxylation of proline residues of collagen by procollagen prolyl 4-hydroxylase (P4H) and prolyl 3-hydroxylase (P3H) increases the structural stability of collagen. (B) Hydroxylation of collagen lysine residues by procollagen lysine 5-hydroxylase (PLOD) provides further stability through creating cross-linking sites. (C) Hydroxylation of HIF proline residues by prolyl hydroxylase domain (PHD) enzymes stabilizes the interaction between HIF and von Hippel Lindau (VHL) protein, targeting HIF for degradation, while asparaginyl hydroxylation of HIF by factor inhibiting HIF (FIH) acts to inhibit transcriptional activity by disrupting the interaction of HIF with p300. Both hydroxylation events result in suppression of the hypoxic response. (D) Lysyl hydroxylation of a variety of proteins by the hydroxylase Jmjd6 has effects on alternative splicing (U2AF65), epigenetic regulation (histones H2A/H2B and H3/H4), and p53 tumor suppressor activity.
Post-translational hydroxylation is now known to be involved, not only in protein structural stability, but also in cellular signaling. The transcription factor hypoxia inducible factor (HIF) is critical for the initiation of the hypoxic response, which occurs when multicellular organisms are subjected to low oxygen levels (Palmer and Clegg, 2014). HIF is a constitutively expressed, heterodimeric protein, comprised of HIF1α and HIF1β subunits (Wang et al., 1995). The HIF1β subunit resides in the nucleus while, under hypoxic conditions, the HIF1α subunit is translocated into the nucleus and the active heterodimer recruits the transcriptional coactivator p300, initiating transcription of genes required for hypoxic response (reviewed in Hewitson and Schofield, 2004). Under normal oxygen levels, the HIF1α subunit is not detectable in the cell implicating oxygen regulated suppression of HIFα lifetime and activity in the cell (Huang et al., 1996). It is now known that post-translational hydroxylation occurs at three separate sites on HIF1α under normoxic conditions. The oxygen dependent post-translational hydroxylation of two critical proline residues by a family of three closely related prolyl hydroxylases (PHD1-3) results in the recruitment of HIF1α to the E3 ubiquitin ligase complex and subsequent degradation of HIF1α ( Figure 1C; Ivan et al., 2001;Jaakkola et al., 2001;Masson et al., 2001;Yu et al., 2001). Another level of HIF inhibition under normal oxygen levels is the post-translational hydroxylation of an asparaginyl residue by the hydroxylase: factor inhibiting HIF (FIH). This hydroxylation blocks the interaction of HIF1α with p300, thus inhibiting transcriptional activity ( Figure 1C; Lando et al., 2002). This hydroxylase-mediated control of the hypoxic response is the first comprehensively described instance of post-translational hydroxylation as a regulatory mechanism.
Following the discovery that HIF activity is regulated by hydroxylation, the possibility of similar systems being regulated by hydroxylation was investigated, revealing post-translational hydroxylation of lysine residues in the splicing factor U2 small nuclear ribonucleoprotein auxiliary factor 65-kDa subunit (U2AF65) catalyzed by the FIH-related hydroxylase Jumonji domain 6 protein (Jmjd6; Webby et al., 2009). Jmjd6 was originally identified as a histone arginine demethylase (Chang et al., 2007); however, large-scale analysis of Jmjd6-interacting proteins revealed that its predominant function is lysyl hydroxylation (Figure 1D; Webby et al., 2009). Hydroxylation of U2AF65 was found to change alternative RNA splicing for some genes, indicating a role in the regulation of gene splicing, with knockdown of Jmjd6 resulting in splicing patterns similar to those observed under hypoxic conditions (Webby et al., 2009). Jmjd6 is now known to regulate alternative gene splicing through interaction with a number of splicing factors, as well as the pre-RNA itself, though the exact conditions and physiological role for hydroxylation have yet to be identified (Heim et al., 2014). Jmjd6 was subsequently found to also hydroxylate lysine residues on the histones H2A/H2B and H3/H4 (Unoki et al., 2013). The hydroxylated lysines were found to inhibit both N-acetylation and N-methylation of histone peptides in vitro. Conversely, both N-acetylation and N-methylation of lysine residues blocked Jmjd6-catalyzed hydroxylation. Combined, these results suggest a role for post-translational histone hydroxylation in the epigenetic regulation of gene expression and chromosomal rearrangement. Most recently, Jmjd6 was found to catalyze the lysyl hydroxylation of the tumor suppressor p53, decreasing p53 activity and promoting colon carcinogenesis (Wang et al., 2014). Jmjd6 hydroxylation of p53 reveals a connection between oxygen levels, cell cycle control, and apoptosis.
PROTEIN HYDROXYLASES
Post-translational hydroxylation involves the oxidative conversion of a C-H bond to a C-OH group on an amino acid side chain. Protein hydroxylases, the enzymes responsible for catalyzing this conversion, belong to the 2-oxoglutarate (2OG)/Fe2+-dependent oxygenase (2OG oxygenase) superfamily of proteins (Loenarz and Schofield, 2011). The majority of these enzymes use an Fe2+ cofactor with 2OG and dioxygen as co-substrates. 2OG oxygenases are widely distributed, evolutionarily conserved enzymes involved in many biologically important processes such as DNA repair, protein modification, lipid metabolism, and secondary metabolite production in plants and microbes. The enzymes catalyze a diverse array of oxidative reactions, including desaturation, ring formation or expansion, epimerization, and carbon-carbon bond cleavage. However, the most common reaction they catalyze is hydroxylation (Hausinger, 2004). 2OG oxygenases are known to catalyze the hydroxylation of a variety of substrates ranging from small molecules to macromolecules, including both proteins and DNA.
Proteins belonging to the 2OG oxygenase family are identified by a conserved HXD/E(X)nH sequence motif, in which the histidine and aspartate/glutamate residues coordinate the metal cofactor (Clifton et al., 2006). The three-dimensional structures of many 2OG oxygenases, including a number of post-translational hydroxylases, have been determined. All 2OG oxygenases have a conserved core structure of eight β-strands, which form two anti-parallel β-sheets that come together in a right-handed double-stranded β-helix (DSBH) or jelly roll (Figure 2; Roach et al., 1995; Clifton et al., 2006). The conserved sequence motif is found within the DSBH and marks the location of the enzyme active site where Fe2+ is brought together with the substrates. The DSBH forms a very robust active site, allowing for the accommodation of the different substrates of the 2OG oxygenases and for the catalysis of complex oxidative reactions. Structural data, combined with kinetic and spectroscopic analyses, suggest that the 2OG oxygenases share a common enzymatic mechanism wherein the Fe2+-bound enzyme interacts with 2OG, triggering reaction with dioxygen, which leads to the formation of a ferryl intermediate that acts as a reactive oxidizing species formed upon oxidative decarboxylation of 2OG (Holme, 1975; Chowdhury et al., 2014). The ferryl intermediate is then poised to react with the substrate. This is where the mechanisms diverge depending on the type of reaction being catalyzed, accounting for the breadth and versatility of this class of enzyme.
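As a simple illustration, the HXD/E(X)nH motif can be searched for in a protein sequence with a regular expression. This is only a sketch: the 10-200 residue spacer length between the HXD/E pair and the distal histidine is our assumption for illustration, as the true spacing varies between family members, and the toy sequence is invented.

```python
import re

# 2-His-1-carboxylate facial triad: His, any residue, Asp/Glu, then a
# variable-length spacer (assumed 10-200 residues here) and a distal His.
MOTIF = re.compile(r"H.[DE].{10,200}H")

def find_facial_triad(seq: str):
    """Return (start, end) spans of candidate HXD/E(X)nH motifs."""
    return [m.span() for m in MOTIF.finditer(seq)]

# Toy sequence for illustration only
toy = "MKTHAD" + "G" * 50 + "HLLK"
print(find_facial_triad(toy))  # one candidate span
```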
RIBOSOMAL OXYGENASES
With the discovery of the importance of hydroxylation of HIF and splicing-related proteins by 2OG oxygenases in eukaryotes, the question was raised of whether oxygenase-catalyzed post-translational hydroxylation has a role in prokaryotic cells. The Escherichia coli gene of unknown function, ycfD, was identified as a potential 2OG oxygenase, which was confirmed by the observation of YcfD binding 2OG as a co-substrate (van Staalduinen et al., 2014) and catalyzing 2OG turnover in the absence of substrate (Ge et al., 2012). A peptide screen combined with co-immunoprecipitation analyses revealed that YcfD hydroxylates the β carbon of arginine 81 of ribosomal protein L16 (Rpl16; Ge et al., 2012). Interaction with Rpl16 was independently confirmed and shown to be highly specific by glutathione S-transferase pull-down experiments (van Staalduinen et al., 2014). Consistent with the close link between translation and growth, alteration of YcfD expression has been shown to have dramatic effects on cell growth. Comparison of wild-type cell growth to that of a strain lacking the ycfD gene (ΔycfD) showed that, under normal conditions, there was no difference between the two cell lines. However, under nutrient-limiting conditions, the growth of ΔycfD cells was significantly reduced, which correlated with a three- to fourfold reduction in bulk protein translation (Ge et al., 2012). Overexpression of YcfD was also shown to significantly inhibit E. coli colony formation under standard growth conditions, indicating a clear role for YcfD in E. coli cell growth regulation (van Staalduinen et al., 2014). Two human homologs of YcfD, Mina53 and NO66, were also identified to hydroxylate ribosomal proteins and have similar effects on cell proliferation (Tsuneoka et al., 2002; Teye et al., …).

FIGURE 2 | Structure conservation of ROX enzymes. A structural comparison of (A) the bacterial ROX YcfD (PDB ID 4NUB) to (B) the eukaryotic ROX Mina53 (PDB ID 4BU2). The structures align with an overall RMSD of 2.6 Å. The characteristic DSBH is shown in yellow, and the dimerization domain (dimer) and the winged-helix domain (WH) are labeled. (C) Structure-based sequence alignment of the DSBH from YcfD and Mina53. Conserved metal-binding residues are marked with a red asterisk and the 2OG-binding residues are marked with a green asterisk. Alignment was done using Dalilite (Holm and Park, 2000). (D) Binding of YcfD to Rpl16 peptide. The surface of the Rhodothermus marinus YcfD (PDB ID 4CUG) active site is shown with an Rpl16 peptide (green) and the 2OG analog N-oxalylglycine (NOG; orange) bound. The site of hydroxylation is marked with an asterisk (*). Below, a schematic hydroxylation of Rpl16-R81 by YcfD is shown.
Structural studies of YcfD and other ROX enzymes showed that they are comprised of three domains: an N-terminal DSBH, followed by a dimerization domain and a C-terminal winged-helix domain (WH; Figures 2A,B; Chowdhury et al., 2014; van Staalduinen et al., 2014). The N-terminal DSBH displays the characteristic topology of a stereotypical 2OG oxygenase. Despite overall low sequence homology (15% homology to Mina53 and NO66), YcfD is structurally very similar to the eukaryotic ROX enzymes (YcfD-Mina53 RMSD 2.6 Å). The DSBH is particularly well conserved, with the residues involved in metal and 2OG binding conserved (Figure 2C), while the dimerization and WH domains show much lower conservation. The ROX active site is found in a pocket within the DSBH, and the substrate of YcfD, Rpl16, was shown to dock to the active site in a complementary manner (van Staalduinen et al., 2014). Co-crystallization of peptide substrates with the ROX enzymes provides more detailed insight into the interaction (Chowdhury et al., 2014). There are very minor changes observed when the Rpl16 peptide is bound by YcfD; the overall structure remains largely unchanged, with only a few residues in the active site shifting to accommodate the substrate. The arginine side chain to be hydroxylated sits deep in the active site with the β-carbon aligned with 2OG, in an ideal geometry for hydroxylation (Figure 2D). The surface of the area surrounding the active site of YcfD is intimately involved in binding of the substrate, with a number of clefts allowing docking of side chains to the surface of the enzyme. The dimerization domain is comprised of three α-helices which form intimate contacts with the dimerization domain of another molecule and have been shown to be important for catalytic activity (Chowdhury et al., 2014). The C-terminal WH distinguishes the ROX proteins from other 2OG oxygenases. Typically, WHs mediate protein-protein or protein-nucleic acid interactions (Teichmann et al., 2012); in this case, it is unlikely that the ROX proteins bind nucleic acids directly due to the overall negative charge of this domain (Chowdhury et al., 2014). Instead, it is likely that this essential domain plays a role in substrate binding, either binding substrate directly or interacting with another part of the ribosomal complex.
REGULATORY POTENTIAL OF ROX IN PROKARYOTES
The substrate of YcfD, Rpl16, is an essential late-assembly component of the 50S ribosomal subunit and is responsible for the architectural organization of the aminoacyl-tRNA binding site (Nierhaus, 1991). A loss of Rpl16 has been associated with defects in stages of both ribosomal assembly and function, including maturation of the 50S subunit (Jomaa et al., 2014), binding of the 30S subunit (Kazemie, 1975), association with aminoacyl-tRNA (Kazemie, 1976), peptidyl-tRNA hydrolysis activity (Tate et al., 1983), peptidyl transferase activity (Moore et al., 1975;Hampl et al., 1981), as well as antibiotic interactions (Nierhaus and Nierhaus, 1973;de Bethune and Nierhaus, 1978;Teraoka and Nierhaus, 1978). The structure of Rpl16 has been determined by NMR (PDB ID: 1WKI), revealing that the hydroxylation site is on an extended, flexible loop that becomes locked upon binding to the 23S rRNA, as observed in crystal structures of the bacterial ribosome (Harms et al., 2001;Nishimura et al., 2004;Dunkle et al., 2010). The site of YcfD hydroxylation, R81, is inserted between two 23S rRNA helices in the intact ribosome, indicating a role for the hydroxyl group in stabilizing the architecture of the aminoacyl-tRNA binding site through hydrogen bonding and, ultimately, in the spatial optimization of the Rpl16-rRNA complex. There is evidence that YcfD binds very specifically to Rpl16, and is capable of pulling down Rpl16 in the absence of other ribosomal proteins. In addition, Rpl16 plays a role as a late ribosomal assembly protein, which indicates a potential function for YcfD in sequestering Rpl16 prior to its addition to the maturing ribosome, thus ensuring proper assembly of the ribosome. The overall importance of Rpl16 in the competency of the bacterial ribosome, combined with the fact that it is the target of a number of antibiotics, indicate that hydroxylation of Rpl16 by YcfD may play a role in the regulation of protein translation and, consequently, in bacterial cell growth.
2-oxoglutarate oxygenases have been found to provide a link between metabolism and transcriptional regulation via evidence that oxygenases involved in transcriptional regulation are inhibited by increased amounts of tricarboxylic acid cycle intermediates or 2-hydroxyglutarate in tumor cells (Ge et al., 2012;Mullen and DeBerardinis, 2012). A similar relationship between metabolism and translation through regulation of ROX activity may exist and investigation into the effects of metabolic molecules on ROX activity could lead to an understanding of this relationship. This connection between metabolism and translational regulation seems very intuitive, particularly for bacteria, as cell growth would need to decrease in response to limited nutrition and, conversely, under nutrient rich conditions the cells do not need to limit their growth. The activity of the ROX enzymes, like that of other 2OG oxygenases, was also found to be limited under hypoxic conditions (Ge et al., 2012). This loss of YcfD activity under anaerobic conditions suggests a regulatory role for the hydroxylase under hypoxic stress, resulting in reduced translation and subsequent loss of cell growth. As the connection between ROX enzyme activity and cell growth is better understood, there is opportunity for the development of new antibiotics which target YcfD, the YcfD-Rpl16 complex, or hydroxylated Rpl16.
CONCLUDING REMARKS
Post-translational hydroxylation, though well characterized in eukaryotes, remains understudied in prokaryotes. The discovery that YcfD is a bacterial ROX responsible for the hydroxylation of an essential component of the bacterial ribosome highlights the potential of post-translational hydroxylation as an important bacterial regulatory mechanism. The sensitivity of hydroxylases to alterations in metabolism and to hypoxic conditions makes them ideal candidates for regulating the bacterial cell response to changes in the environment. Investigation of other putative 2OG oxygenases could elucidate novel post-translational hydroxylation regulatory pathways in prokaryotes, as well as uncover novel therapeutic targets.
Sex-Specific Features of the Correlation between GWAS-Noticeable Polymorphisms and Hypertension in Europeans of Russia
The aim of the study was to examine the sex-specific features of the correlation between genome-wide association studies (GWAS)-noticeable polymorphisms and hypertension (HTN). In two groups of European subjects of Russia (n = 1405 in total), men (n = 821 in total: n = 564 HTN, n = 257 control) and women (n = 584 in total: n = 375 HTN, n = 209 control), the distribution of ten specially selected polymorphisms (with confirmed GWAS-level associations with blood pressure (BP) parameters and/or HTN in Europeans) was considered. The list of studied loci was as follows: (PLCE1) rs932764 A > G, (AC026703.1) rs1173771 G > A, (CERS5) rs7302981 G > A, (HFE) rs1799945 C > G, (OBFC1) rs4387287 C > A, (BAG6) rs805303 G > A, (RGL3) rs167479 T > G, (ARHGAP42) rs633185 C > G, (TBX2) rs8068318 T > C, and (ATP2B1) rs2681472 A > G. The contribution of individual loci and their inter-locus interactions to HTN susceptibility, with bioinformatic interpretation of associative links, was evaluated separately in the men's and women's cohorts. Men-women differences in the involvement of the BP/HTN-associated GWAS SNPs in the disease were detected. Among women, the HTN risk was associated with HFE rs1799945 C > G (genotype GG was risky; OR_GG = 11.15, p_perm(GG) = 0.014) and with inter-locus interactions of all 10 examined SNPs as part of 26 intergenic interaction models. In men, the polymorphism BAG6 rs805303 G > A (genotype AA was protective; OR_AA = 0.30, p_perm(AA) = 0.0008) and inter-SNP interactions of eight loci in only seven models were found to be HTN-correlated. HTN-linked loci and strongly linked SNPs were characterized by pronounced polyvector functionality in both men and women; at the same time, the signaling pathways of HTN-linked genes/SNPs in women and men were similar and were represented mainly by immune mechanisms. As a result, the present study has demonstrated a more pronounced contribution of BP/HTN-associated GWAS SNPs to HTN susceptibility (due to weightier intergenic interactions) in European women than in men.
Introduction
HTN belongs to the group of the most common human diseases, whose characteristic feature is high BP [1]. According to statistical reports, worldwide over the past decades (from 1990 to 2015) the number of persons with a systolic BP (SBP) of 140 mm Hg and above rose by 18.59% (from 17,307 to 20,526 per 100 thousand population), and the mortality and disability-adjusted life-years (DALYs) rates associated with an SBP of 140 mm Hg and higher increased by 8.58% (from 97.9 to 106.3 per 100,000 people) and 49.11% (from 95.9 million to 143.0 million), respectively [2]. A rise in SBP by 10 mm Hg leads to […]. Based on these results, the aforementioned features were introduced into the genetic calculations as covariates in both men and women (Model 1). There were also differences between patients and controls (both in men and women) in such parameters as low physical activity and high fatty-food consumption (in patients, these indicators were significantly higher, p < 0.001). In order to assess the effect of these lifestyle (physical activity) and diet (fatty-food consumption) indicators on genetic associations, we additionally included them in the analysis as confounders in Model 2.
The SNP allele and genotype frequencies in the HTN and HTN-free groups of men (Table S1) and women (Table S2) were in Hardy-Weinberg (H-W) equilibrium, p_Bonferroni ≥ 0.025 (the Bonferroni correction accounted for the number of comparison groups studied, n = 2 [men and women]).
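As an illustration of the H-W check described above, a minimal chi-square sketch follows. The genotype counts are invented for illustration only; the study's actual counts are in Tables S1 and S2.

```python
from scipy.stats import chi2

def hwe_p(n_aa: int, n_ab: int, n_bb: int) -> float:
    """One-degree-of-freedom chi-square test for Hardy-Weinberg equilibrium."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)            # frequency of the first allele
    q = 1 - p
    expected = [p * p * n, 2 * p * q * n, q * q * n]
    observed = [n_aa, n_ab, n_bb]
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return chi2.sf(stat, df=1)

pval = hwe_p(380, 160, 24)       # illustrative counts for one SNP in one group
alpha = 0.05 / 2                 # Bonferroni-adjusted alpha for the two strata
print(pval, pval >= alpha)       # True -> no departure from H-W equilibrium
```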
Men-women differences in the involvement of the BP/HTN-associated GWAS SNPs in the disease were detected (both in Model 1 and Model 2). According to Model 1, among men the polymorphism BAG6 rs805303 G > A was found to be HTN-correlated (genotype AA was protective; OR_AA = 0.30; p_AA = 0.0008; p_perm(AA) = 0.0008 [recessive model]; statistical power = 98.75%) (Table 2). In women, the HTN risk was associated with HFE rs1799945 C > G (genotype GG was risky; OR_GG = 11.15; p_GG = 0.011; p_perm(GG) = 0.014 [recessive model]; statistical power = 99.99%) (Table 2). Almost identical results were obtained in Model 2: the SNP BAG6 rs805303 G > A was HTN-associated in men (OR_AA = 0.31; p_AA = 0.001; p_perm(AA) = 0.001; statistical power = 98.43% [recessive model]) (Table 2) and the locus HFE rs1799945 C > G was HTN-involved in women (OR_GG = 10.96; p_GG = 0.012; p_perm(GG) = 0.014; statistical power = 99.99% [recessive model]) (Table 2). The great similarity between the results of Model 1 and Model 2 may be due to the fact that the effects of the additional confounders included in Model 2 (low physical activity and high fatty-food consumption) have apparently already been "taken into account" in the HTN effects of the Model 1 confounders. For example, high consumption of fatty foods (a Model 2 confounder) directly determines the body's lipid profile, i.e., TC, TG, LDL-C and HDL-C levels (Model 1 confounders), and low physical activity (a Model 2 confounder) largely correlates with BMI (a Model 1 confounder). Accordingly, at the next stage of the work, when analyzing the associations of inter-locus interactions with HTN, we used the list of Model 1 confounders.
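The recessive-model odds ratios reported above can be reproduced from a 2×2 table (homozygous risk-genotype carriers vs. all others). The counts below are hypothetical, chosen only so that the example returns an OR near the reported 0.30 for BAG6 rs805303 in men; the study's actual counts are in its tables.

```python
import math

def recessive_or(case_hom, case_other, ctrl_hom, ctrl_other):
    """Odds ratio and 95% CI for a recessive model (homozygous risk genotype
    vs. the two other genotypes pooled)."""
    or_ = (case_hom * ctrl_other) / (case_other * ctrl_hom)
    se = math.sqrt(1 / case_hom + 1 / case_other + 1 / ctrl_hom + 1 / ctrl_other)
    ci = (or_ * math.exp(-1.96 * se), or_ * math.exp(1.96 * se))
    return or_, ci

# Hypothetical AA vs. (GA + GG) counts: 564 HTN men, 257 control men
print(recessive_or(20, 544, 28, 229))  # OR ~ 0.30 with its 95% CI
```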
Note: the results were obtained using the MB-MDR method with adjustment for covariates (Model 1). NH, number of significant high-risk genotypes in the interaction; beta H, regression coefficient for high-risk exposure in the step 2 analysis; WH, Wald statistic for the high-risk category; NL, number of significant low-risk genotypes in the interaction; beta L, regression coefficient for low-risk exposure in the step 2 analysis; WL, Wald statistic for the low-risk category; p_perm, permutation p-value for the interaction model (1,000 permutations).
Intended Functionality of HTN-Associated SNPs in Men and Women Cohorts
In this section of our work, we evaluated the presumed functionality of all 10 examined GWAS loci and 125 strongly linked SNPs associated with HTN in women (in total, information about 135 loci was considered), and of 8 GWAS loci and 96 LD SNPs correlated with HTN in men (data on 104 loci were studied).
Epigenetic Changes of DNA Determined by HTN-Related Loci
Haploreg epigenomic annotations were used to identify regulatory variants among HTN-related loci in men (n = 104) and women (n = 135). Proposed functionality was found for the vast majority of the analyzed loci [100/104, 96.15% (men); 130/135, 96.29% (women)], and these loci were located in such functionally active DNA sequences as enhancers […] (Table S6).
Strongly coupled loci also have significant eQTL potential in the above-mentioned HTN target organs (Table S10).
Splicing Quantitative Traits (sQTL) Associated with HTN-Significant SNPs
Presumable splicing regulation by the investigated heritable DNA variations was detected. Tissue-specific SNP-splicing associations were recognized for five HTN-causal loci [CERS5 rs7302981 G > A, HFE rs1799945 C > G, OBFC1 rs4387287 C > A, BAG6 rs805303 G > A, TBX2 rs8068318 T > C] and 65 of 96 proxy SNPs (67.71%) in men, and for six HTN-causal polymorphisms [CERS5 rs7302981 G > A, HFE rs1799945 C > G, OBFC1 rs4387287 C > A, BAG6 rs805303 G > A, TBX2 rs8068318 T > C, ATP2B1 rs2681472 A > G] and 67 out of 125 linked loci (53.60%) in women (Tables S11 and S12). The HTN-significant sQTL-dependent gene lists in men (78 genes) and women (79 genes) show an impressive likeness owing to the similarity of the sQTL-correlated polymorphism lists (five of six loci were the same); the shared sQTL-dependent genes include BAG6, ATF6B, ATF1 and AIF1 (Tables S11 and S12). In women, only one additional sQTL-dependent gene (POC1B-AS1) was added to this list.
Multiple SNP-sQTL connections, quite essential for HTN pathophysiology, were identified, manifesting in organs that are targets for HTN, such as the heart (rs805303, among others; Table S11). More than 60 linked loci also exhibit their sQTL effects in target organs, including the heart (ten genes such as TBX2-AS1, TBX2, STK19B and RP11-332H18.[…]; Table S12).
HTN-Associated Gene Pathways
As a result of the evaluation of the functionality of HTN-correlated loci (135 SNPs in women [10 causal loci and 125 strongly linked SNPs] and 104 SNPs in men [8 causal loci and 96 LD SNPs]), 159 genes were found to be involved in HTN susceptibility in women and 147 genes in men. The substantial similarity of the HTN-involved gene lists in men and women determines almost identical biological pathways [in total, using Gene Ontology enrichment analysis tools, more than 140 different pathways were identified in both men (n = 145, Table S13) and women (n = 141, Table S14)], largely represented by pathways associated with many different immune reactions/processes. In both men and women, two biological pathways had the highest statistical significance: MHC (major histocompatibility complex) protein (PANTHER Protein Class ID 00149) (p(fdr) = 5.30 × 10⁻¹¹ in men and 8.33 × 10⁻¹¹ in women) and antigen processing & presentation (PANTHER Slim Biological Process ID 0019882) (p(fdr) = 6.00 × 10⁻¹⁰ in men and 9.44 × 10⁻¹⁰ in women) (Tables S13 and S14).
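The p(fdr) values quoted above are false-discovery-rate adjusted p-values. Assuming a standard Benjamini-Hochberg adjustment (the usual choice in enrichment tools, though the paper does not name the procedure), the calculation can be sketched as follows.

```python
import numpy as np

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p-values."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)                 # indices of p sorted ascending
    adj = np.empty(m)
    running_min = 1.0
    for rank, idx in enumerate(order[::-1]):
        i = m - rank                      # 1-based rank, largest p first
        running_min = min(running_min, p[idx] * m / i)
        adj[idx] = running_min            # enforce monotonicity of adjusted p
    return adj

# Illustrative raw p-values only, not the study's full enrichment output
print(benjamini_hochberg([5.3e-11, 6.0e-10, 0.003, 0.04, 0.2]))
```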
The estimates of the mechanisms of intergenic interactions of HTN-significant genes in men and women, derived with the help of the Genemania bioinformatic resource, also turned out to be almost the same in both men (Table S15) and women (Table S16).
Discussion
The present study identified the men-women differences in Europe in the HTN involvement of the BP/HTN-associated GWAS SNPs. Among women, the HTN risk was determined by HFE rs1799945 C > G and inter-locus interactions of all 10 examined SNPs as part of 26 intergenic interactions models, whereas in men, the locus BAG6 rs805303 G > A and inter-SNPs interactions of eight loci in only seven models were correlated with HTN. The strongly pronounced functionality of HTN-correlated loci (135 SNPs in women [10 causal loci and 125 strongly related SNPs] and 104 SNPs in men [8 causal loci and 96 LD SNPs]) determines the involvement of 159 genes in women and 147 genes in men in the disease susceptibility architecture. A significant similarity in the list of genes involved in HTN in men and women determines almost identical signaling pathways (mainly due to immune mechanisms) in them.
The results of our study showed that the presence of the GG genotype (HFE rs1799945) in a woman increases the chance of developing HTN by more than 10 times (OR = 11.15). The HTN-dangerous (high BP) effect of allele G and the HTN-safe (low BP) value of allele C of the HFE rs1799945 C > G polymorphism have been detected in earlier studies (GWAS and others) [9,18,[66][67][68][69][70]], which is fully compatible with our data in the women's cohort.
The polymorphism HFE rs1799945 C > G and its seven proxy loci exhibit pronounced functionality in relation to fifteen genes (U91328.19; RP11-457M11.5; HIST1H4C; ALAS2; SLC17A3; HIST1H1T; BTN2A3P; SLC17A1; HIST1H2AC; GUSBP2; ZNF322; HFE; HIST1H3E; TRIM38; HIST1H2BC) (our in silico data) and are of paramount importance in the regulation of iron metabolism (serum concentrations of iron-status biomarkers such as iron, transferrin, ferritin and transferrin saturation, and total iron-binding capacity) (literary GWAS data [71,72]) and in related HTN-important metabolic pathways (hemoglobin concentration, red cell parameters, glucose homeostasis, glycated hemoglobin levels, etc.) [72][73][74][75][76][77][78][79]. SNP HFE rs1799945 C > G has also been correlated with medication agents acting on the renin-angiotensin system [80]. Importantly, Gill et al., using a Mendelian randomization analysis of GWAS (48,972 Europeans/Genetics of Iron Status Consortium) and PheWAS (424,439 Europeans/UK Biobank) summary data, found a causal link between genetically determined serum iron levels and the hazard of hypercholesterolemia and anemia [81]. The risk value of hypercholesterolemia for HTN (and for cardiovascular diseases in general) is well known [6,82] and was also revealed in the sample studied in this work. Thus, data from the literature and the material of this study display visible HTN-relevant pleiotropic effects of HFE rs1799945 C > G.
In the studied men, the genotype AA of BAG6 rs805303 G > A dramatically reduced the danger of HTN (OR = 0.30). In two earlier GWAS, allele G of rs805303 was HTN/high-BP-risky and allele A was linked with low BP [18,76]. Higher systolic/diastolic BP in Ugandan adolescents carrying allele G of rs805303 was detected by Lule et al. [83]. Thus, the same orientation of the allelic variants of BAG6 rs805303 G > A is associated with HTN/BP in our study and in previously performed works. According to the in silico study results, the locus (with ten proxy SNPs) is a major regulator (a so-called "master regulator") of epigenetic/expression/splicing traits at fifty-five genes, including immune system genes (e.g., the HLA, LY6 and HSP gene clusters) strongly correlated with HTN [84][85][86][87][88] (detailed information about the connection of these immunity-important genes with HTN is given below).
Large-scale epidemiological studies indicate visible men-women differences in BP [2,6,8,82,89,90]; these differences are more noticeable in high-income countries and in Central/Eastern Europe than in other regions [89]. It is believed that the incidence of HTN in men is 1.2 times higher than in women, and the DALYs linked with SBP ≥ 140 mmHg and SBP ≥ 110-115 mmHg are greater by roughly 1.4 times (1.38 and 1.44 times, respectively) [2,6]. One reason for the higher incidence of HTN in men may be the greater prevalence among them of such cardiovascular (including HTN) risk factors as a diet high in sodium and low in fruit, smoking, alcohol and drug use, and high fasting blood glucose levels [8]. Thus, more pronounced exposure to environmental risk factors in men than in women may, on the one hand, carry independent HTN risk and, on the other, significantly modify the realization of a hereditary predisposition to the disease. A significant role of gene-environment interactions (SNPs with smoking or alcohol intake) in candidate gene polymorphism associations with HTN, including loci considered in this work (ARHGAP42 rs633185 C > G, HFE rs1799945 C > G, AC026703.1 rs1173771 G > A), was shown in previously conducted GWAS [9,10,65]. It should be noted that a number of the above-mentioned HTN risk factors involved in disease-significant SNP-environment interactions (according to the GWAS data) were also registered in the studied sample of patients, both men and women (higher blood glucose values, smoking, hypercholesterolemia, etc.).
Together with this, an important role in HTN susceptibility is played by sex hormones, which, firstly, are directly involved in regulating vasoconstriction/vasodilation [90][91][92][93]; secondly, have a pronounced influence on a number of cardiovascular/HTN risk factors (distribution of adipose tissue in the body, development of obesity and metabolic syndrome, formation of obesity-dependent/independent insulin resistance, etc.) [7]; and thirdly, can be significant "modifiers" of the phenotypic manifestation of potential genetic determinants of HTN by modulating various nuclear and extra-nuclear pathways that control the expression of multiple genes, post-translational modifications of protein molecules, various HTN-relevant signaling pathways, etc. [94,95]. Evidently, estrogens lower BP in premenopausal women and, accordingly, have a protective effect on HTN development [90][91][92][93]. Estrogens realize their HTN-protective phenotypic effects through vasoconstriction/vasodilation mechanisms by regulating the renin-angiotensin-aldosterone system and the production of catecholamines, endothelins and angiotensin II [91][92][93]. After the menopause, the HTN-protective effects of estrogen in women decrease and the risk of raised BP increases [90]. It is assumed that testosterone, by increasing the activity of the renin-angiotensin-aldosterone system, promotes the development of oxidative stress, leading to increased production of vasoconstrictors and decreased effects of vasodilators (nitric oxide), which predetermines a higher blood pressure level and, accordingly, a higher HTN risk in men [92]. With age, testosterone levels in men decrease; however, the HTN risk does not decrease as expected, but rather increases due to a weakening of the regulatory effect of testosterone on adipose tissue (suppression of adipocyte proliferation, decreased stromal vascular growth, androgen receptor deficiency, etc. [7,96]), which, in turn, raises the risk of abdominal (visceral) obesity in men [7,97,98]. Thus, an age-dependent decrease in testosterone levels in men increases the risk of visceral obesity and thereby increases the HTN risk [7]. The data on the effect of testosterone on the development of obesity in women are ambiguous; there is evidence of a connection between high testosterone levels and both low visceral fat [99] and high fat content [100].
There is convincing evidence that sex-specific differences in the functioning of the autonomic nervous system, and related features of the body's immune status/reactions, correlate with HTN risk [90,101]. It has been recorded that the activity of the sympathetic nervous system is increased in women, compared with men, with age and also with obesity [90]. A change in the activity of the sympathetic nervous system has a direct regulatory effect on T cells, which in turn activates various signaling pathways of innate and adaptive immunity (production of various pro-inflammatory cytokines, pro-/anti-inflammatory cytokine signaling, interferon-γ-mediated reactions, activation of natural killer cells and monocytes, vascular inflammation, etc.) [85,87,101,102]. Along with the sympathetic nervous system, sex hormones are also involved in regulating the molecular mechanisms of the innate/adaptive immune system (estrogens have an immunostimulating effect while androgens have an immunosuppressive effect) [95]. Interestingly, the materials we derived from bioinformatic data analysis indicate the paramount importance of immune mechanisms/processes (such as MHC proteins, antigen processing and presentation, and more than 100 other different immune pathways) in HTN susceptibility in both men and women.
Our in silico data indicate a connection of HTN with a number of genes that control the organism's immune responses. These include genes of the HLA system (HLA-DRB6; HLA-DRB5; HLA-DRB1; HLA-DQA1; HLA-S; HLA-B), HSP (heat shock protein) genes (HSPA1A; HCP5; HSPA1B) and LY6 (lymphocyte antigen 6) genes (LY6G5C; LY6G5B; LY6G6C; LY6G6E; LY6G6D; LY6G6F), among others, whose expression/splicing is regulated by HTN-associated polymorphisms in various organs, including disease targets (heart/aorta/coronary and other arteries). It is believed that the HLA background shapes the T cell receptor (TCR) repertoire by neglecting/favoring specific T cell subgroups represented by T cell receptor variable beta chain (TCRBV) usage [84]. Different HLAs present auto-/foreign antigens to TCRBV-specific subpopulations of T cells in different ways (better/worse), which determines the individual specific features of interactions in the HLA-TCR system and is important in the formation of susceptibility to various immuno-significant diseases, including hypertension [84,85]. It seems important to point out the clearly expressed differences between men and women regarding the effect of HLA genes on TCRBV transcripts (CD8 T cells in men affected by autoimmune disorders had the ability to multiply in the absence of TCR expression with similarities in key HLA-binding regions), which are supposed to rest on hormone-mediated mechanisms [84]. HSPs act as regulators of the organism's immune responses; they are produced when the body is exposed to various damaging/stressful factors (mechanical/oxidative stress, cytokine influences, etc.) and represent the "protective" response of cells (including cells of the arterial wall) to these effects [86,87]. T cells, interacting with HSPs (primarily HSP60 and HSP70), form a regulatory T-cell response of anti-inflammatory directivity [88]. Thus, the short-term effects of HSPs in HTN are protective, owing to suppression of the NF-κB pathway, and improve the BP response to angiotensin II [88]. Nonetheless, chronic overexpression of HSP70 probably has a prohypertensive effect owing to its capacity to evoke autoimmune processes [88]. The literature reports increased formation of some HSPs (HSP70, HSP72) in HTN patients, including in the arteries (adventitial areas) and kidney [87]. In the work by Li et al., the relationship with HTN of a number of polymorphisms of three HSP70 genes (HSPA1A; HSPA1B; HSPA1L), both independently and as part of individual haplotypes, was shown in the Uygur population [86]. Of importance, the involvement of two of these genes (HSPA1A and HSPA1B) in the disease was also established by our in silico analysis. LY6 protein family members (including those considered in our work; gene products such as LY6G5C; LY6G5B; LY6G6C; LY6G6E; LY6G6D; LY6G6F), interacting with various endogenous regulatory factors (interferon-γ, type I and II interferons, retinoic acid, β2-integrins, matrix metalloproteinases, lymphotoxin alpha, etc.), affect immunity-significant cellular functions (T cell activation/proliferation, CD8+ T cell migration, cell adhesion, B cell specification, neutrophil recruitment, etc.) which are essential for a wide range of HTN-involved processes such as inflammation progression, complement activity, neuronal activity and angiogenesis [103].
Despite certain sex-specific features of the immune pathways involved in HTN biology indicated in the literature (women, in comparison with men, are more likely to have inflammatory/autoimmune disorders that increase the HTN risk; have greater numbers of circulating IgM and more CD4 T cells; show higher infiltration of the kidneys by T cells with an increase in the number of Th17 cells; show an increase in the content of regulatory T cells in adipose tissue with weight gain, with a pattern of the reverse orientation in men; and differ in HLA-mediated T-cell selection/expansion determining the features of the beta chain of the T cell receptor, in pro-/anti-inflammatory cytokine signaling, etc.) [84,85,102], according to our bioinformatic data almost all of the detected HTN-significant immune pathways were the same in men and women (due to the almost identical HTN susceptibility gene lists in men [147 genes] and women [159 genes] determined by us on the basis of in silico data).
Importantly, all of the above-described HTN-important environmental risk factors and biological processes/mechanisms (state of the autonomic nervous system, immune status, hormonal background, etc.) are closely correlated with each other and represent a sophisticated multi-level, multi-stage and multi-directional sex-specific system of regulation of BP-related traits (phenotypes) in HTN. There is no doubt that all these factors can be powerful epigenetic modifiers of the phenotypic manifestation of HTN predisposition genes and determine men-women differences in the involvement of genetic determinants in susceptibility to the disease. This study shows differences in the nature of the genetic determination of HTN in men and women, both within the framework of the main effects of BP/HTN-associated GWAS SNPs and within their intergenic interactions: in women, susceptibility to the disease is determined by the polymorphism HFE rs1799945 C > G and strongly pronounced inter-locus interactions of all 10 examined SNPs (26 intergenic interaction models were identified), whereas in men, predisposition to the disease was associated with the BAG6 rs805303 G > A polymorphism and significantly less pronounced interactions between only 8 of the considered loci (just 7 models were found).
It is quite interesting that the data of this work are largely consistent with our previously published materials, in which an associative study of a sample of women with/without pre-eclampsia from the same population of Europeans in Central Russia (with an analogous list of studied loci) found that HFE rs1799945 C > G increased the pre-eclampsia risk (for allele G, OR = 2.24) and BAG6 rs805303 G > A decreased the pre-eclampsia risk (for allele A, OR = 0.55-0.78) [104], including in women with BMI ≥ 25 (for allele A, OR = 0.36-0.66) [105]. Thus, the polymorphic variant G of rs1799945 HFE increases both the HTN risk in women (data from this work; OR = 11.15 for genotype GG) and the pre-eclampsia risk in women (a specific symptom of this pregnancy complication is increased BP; OR = 2.24 [104]), which may indicate the "universality" of this genetic marker as a risk factor for the development of hypertensive conditions in European women in Central Russia and allows us to recommend its use in practical medicine to identify women at high risk of developing elevated BP. Concurrently, there are some discrepancies in the results of these studies: SNP BAG6 rs805303 G > A has a protective value for HTN in men (not in women!) (OR = 0.30), but in parallel this polymorphism marks a low risk of developing pre-eclampsia in pregnant women as a whole (OR = 0.55-0.78) [104] and among women with a BMI ≥ 25 (OR = 0.36-0.66) [105]. One possible reason for these sex-specific differences in the value of the BAG6 rs805303 G > A locus in the development of pathology with elevated BP may be the modifying effect on the phenotypic manifestation of this polymorphism of the unequal confounder lists taken into account in these studies (BMI, TC, TG, HDL-C, LDL-C, blood glucose and smoking in the present study; age, family history of PE, pre-pregnancy BMI, obesity, number of gravidities, spontaneous/induced abortions, stillbirths and smoking in the previous study [104]). At the same time, there is an obvious need to continue studies of the association of SNP BAG6 rs805303 G > A with diseases involving elevated BP in the studied population, in order to conclusively establish its predictive potential (including its sex-specific features).
This study has a number of limitations. Firstly, experimental confirmation of the functional effects of the HTN-significant GWAS loci that we identified in silico (influence on expression, splicing and epigenetic modifications of genes) is needed. Secondly, it is necessary to confirm (in silico and experimentally) the differences between men and women in the functional effects of GWAS-significant loci. Thirdly, larger samples of men and women would allow the identification of phenotypic effects (associations with the disease) of GWAS loci whose effects are "weaker" in this population. Fourthly, expanding the panel of studied GWAS loci would extend the data on the sex-specific features of the genetic determination of HTN. It should be noted that similar studies in other ethno-territorial groups may well reveal patterns different from our results, since such studies (like our work) are replicative and their results will be largely determined by the features of the groups studied: the structure of the gene pool and the associated "main" effects of genes and the nature of intergenic interactions, the spectrum and severity of environmental risk factors and the associated gene-environment interactions, and so on.
Study Subjects
Two groups of European subjects from Russia (n = 1405 in total; all participants were born in Central Russia and were of self-reported Russian origin [106,107]), namely men (n = 821 in total: n = 564 HTN, n = 257 control) and women (n = 584 in total: n = 375 HTN, n = 209 control), were included in this case-control association study. The clinical examination of participants was performed during the 2013-2016 period in the cardiology department of the St. Joasaph Belgorod Regional Clinical Hospital. Diagnosis of HTN (or its absence) was made by qualified cardiologists according to the standards set out in the WHO methodological guidelines [1] (details have been presented previously [42]). BP indicators were confirmed by the Korotkov auscultative method using a sphygmomanometer [108]. BP was measured at least twice within a few days. For thirty minutes before the procedure, the subjects did not consume caffeine, smoke or exercise. The measurement was carried out with the patient in a sitting position after a five-minute rest. BP was measured on both arms; at least two measurements were taken with an interval of one to two minutes between them. The average of two measurements, taken on at least two occasions, was used as an individual's BP value. The HTN group was formed from the patients of the clinic's cardiology department. All HTN patients had a clinical history of the disorder of one year or more, and 81.79% (82.80% of men and 80.27% of women) received antihypertensive drugs. The absence of HTN (BP lower than 140 mmHg for SBP and lower than 90 mmHg for DBP), coronary artery disease and type 2 diabetes mellitus was the basis for inclusion in the control group. Persons without HTN (the control group) were recruited during regular (annual) medical examinations at the aforementioned clinical hospital (these examinations were carried out by doctors of various specialties, including qualified cardiologists). None of the participants (HTN and HTN-free) had severe chronic allergic, autoimmune, hematological or oncological pathology [109]. Blood specimens for determining TC, TG, LDL-C, HDL-C and blood glucose were obtained in the morning (7-9 h) after an eight-hour fast. The study was supervised by the Ethics Committee (Human Investigation Committee) of Belgorod State University, and written consent was obtained from all subjects.
Information about lifestyle and diet was collected for each subject (patient/control). Consumption of vegetables and fruits in an amount of less than 400 g daily (excluding salted/pickled vegetables and starchy vegetables such as potatoes) was considered "low fruit/vegetable consumption" [110]. Average weekly physical activity at work and at home, related to transport and recreation (including walking, running, fitness club classes, etc.), of less than 150 min of moderate-intensity activity (for example, brisk walking for 30 min or longer at least five times a week) was considered "low physical activity" [111,112]. An average daily intake of fatty foods of more than 10% of total food consumed (the share of daily energy intake derived from fatty foods) was considered "high fatty food consumption" [110]. A daily salt (sodium chloride) intake of 5 g (a teaspoon) or more was considered "high sodium consumption" [110].
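For illustration, the dichotomization rules above can be expressed as a small helper function; the sketch below uses the thresholds quoted in the text, with parameter names invented for the example rather than taken from the study's data dictionary.

```python
def lifestyle_flags(fruit_veg_g_per_day: float,
                    activity_min_per_week: float,
                    fatty_food_energy_share: float,
                    salt_g_per_day: float) -> dict:
    """Binary lifestyle risk flags following the thresholds described in
    the text (400 g/day produce, 150 min/week moderate activity,
    10% of energy from fatty foods, 5 g/day salt)."""
    return {
        "low_fruit_veg": fruit_veg_g_per_day < 400,
        "low_physical_activity": activity_min_per_week < 150,
        "high_fatty_food": fatty_food_energy_share > 0.10,
        "high_sodium": salt_g_per_day >= 5,
    }

# Example: 250 g produce/day, 100 min activity/week, 12% energy from
# fatty foods and 6 g salt/day triggers all four risk flags.
print(lifestyle_flags(250, 100, 0.12, 6))
```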
Laboratory DNA Testing
We gathered peripheral blood (leukocytes) to extract genomic DNA [113] (the phenol/chloroform DNA extraction methodology was presented earlier [114]). Ten specially selected polymorphisms were considered, each with confirmed GWAS-level associations with blood pressure (BP) parameters and/or HTN in Europeans (Table S17) and assumed functional ability [104,105,115] (HaploReg information was taken into account [116]) (Table S18). The list of studied loci was as follows: (PLCE1) rs932764 A > G, (AC026703.1) rs1173771 G > A, (CERS5) rs7302981 G > A, (HFE) rs1799945 C > G, (OBFC1) rs4387287 C > A, (BAG6) rs805303 G > A, (RGL3) rs167479 T > G, (ARHGAP42) rs633185 C > G, (TBX2) rs8068318 T > C and (ATP2B1) rs2681472 A > G. All ten loci were correlated with BP in Europeans, and all ten SNPs were associated with HTN: eight SNPs were HTN-linked in Europeans, and two loci (rs4387287 OBFC1 and rs2681472 ATP2B1) were disorder-associated in a sample with a predominance (>85%) of Europeans (Table S17). Nine loci out of ten (all except rs4387287 OBFC1) were associated with HTN/BP in two or more GWAS (Table S17). All ten selected SNPs have significant functionality (Table S18). One of the generally accepted genotyping methods (allelic discrimination) and the CFX96 RT System (Bio-Rad Laboratories, Hercules, CA, USA) were used for the laboratory genetic studies [117]. The participants' case/control status was masked throughout the laboratory genetic analysis. Genotyping of a random duplicated sample (about 4-6% of the total sample) was used as an independent internal control to assure the quality of the individual genotyping data [118,119]. No genotyping errors were registered.
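As an illustration of the duplicate-genotyping quality check described above, the following minimal Python sketch computes the concordance rate between original and re-genotyped calls. The file names, column layout and the exact resampling fraction are assumptions for the example, not details taken from the study protocol.

```python
import pandas as pd

def genotype_concordance(original: pd.DataFrame, duplicate: pd.DataFrame,
                         snp_cols: list) -> float:
    """Fraction of identical genotype calls between an original run and
    an independent duplicate run, matched on sample_id."""
    merged = original.merge(duplicate, on="sample_id",
                            suffixes=("_orig", "_dup"))
    total, matches = 0, 0
    for snp in snp_cols:
        calls = merged[[f"{snp}_orig", f"{snp}_dup"]].dropna()
        total += len(calls)
        matches += int((calls[f"{snp}_orig"] == calls[f"{snp}_dup"]).sum())
    return matches / total if total else float("nan")

# Hypothetical usage: ~5% of samples re-genotyped as an internal control.
# original = pd.read_csv("genotypes.csv")       # columns: sample_id, rs1799945, ...
# duplicate = pd.read_csv("genotypes_dup.csv")  # same layout, duplicated subset
# print(genotype_concordance(original, duplicate, ["rs1799945", "rs805303"]))
```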
Association Statistical Analysis
The genotypes of the 10 examined loci were tested for Hardy-Weinberg equilibrium [120,121]. The sex-specific associations of individual polymorphisms with the disorder (additive, recessive, dominant and allelic common models were calculated [122]) and of SNP-SNP interactions [123] with HTN were assessed based on the results (OR with 95% CI was evaluated [124,125]) obtained in the gPLINK [126], MDR [127,128] and MB-MDR [129,130] genetic programs. In the logistic regression calculations, we took covariates into account (BMI, TC, TG, LDL-C, HDL-C, blood glucose and smoking in Model 1; BMI, TC, TG, LDL-C, HDL-C, blood glucose, smoking, low physical activity and high fatty food consumption in Model 2, in both men and women (Table 1)), performed permutation procedures (to minimize the probability of false-positive results [131,132]) and analysed the results at a stricter level of statistical significance, p perm-Bonferroni ≤ 0.025 (Bonferroni's correction for the number of comparison groups studied, n = 2 [men and women]). For individual SNPs, statistical power was estimated with Quanto (v.1.2.4) [133].
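A minimal Python sketch of two of the steps named above, a Hardy-Weinberg equilibrium check and a covariate-adjusted additive logistic regression, is given below. It assumes a data frame with 0/1/2-coded genotypes and the Model 1 covariates, and is an illustration of the general approach rather than a reproduction of the gPLINK/MDR/MB-MDR pipelines used in the study.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chisquare

def hwe_chi2(n_aa: int, n_ab: int, n_bb: int) -> float:
    """Chi-square p-value for Hardy-Weinberg equilibrium (1 df)."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)  # frequency of allele A
    expected = np.array([p**2, 2 * p * (1 - p), (1 - p)**2]) * n
    _, pval = chisquare([n_aa, n_ab, n_bb], f_exp=expected, ddof=1)
    return pval

def additive_logit(df: pd.DataFrame, snp: str, covars: list):
    """Covariate-adjusted additive model; returns OR, 95% CI and p-value."""
    X = sm.add_constant(df[[snp] + covars].astype(float))
    fit = sm.Logit(df["HTN"], X).fit(disp=False)
    or_, lo, hi = np.exp([fit.params[snp], *fit.conf_int().loc[snp]])
    return or_, lo, hi, fit.pvalues[snp]

# Hypothetical usage with a 0/1/2-coded genotype column; the 0.025
# threshold mirrors the Bonferroni correction for the two strata.
# covars = ["BMI", "TC", "TG", "LDL_C", "HDL_C", "glucose", "smoker"]
# or_, lo, hi, p = additive_logit(df[df.sex == "F"], "rs1799945", covars)
# significant = p <= 0.025
```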
Definition of the Putative Functional Ability of HTN-Related Polymorphisms and Genes
For the purpose of biological interpretation of the identified associations (i.e., establishing the mechanisms underlying them), we used in silico information on the assumed functional ability of the HTN-related polymorphisms (taking into account strongly linked loci, with a linkage strength of at least 0.80 [134,135]) and genes. We used the following seven bioinformatics resources: (1) HaploReg [116] (determination of the regulatory potential of polymorphisms: location in putative promoters/enhancers, association with transcription factors/regulatory proteins, localization in regions of open chromatin and evolutionarily conserved DNA sites); (2) SIFT [136] and (3) PolyPhen-2 [137] (identification and evaluation of the predicted effects of non-synonymous SNPs); (4) GTEx [138] (correlation of loci with gene expression and alternative splicing in 54 organs/tissues); (5) the Blood eQTL browser [139] (relationship of SNPs with gene expression in peripheral blood); (6) Gene Ontology [140] (identification of pathways of HTN-associated genes); and (7) GeneMANIA [141] (estimation and visualization of the mechanisms of intergenic interactions of HTN-significant genes).
Conclusions
This study showed a more pronounced contribution of BP/HTN-associated GWAS SNPs to HTN susceptibility (due to stronger intergenic interactions) in European women than in men.
A security network in PSI photoprotection: regulation of photosynthetic control, NPQ and O2 photoreduction by cyclic electron flow
Cyclic electron flow (CEF) around PSI regulates acceptor-side limitations and has multiple functions in the green alga, Chlamydomonas reinhardtii. Here we draw on recent and historic literature and concentrate on its role in Photosystem I (PSI) photoprotection, outlining causes and consequences of damage to PSI and CEF’s role as an avoidance mechanism. We outline two functions of CEF in PSI photoprotection that are both linked to luminal acidification: firstly, its action on Photosystem II with non-photochemical quenching and photosynthetic control and secondly, its action in poising the stroma to overcome acceptor-side limitation by rebalancing NADPH and ATP ratios for carbon fixation.
In the early years of photosynthesis research, a cyclic photophosphorylation was described that required ferredoxin (Fd), did not evolve oxygen (O2) and resulted in the accumulation of ATP (Arnon et al., 1954). From this observation, experiments performed in a variety of organisms from cyanobacteria to higher plants, using a combined pharmacological and in vitro approach, created a robust model for what is now referred to as cyclic electron flow (CEF; thoroughly reviewed in Bendall and Manasse, 1995). More recently, Arabidopsis thaliana lines altered in CEF have been identified and have enriched the ways we have to study these pathways (Joet et al., 2001; Munekage et al., 2002, 2004; DalCorso et al., 2008). Biochemical approaches have shown that the Proton-Gradient Regulator5 (PGR5) and PGR5-Like1 (PGRL1) proteins form an interaction that results in a ferredoxin-plastoquinone reductase (FQR) activity (Hertle et al., 2013). In the unicellular green alga Chlamydomonas reinhardtii, this pathway and the function of these proteins are conserved (Petroutsos et al., 2009; Tolleter et al., 2011; Johnson et al., 2014). In Chlamydomonas a second type of CEF is also in operation, in which the mediator at the level of the PQ pool is a type-2 NADPH dehydrogenase (Desplats et al., 2009), with the nda2 mutant shown to have a phenotype in CEF (Jans et al., 2008). Here, we focus on the PGR5 pathway and work done on the Chlamydomonas pgr5 and pgrl1 mutants, both of which demonstrate no PGR5/PGRL1-dependent CEF (Alric, 2014). Our focus is on Chlamydomonas, but due to the conservation of this pathway we also make reference to work done in other photosynthetic organisms.
Cyclic electron flow is a generator of proton motive force that (i) can produce supplementary ATP to meet the ATP:NADPH requirements of the Calvin Benson Bassham (CBB) cycle and the CO2-concentrating mechanism (CCM; reviewed by Alric, 2010), and (ii) triggers regulatory mechanisms, namely non-photochemical quenching (NPQ) and cytochrome b6f complex (cytb6f) "photosynthetic control" (Joliot and Johnson, 2011). Its rate is highest under conditions where the stromal poise is reduced; thus PGR5-CEF has been considered a regulator of redox homeostasis for the photosynthetic chain (Nishikawa et al., 2012). Among the phenotypes observed in CEF-altered strains of both Arabidopsis and Chlamydomonas, Photosystem I (PSI) photoinhibition arose under conditions of high light or limiting CO2 (Munekage et al., 2002; Dang et al., 2014; Johnson et al., 2014) and fluctuating light (Suorsa et al., 2012), leading to the assignment of yet another role to PGR5-CEF. While Photosystem II (PSII) photoinhibition is frequently observed and has complex models that describe the mechanism (Murata et al., 2012), PSI photoinhibition remains poorly understood. In this work, we review the potential causes of photoinhibition that occur at the acceptor side of PSI and the processes triggered by CEF that can contain it. For the sake of comprehensive reviewing of the mechanisms involved in PSI photoprotection, other connected pathways are also introduced.
ACCEPTOR-SIDE LIMITATION IS THE CAUSE OF PSI PHOTOINHIBITION
PSI photoinhibition was first reported in isolated chloroplasts submitted to strong light (Jones and Kok, 1966). Using fragmented chloroplasts, Satoh was able to differentiate two types of damage that corresponded to damage to the two photosystems. Artificial donors were used to measure the capacity of PSI to transfer electrons to the terminal acceptor, NADPH. These experiments showed that the addition of the PSII inhibitor DCMU specifically but incompletely prevented photo-inactivation of PSI (Satoh, 1970a,b). Photo-inactivation of PSI was avoided by the addition of excess Fd, showing that PSI photoinhibition is an acceptor-side limited phenomenon. The observation that the same group of cofactors that could enhance CEF (including Fd) were also involved in the avoidance of photo-inactivation of PSI led to the discussion of CEF as a photoprotectant for PSI (Satoh, 1970c). Further studies demonstrated the destruction of PSI-bound iron-sulfur centers (FX, FA/B) by oxidative species, primarily the superoxide anion radical (O2•−) (Sonoike et al., 1995). Production of O2•− can occur within the iron-sulfur centers of PSI, reduced Fd and stromal flavodehydrogenases (NADP+ ferredoxin dehydrogenase, glutathione reductase and monodehydroascorbate reductase) in plant chloroplasts (discussed in Asada, 2000). Under permissive conditions, radicals are enzymatically neutralized into water, resulting in the net uptake of O2 reported by Mehler (1951), establishing a pseudo-cyclic pathway for electrons known as the water-water cycle. When radical production exceeds detoxifying capacity, O2•− irreversibly damages the PSI primary acceptors (FX, FA/B) and prevents stable accumulation of P700+ in high light (Figure 1). This is because of fast charge recombination at the level of the intermediary acceptors A0/1 (Setif and Brettel, 1990). The resultant decrease in the quantity of oxidizable P700 is thus a common measurement for probing the photoinhibition of PSI.
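The balance argument in this paragraph, that damage occurs only once radical production outruns detoxification, can be made concrete with a deliberately simple numerical sketch. The rate constants and the linear production/detoxification terms below are illustrative assumptions, not measured photosynthetic parameters.

```python
def superoxide_balance(e_flux, sink_capacity, detox_rate,
                       k_prod=1.0, dt=0.01, steps=5000):
    """Toy mass balance for superoxide at the PSI acceptor side.
    Electrons exceeding the downstream sink capacity are assumed to
    reduce O2 to superoxide; detoxification is treated as first-order.
    All parameters are illustrative, not measured values."""
    o2_minus = 0.0
    for _ in range(steps):
        overflow = max(e_flux - sink_capacity, 0.0)  # acceptor-side limitation
        production = k_prod * overflow
        o2_minus += (production - detox_rate * o2_minus) * dt
    return o2_minus  # steady state ~ k_prod * overflow / detox_rate

# While sinks keep up, no steady superoxide accumulates; once electron
# input exceeds sink capacity, the steady-state level rises with overflow.
for flux in (0.5, 1.0, 2.0, 4.0):
    print(flux, round(superoxide_balance(flux, sink_capacity=1.0,
                                         detox_rate=0.5), 3))
```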
Interestingly, no singlet oxygen (1O2) is produced in overexcited PSI (the triplet excited state, 3P700), because P700 is sterically screened from O2 (Setif et al., 1981). Hence, P700+ and 3P700 are most probably efficient quenchers of excess excitation of plant PSI, as observed for cyanobacterial PSI (Schlodder et al., 2005; Shubin et al., 2008). By contrast, 1O2 is the main photodamaging species produced in acceptor-side limited PSII (3P680) (Durrant et al., 1990). It is remarkable that O2 can be sensitized in PSII and not in PSI (Hideg and Vass, 1995); in other words, 1O2 production within PSII was not evolutionarily eliminated. Over time this may be why a signaling role has developed for 1O2 (Telfer, 2014), resulting in some selectivity in the degradation of PSII protein under photoinhibitory conditions. As compared to the "monolithic" architecture of PSI, the modular architecture of PSII allows for a unique degradation of damaged D1 and re-use of other subunits (extensively reviewed in Caffarri et al., 2014) and may be another reason why the PSII damage-and-repair cycle has been a target of selective pressure. On the contrary, PSI has no known molecular mechanism per se to set its turnover in tune with light intensity. The protection of PSI from photoinhibition would appear to require a set of distinctly different properties from those of PSII (Allahverdiyeva et al., 2015), which includes buffering acceptor-side limitations in the stroma. Selective, irreversible photoinhibition of PSI in Chlamydomonas is observed to occur both in CEF-altered strains (Dang et al., 2014; Johnson et al., 2014; Kukuczka et al., 2014; Bergner et al., 2015) and in strains with severe acceptor-side limitations, such as those lacking RuBisCO (Johnson et al., 2010). Crpgr5 and Crpgrl1 strains demonstrate decreased amounts of oxidizable P700 and of PSI protein measured by western hybridization after exposure to high light (Johnson et al., 2014; Kukuczka et al., 2014) and after transition from high (2%) to atmospheric concentrations of CO2 (Dang et al., 2014). In the following sections we present CEF's role in triggering several mechanisms that avoid long-lasting limitations at the acceptor side of PSI.
CEF TRIGGERS FAST QUENCHING, PHOTOSYNTHETIC CONTROL AND PSII PHOTOINHIBITION RESULTING IN PSI PHOTOPROTECTION
As already suggested (Sonoike, 2011), non-photochemical quenching (NPQ) of PSII avoids excessive electron flow to PSI via linear electron flow (LEF) to prevent photoinhibition. CEF limits electrons entering the thylakoid chain because it prompts both excitation-dependent quenching (qE) and, indirectly, PSII photoinhibition (qI), thus avoiding overflow to PSI. Acidification of the lumen triggers qE (Briantais et al., 1979) and occurs during CEF due to the coupling of electron transfer and proton translocation in the cytb6f. Since both LEF and CEF pass through cytb6f, the exact contribution of CEF to the formation of qE is hard to determine, but an altered ability to develop qE is observed in Crpgr5 and Crpgrl1 strains (Tolleter et al., 2011; Dang et al., 2014; Johnson et al., 2014; Kukuczka et al., 2014) concomitantly with PSI photoinhibition (Dang et al., 2014; Johnson et al., 2014). This is also consistent both with the failure to acidify the lumen under short saturating illumination in Atpgr5 plants (Suorsa et al., 2012) and with the reduced growth of Crpgrl1 strains in fluctuating light (Dang et al., 2014). A recent report challenging the effects of rapid quenching of PSII in PSI photoprotection showed that an absence of qE (in Atnpq4 mutants lacking the PsbS protein that induces qE in higher plants; Li et al., 2000) does not have a dramatic effect on P700 oxidation kinetics at any light regime, as opposed to Atpgr5 mutants, where steady-state oxidation of P700 is abolished. Partial compensatory mechanisms may, however, act between CEF and qE, as the double mutant strain Crpgrl1 npq4 (lacking both CEF and the LHCSR3 protein that acts as the activator of qE in Chlamydomonas; Peers et al., 2009) is particularly susceptible to PSI photoinhibition in comparison with the simple Crpgrl1 mutant (Kukuczka et al., 2014; Bergner et al., 2015). This may coincide with a PSI-photoprotective role recently proposed for LHCSR3 via its association with the PSI antenna system under "state 2 conditions" (Allorent et al., 2013; Bergner et al., 2015), but here LHCSR3-dependent quenching of LHCIs and/or PSI-bound LHCIIs has not been strictly established. The argument against would be that the quenching of the PSI antenna is irrelevant given the harmlessness of P700*; furthermore, photo-oxidation events at PSI (measured on isolated complexes in vitro) have been shown to take place after photoinhibition is completed, chlorophyll oxidation being preceded by irreversible carotenoid oxidation (Santabarbara, 2006). For now, the literature would suggest that qE is a first level of photoprotection under photoinhibitory conditions and rapidly protects not only PSII but also PSI by reducing electron flow (Figure 2).

FIGURE 1 | Acceptor-side limitation and excess electron flow promote CEF or, in its absence, lead to the irreversible damage of PSI centers. The linear electron flow coming from PSII (gray dashed arrow) is the source of electrons for the PSI reaction center (P700+/P700), which transfers electrons from the chlorophyll excited state (P700*) and subsequently delivers them to downstream acceptors within PSI (the FX, FA and FB iron-sulfur centers) and then to stromal electron carriers (ferredoxin, FNR, NADP+) (light gray arrow). When CO2 fixation decreases, acceptor-side limitation gradually leads to accumulation of NADPH and over-reduction of stromal and PSI electron carriers (light red arrow). In this case electrons are redirected to O2, either at the level of NADPH without the production of reactive oxygen species (ROS; gray dashed arrow), or producing the very reactive superoxide anion radical (O2•−) at the level of FX, FA, FB, Fd and FNR at a rate exceeding the detoxification process. O2•− then irreversibly destroys the centers (red arrows), resulting in an inability to oxidize P700* and, on a longer time scale, the degradation of the entire PSI complex. Preventing this scenario, cyclic electron flow triggers down-regulation of linear electron flow at the site of PSII and cytb6f by enhancing proton accumulation in the lumen.
Aside from ATP production and qE, high pmf down-regulates LEF (Rumberg et al., 1968) at the site of PQH2 oxidation: this was originally called "back pressure" (Stiehl and Witt, 1969) and is now known as "photosynthetic control." Comparison of the Cr∆rbcL mutant against the double mutant Cr∆rbcL pgr5 or the wild type (WT) clearly shows the effects of photosynthetic control imposed by CEF, witnessed by a strong increase in chlorophyll fluorescence in the single mutant, indicative of a gradually decreasing electron flow, while the double mutant has fluorescence kinetics that resemble the WT (Johnson et al., 2014). The mutant lacking RuBisCO is an important genetic tool for observing the effects of an absence of CO2 fixation on CEF, because it is difficult to push CO2 much below atmospheric concentrations in Chlamydomonas owing to the efficiency of the carbon-concentrating mechanism (CCM): thus this double mutant gives us a window on the mechanisms of CEF as if the WT were under strongly CO2-limited conditions. As important as qE in fluctuating light, photosynthetic control is established when a rapid response to severe acceptor-side limiting conditions is required to buffer a sudden burst of electron flow toward PSI (Suorsa et al., 2012). Moreover, while qE is not constitutive but inducible in Chlamydomonas, contrary to plants (Peers et al., 2009), photosynthetic control is likely to be crucial in the very first hours of exposure to drastic conditions. As a secondary consequence of photosynthetic control, reducing pressure increases on the QB site of PSII, so that PSII centers remain in a closed state longer. Overexcited chlorophyll (3P680) activates O2 into 1O2, triggering photoinhibition of PSII (reviewed in Sonoike, 2011; Murata et al., 2012). Highlighting the key role of pmf in the control of linear electron transfer, PSI was shown to be more susceptible to photoinhibition than PSII in nigericin-infiltrated leaves, where control at the level of cytb6f could not develop (Joliot and Johnson, 2011). Moreover, photoinhibition of PSI in Atpgr5 could be avoided by modulating PSII turnover with the addition of the protein translation inhibitor lincomycin (Tikkanen et al., 2014). Thus, PSII photoinhibition mediated by photosynthetic control is a secondary level of photoprotection under drastic photoinhibitory conditions that exceed the qE dissipation capacity: it indirectly but effectively acts as a shunt to avoid sustained PSI acceptor-side limitations (Figure 2).

FIGURE 2 | Cyclic electron flow promotes proton accumulation in the lumen and triggers regulatory mechanisms that can protect PSI from photoinhibition. Under constraining conditions, electrons are recycled from the acceptor side of PSI by PGR5-CEF, which results in a rapid acidification of the lumen. This promotes (i) the energy-dependent quenching of PSII antennas (qE) and (ii) photosynthetic control at the level of cytb6f, which exerts reducing pressure on PSII to provoke a controlled photoinhibition (qI). These mechanisms result in a decrease of electron flow to PSI. Regulation of ATP synthase conductivity to protons, electrochemical gradient partitioning and O2 photoreductive pathways produce ∆pH, producing ATP and contributing to the recycling of NADPH. Extra ATP produced by CEF is used by the Calvin-Benson-Bassham (CBB) cycle to assimilate CO2 and contributes to the regeneration of NADP+. Decreasing linear electron flow or increasing the sinks downstream of PSI avoids the over-reduction of ferredoxin (Fd) and of the PSI centers.
In Chlamydomonas, the pgr5 mutation combined with an absence of chloroplast ATP synthase results in a less photosensitive phenotype than the ATPase mutant alone, in which light sensitivity has been attributed to luminal over-acidification (Johnson et al., 2014). This observation shows that photosynthetic control relying on CEF actively contributes to decreasing the luminal pH, and supports previous work (Rott et al., 2011). In higher plants, the triggering of low luminal pH has also been correlated with changes in the conductivity of the ATP synthase to protons (Kanazawa and Kramer, 2002) and with the partitioning of the proton motive force between its osmotic (or concentration gradient, ∆pH) and electrical (∆Ψ) components (Avenson et al., 2005). ATP synthase conductivity to protons is increased in Atpgr5 (Avenson et al., 2005; Wang et al., 2015), with similar observations in knocked-down PGR5 rice lines (Nishikawa et al., 2012). This may be ruled by the concentration of the substrates for ATP production, i.e., ADP and phosphate (Pi): in spinach thylakoids, artificially decreasing Pi levels resulted in lower ATPase conductivity and a lower luminal pH, thus promoting qE (Takizawa et al., 2008). These observations show the metabolic interconnections between ATP, CEF and the ATP synthase. As already suggested (Shikanai, 2014), further studies should be done to explain the acceptor-side limitation occurring in strains affected in PGR5-CEF in the light of the scenario proposed by Kramer and coworkers for qE regulation.

O2 PHOTOREDUCTION REGULATES ACCEPTOR-SIDE LIMITATION IN THE ABSENCE OF REACTIONS FOR CONSUMPTION OF NADPH

RuBisCO-less mutants, but also CBB cycle mutants and those affected in starch metabolism, show a strong increase in CEF or in CEF-dependent photosynthetic control that results in a repressed rate of LEF (Livingston et al., 2010; Johnson and Alric, 2012; Johnson et al., 2014; Krishnan et al., 2015). When the CBB cycle is an insufficient sink for reducing power, O2 photoreduction pathways may work in conjunction with CEF to protect PSI. On the other hand, under non-acceptor-side limited conditions (steady-state high light and/or high CO2), an absence of CEF in Crpgrl1 did not result in photoinhibition of PSI, and this capacity to acclimate was shown to be due to a sustained dependence on O2 photoreduction pathways (Dang et al., 2014). While photorespiration is a minimal process in green algae due to the CCM, other important sinks exist for reducing equivalents downstream of PSI that terminate on O2. These include: (i) export of reducing power to the respiratory chain to stimulate oxidative phosphorylation in the mitochondria, (ii) ROS-producing ("Mehler") reactions with a concomitant increase in detoxifying enzymes and (iii) ROS-independent ("Mehler-like") NADPH:O2 oxidoreduction, probably by flavodiiron proteins (FLV; Peltier et al., 2010). Mechanisms (ii) and (iii) dually generate a proton gradient and thus ATP (Forti and Elli, 1995, 1996) and regenerate NADP+, thus avoiding PSI acceptor-side limitations, and over-expression of FLV proteins 1 and 3 in cyanobacteria has been observed to stabilize PSI under fluctuating light (Allahverdiyeva et al., 2013). Mechanism (i), mitochondrial cooperation, also generates ATP, but this ATP is probably not shuttled back into the chloroplast; its major role would thus be to regenerate oxidized NADP+ (Figure 2). Radmer and Kok (1976) first observed the potential for O2 to replace CO2 fixation during a light-to-dark transition or in the presence of CBB cycle inhibitors.
The role of such an acceptor-side activity within the chloroplast, such as the Mehler reaction (O2 reduction) or hydrogenase (H+ reduction), would enable Chlamydomonas cells to reoxidise the electron transport chain in the light, as convincingly shown after anaerobic incubation (Forti et al., 2005; Ghysels et al., 2013). In the Crpgr5 ∆rbcL mutant, lacking both CO2 fixation and CEF, O2 photoreduction rates can completely compensate for CO2 fixation, resulting in WT levels of O2 evolution (Johnson et al., 2014). Similarly, in a detached-leaf assay, the addition of antimycin A provokes both the production of H2O2 and a strong, sustained malate dehydrogenase activity, resulting in high rates of mitochondrial O2 uptake (Fridlyand et al., 1998). While far removed from the steady-state metabolic flow observed in WT strains under standard conditions, these experimental observations provide us with the maximal rates for the different pathways and suggest possible compensatory reactions. It would appear that CEF down-regulates ATP-independent O2-reducing pathways and up-regulates ATP-dependent CO2 reduction by the CBB cycle. Therefore, CEF can be seen as limiting ROS production under acceptor-side limitations. Furthermore, it has been suggested that an interplay between CEF and O2 photoreduction acts as a buffer to poise electron flow toward carbon fixation (Backhausen et al., 2000). The action of H2O2 as an activator of NDH-CEF in Arabidopsis provides further evidence that O2 photoreduction pathways and CEF work in tandem (Strand et al., 2015). The model that emerges is that the regulation of temporary excesses of reductant at the acceptor side of PSI is controlled by an interplay between CEF, the Mehler reaction, FLV proteins and the malate valve, with another level of control exerted by redox regulators such as thioredoxins (Scheibe and Dietz, 2012). These pathways likely form a set of communicating reactions that can rebalance NADPH/NADP+ ratios and avoid PSI photoinhibition.
CONCLUDING REMARKS
While the study of mutants reveals to us the limitations of a system, the complete photosynthetic apparatus is perfectly able to acclimate to both light and changing redox conditions with CEF and its protective role over PSI placed centrally as a regulator of this flexibility. Further understanding of PSI photoinhibition, proposed to be a major determinant in crop productivity (Tikkanen et al., 2014), may allow the rational modification of photosynthesis to improve the efficiency of plant crops and the production of renewable algal biomass.
ACKNOWLEDGMENTS
This work was supported by grants from the Agence Nationale de la Recherche (ChloroPaths: ANR-14-CE05-0041-01) and the CEA Tech Department of Commissariat des Energies Atomiques et Energies Alternatives (CEA) for FC grant. We thank Jean Alric for constructive discussions.
Effects of subsurface soil characteristics on wetland–groundwater interaction in the coastal plain of the Chesapeake Bay watershed
Ecosystem services provided by depressional wetlands on the coastal plain of the Chesapeake Bay watershed (CBW) have been widely recognized and studied. However, wetland–groundwater interactions remain largely unknown in the CBW. The objective of this study was to examine the vertical interactions of depressional wetlands and groundwater with respect to different subsurface soil characteristics. This study examined two depressional wetlands with a low‐permeability and high‐permeability soil layer on the coastal plain of the CBW. The surface water level (SWL) and groundwater level (GWL) were monitored over 1 year from a well and piezometer at each site, respectively, and those data were used to examine the impacts of subsurface soil characteristics on wetland–groundwater interactions. A large difference between the SWL and GWL was observed at the wetland with a low‐permeability soil layer, although there was strong similarity between the SWL and GWL at the wetland with a high‐permeability soil layer. Our observations also identified a strong vertical hydraulic gradient between the SWL and GWL at the wetland with a high‐permeability soil layer relative to one with a low‐permeability soil layer. The hydroperiod (i.e., the total time of surface water inundation or saturation) of the wetland with a low‐permeability soil layer appeared to rely on groundwater less than the wetland with a high‐permeability soil layer. The findings showed that vertical wetland–groundwater interactions varied with subsurface soil characteristics on the coastal plain of the CBW. Therefore, subsurface soil characteristics should be carefully considered to anticipate the hydrologic behavior of wetlands in this region.
| INTRODUCTION
Depressional wetlands (a.k.a. "Delmarva bays") are abundant on the coastal plain of the Chesapeake Bay watershed (CBW) due to the flat topography; the close proximity to the groundwater table and the coast; and the high precipitation relative to evapotranspiration. These densely distributed wetlands provide important ecosystem services for this region, as follows: water purification (Denver et al., 2014; Jordan, Whigham, Hofmockel, & Pittek, 2003; Sharifi, Kalin, Hantush, Isik, & Jordan, 2013), flood control (Lee et al., 2018), wildlife habitat (Russell & Beauchamp, 2017; Yepsen et al., 2014), and carbon storage (Fenstermacher, Rabenhorst, Lang, McCarty, & Needelman, 2016). A dramatic decline in wetland areas, mainly owing to conversion to croplands, is likely to have substantially decreased the provision of these wetland-related ecosystem services (USFWS, 2002). Accordingly, the return of cropland to its original wetland condition (hereafter referred to as "wetland restoration") should lead to higher levels of wetland benefits (Van Houtven, Loomis, Baker, Beach, & Casey, 2012).
To achieve wetland management goals, monitoring and assessing wetland functions are essential (Shuman & Ambrose, 2003). Understanding wetland hydrology is critical because ecosystem services provided by a wetland (e.g., water purification and carbon storage) are highly dependent on inflow to, and outflow from, that wetland (Fenstermacher et al., 2016; Sharifi et al., 2013). Wetland-groundwater interactions have been widely examined due to the substantial impact of groundwater on controlling wetland water balance.
Several attempts have been made to examine wetland hydrologic characteristics at the catchment scale for the coastal plain of the CBW using remotely sensed data (Huang, Peng, Lang, Yeo, & McCarty, 2014; Jin, Huang, Lang, Yeo, & Stehman, 2017; Lang & McCarty, 2009), hydrologic modeling (Lee et al., 2017), and geospatial data (Lang, McDonough, McCarty, Oesterling, & Wilen, 2012). These catchment-scale findings were mostly limited to changes in the surface water of wetlands. A study using a hydrologic model showed that wetland-groundwater interaction was a key hydrologic process affecting the fluctuation of downstream flow (Lee et al., 2018). Denver et al. (2014) observed lateral groundwater exchange between depressional wetlands and adjacent upland areas. However, field measurements of wetland-groundwater interaction are limited, and the vertical interaction of wetlands and groundwater remains largely unknown for this region.
Wetland functional assessments among prior-converted croplands and natural and restored wetlands have been carried out extensively in the coastal plain of the CBW under the "Wetland" component of the U.S. Department of Agriculture Conservation Effects Assessment Project (CEAP-Wetlands); the Mid-Atlantic Regional (MIAR) study focused on the following: water quality (Denver et al., 2014), plant biomass (McFarland et al., 2016), plant species (Yepsen et al., 2014), carbon storage (Fenstermacher et al., 2016), and dissolved organic matter (Hosen, Armstrong, & Palmer, 2018). Despite being a key area of research for wetland functional assessment, wetland-groundwater interaction has been poorly examined using in situ observations relative to other coastal regions, such as those in South Carolina (Pyzoha et al., 2008) and Florida (McLaughlin & Cohen, 2013).
Furthermore, the Chesapeake Bay (CB) is nationally important, as it is the largest estuary in the United States (CEC, 2000), and it is also listed as a RAMSAR wetland site of international importance (Gardner & Davidson, 2011). The CB is the first estuary for restoration in the United States, and similar efforts for other coastal regions have followed the CB restoration (Executive Order 13508, 2010). Regarding the important ecological value of the CB, monitoring physical processes that have been poorly examined (e.g., wetland-groundwater interaction) contributes to expanding our understanding, leading to developing appropriate and necessary restoration plans for this region.
As a part of the CEAP-Wetlands study, multiple wells and piezometers were installed to monitor the surface water level (SWL) and groundwater level (GWL) of wetlands situated within the mid-Atlantic coastal plain. In this study, we analysed observations obtained from a well and piezometer within the coastal plain of the CBW. These instruments were installed at two wetlands with distinctive subsurface soil characteristics. One wetland site includes a low-permeability soil layer characterized by an extremely low soil hydraulic conductivity, whereas a fairly high soil hydraulic conductivity underlies the other wetland. A wetland with a low-permeability soil layer can have a high potential to sustain surface water due to limited water loss by seepage compared with a wetland with a high-permeability soil layer (O'Driscoll & Parizek, 2003; Rains et al., 2006).
The goal of this study is to examine vertical wetland-groundwater interactions with respect to different subsurface soil characteristics on the coastal plain of the CBW. The two depressional wetlands described above with a low-permeability and high-permeability soil layer were selected. Although lateral groundwater flow is one of the key hydrologic components affecting wetland water levels, this study focused on the vertical interaction of depressional wetlands and groundwater, mainly due to limited observation points along a horizontal gradient.
We, first, compared the similarity in water level dynamics over time (referred to as consistency, hereafter) between the SWL and GWL for the two wetlands to test the hypothesis that the consistency between the SWL and GWL is positively proportional to the saturated hydraulic conductivity. Then, the magnitude of the vertical hydraulic gradient between the SWL and GWL was compared between the two wetlands using cross-correlation analysis (see Section 2.3). Finally, the wetland hydroperiod (i.e., the total time of surface water inundation or saturation) between the two wetlands was compared to examine how subsurface soil characteristics affect wetland hydrology.
| Study area and wetland characteristics
The two wetland study sites are located within the coastal plain of the CBW (Figure 1). This area is characterized by relatively flat topographic relief (Lang et al., 2012) and a humid temperate climate with an annual precipitation of 1,200 mm (Ator, Denver, Krantz, Newell, & Martucci, 2005). Nearly half of the precipitation is lost via evapotranspiration, and the remainder infiltrates into groundwater or flows into nearby streams (Ator et al., 2005). Restored wetlands (referred to as "wetlands" hereafter) were examined in this study. Wetlands converted to croplands have been restored under CEAP-Wetlands, and those restored wetlands are mostly dominated by sedges, grasses, and herbs (McDonough, Lang, Hosen, & Palmer, 2015; Yepsen et al., 2014). According to the Soil Survey Geographic (SSURGO) database, wetland #1 is underlain by a Whitemarsh silt loam soil with a low saturated hydraulic conductivity (30-120 mm/day) at depths of 30-157 cm (Figure 2a). In contrast, the soil type at wetland #2 is a complex of Hammonton, Fallsington, and Corsica soil types with a fairly high hydraulic conductivity (90-61,000 mm/day). Penetration resistance was measured at depths of 10, 20, and 30 cm below the wetland bottom using a soil penetrometer (Eijkelkamp Hand Penetrometer Set, Eijkelkamp Soil & Water, the Netherlands). Observations indicated penetration resistance values ranging from 0.34 to 0.73 and from 0.08 to 0.29 kPa at wetlands #1 and #2, respectively (Figure 2b). Because penetration resistance is inversely proportional to the saturated hydraulic conductivity (Shafiq, Hassan, & Ahmad, 1994), the observed penetration resistance agreed well with the SSURGO database: wetland #1 had high penetration resistance with low saturated hydraulic conductivity, whereas wetland #2 had low penetration resistance with high saturated hydraulic conductivity. Hydrogeologic data also showed that wetland #1 is within the coastal plain dissected upland, characterized by fine sediments and limited infiltration, whereas wetland #2 is within the coastal plain upland, characterized by coarse sediments that facilitate infiltration (Ator et al., 2005). Based on the SSURGO data, the observed penetration resistance and the hydrogeologic data, a low-permeability soil layer exists at wetland #1, impeding vertical water movement between surface water and groundwater, whereas the interaction of surface water and groundwater is fairly strong at wetland #2.
| Monitoring data
The SWL and GWL values were monitored hourly from January 1, 2016, to December 31, 2016. A well and a piezometer were installed adjacent to each other at the wetland invert elevation (i.e., the lowest land surface elevation on the wetland bottom) of each wetland to monitor the SWL and GWL, respectively. Water-level sensors were deployed in the wells, and the piezometers were physically linked to the data logger (Campbell Scientific CR1000) of the second station. Hourly data were aggregated into daily data for analyses.
Any outliers for each day were identified using the Tukey method (Tukey, 1977) and excluded, and the remaining hourly data for each day were then averaged. Hourly precipitation data measured by a tipping-bucket rain gauge (Campbell Scientific TE525-L10) at each wetland were used to investigate the relation of water level variations to climatic conditions. Daily precipitation was calculated as the sum of hourly precipitation for each day.
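The daily aggregation step can be sketched as follows. This is a minimal pandas illustration of the Tukey fence rule described above, with column names chosen for the example rather than taken from the study's data files.

```python
import pandas as pd

def daily_mean_tukey(hourly: pd.Series, k: float = 1.5) -> pd.Series:
    """Average hourly water levels into daily values after dropping
    Tukey-fence outliers (values beyond Q1 - k*IQR or Q3 + k*IQR)
    within each day."""
    def clean_mean(day: pd.Series) -> float:
        q1, q3 = day.quantile(0.25), day.quantile(0.75)
        iqr = q3 - q1
        kept = day[(day >= q1 - k * iqr) & (day <= q3 + k * iqr)]
        return kept.mean()
    return hourly.groupby(hourly.index.floor("D")).apply(clean_mean)

# Hypothetical usage with an hourly DatetimeIndex series of SWL readings:
# swl_hourly = pd.read_csv("swl.csv", index_col=0, parse_dates=True)["swl_m"]
# swl_daily = daily_mean_tukey(swl_hourly)
```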
| Analytical method
For higher soil hydraulic conductivity, the water levels of the adjacent well and piezometer are closer to equilibrium; in other words, the SWL and GWL are more similar when soil hydraulic conductivity is higher. Under natural conditions, weather and soil heterogeneity cause the SWL and GWL to rarely have the same value. However, the consistency between the SWL and GWL can be an indicator of how strong the interaction between surface water and groundwater is: strong consistency indicates a higher soil hydraulic conductivity and, therefore, increased interaction between surface water and groundwater.
We first investigated the consistency between the SWL and GWL for the two wetlands over the monitoring period. Then, the daily gap between the SWL and GWL (referred to as "GSG" hereafter; GSG = SWL − GWL) was examined for the two wetlands, under the assumption that the wetland with the high-permeability soil layer would have a relatively consistent GSG compared with the one with the low-permeability soil layer. Accordingly, we calculated the daily forward difference of GSG between day+1 and day (ΔGSG = GSG(day+1) − GSG(day)) over the monitoring period and compared the values between wetland #1 and wetland #2. We used the nonparametric Wilcoxon signed-rank test to evaluate whether ΔGSG at wetland #1 differed from ΔGSG at wetland #2 at a significance level of α = 0.05.
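As a sketch of this step, the following Python lines compute GSG and its daily forward difference and run a paired Wilcoxon test. The two input series are assumed to be daily SWL and GWL values aligned on the same dates, and the use of absolute differences for the comparison is an assumption of the example.

```python
import pandas as pd
from scipy.stats import wilcoxon

def delta_gsg(swl: pd.Series, gwl: pd.Series) -> pd.Series:
    """GSG = SWL - GWL; dGSG is its day-to-day forward difference."""
    gsg = swl - gwl
    return gsg.shift(-1) - gsg  # GSG(day+1) - GSG(day)

# Hypothetical daily series for the two wetlands, indexed by date:
# d1 = delta_gsg(swl1_daily, gwl1_daily).dropna()
# d2 = delta_gsg(swl2_daily, gwl2_daily).dropna()
# Paired Wilcoxon signed-rank test on days common to both wetlands,
# comparing the magnitudes of the daily changes:
# common = d1.index.intersection(d2.index)
# stat, p = wilcoxon(d1[common].abs(), d2[common].abs())
# different = p < 0.05
```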
We hypothesized that the change in direction of the SWL and GWL would be less consistent at the wetland with a low-permeability soil layer. We calculated a measure defined as the forward difference between water levels (water level at day+1 − water level at day) for the two wetlands. When the daily change was extremely small (less than 0.05 m), it was not included in this analysis. We counted the number of days with the same and with different changes in the direction of the SWL and GWL. If the change in direction was the same for the SWL and GWL, the day was considered to indicate the same change in direction, and vice versa. Because dry and wet conditions lead to the fall and rise of wetland water levels, respectively, we divided the data sets into dry and wet days for this analysis. This separation helps to compare the responses of the two wetlands under different climatic conditions. Wet (dry) days were defined as days with (without) observed precipitation.
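The direction-agreement count can be sketched as below; thresholds follow the description above, the wet/dry split is driven by an assumed daily precipitation series, and the requirement that both series exceed the minimum change is an interpretation made for the example.

```python
import numpy as np
import pandas as pd

def direction_agreement(swl: pd.Series, gwl: pd.Series, precip: pd.Series,
                        min_change: float = 0.05) -> pd.DataFrame:
    """Count days on which the daily SWL and GWL changes share (or differ
    in) direction, split into wet (precip > 0) and dry days."""
    # Forward differences: value(day+1) - value(day), aligned to 'day'.
    d_swl, d_gwl = swl.diff().shift(-1), gwl.diff().shift(-1)
    keep = (d_swl.abs() >= min_change) & (d_gwl.abs() >= min_change)
    same = np.sign(d_swl) == np.sign(d_gwl)
    wet = precip.reindex(swl.index).fillna(0) > 0
    rows = []
    for label, mask in (("wet", wet), ("dry", ~wet)):
        sel = keep & mask
        rows.append({"condition": label,
                     "same_direction": int((same & sel).sum()),
                     "different_direction": int((~same & sel).sum())})
    return pd.DataFrame(rows)
```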
FIGURE 2 Comparison of soil characteristics between wetland #1 and wetland #2: (a) saturated hydraulic conductivity from the Soil Survey Geographic database and (b) soil compaction from in situ observations. Note: For wetland #1, saturated hydraulic conductivity (mm/day) is 3,600-12,000 (depth of 0-5.1 cm), 350-1,200 (depth of 5.1-30.5 cm), 30-120 (depth of 30.5-157.5 cm), and 120-12,000 (depth of 157.5-203.2 cm). For wetland #2, saturated hydraulic conductivity (mm/day) is 3,600-61,000 (depth of 0-5.1 cm), 690-1,600 (depth of 5.1-25.4 cm), 1,600-3,000 (depth of 25.4-81.3 cm), 170-2,200 (depth of 81.3-116.8 cm), and 90-1,300 (depth of 116.8-203.2 cm). The soil survey data were derived from the physical soil properties of HoB in Caroline County, Maryland (wetland #1) and WhA in Queen Anne's County, Maryland (wetland #2). Soil penetration resistance was collected on July 11 (wetland #2) and July 12 (wetland #1), 2017, respectively.

A cross-correlation time-series analysis method was employed to calculate the time-lagged correlation of the two wetland time-series data sets using the "tseries" module of the R program (Trapletti & Hornik, 2018). In the present study, the GWL was overlapped with the lagged SWL to measure the similarity between the GWL (observed at t) and the SWL with positive (observed at t + n, where n is a time step) or negative (observed at t − n) lag times. A strong cross-correlation between the GWL and the SWL with a positive or negative lag time would indicate that the SWL leads the GWL or that the GWL leads the SWL, respectively. This approach can quantify the strength of the vertical hydraulic gradient between the SWL and GWL. Given the downward hydraulic gradient and flux processes, the SWL is expected to show changes first, and the GWL will then respond to these changes at a wetland where surface water and groundwater are well connected. Thus, we hypothesized that the strongest correlation would be found between the SWL, with a negative lag time, and the GWL at the wetland with a high-permeability soil layer. The correlation would be weaker at a wetland with a low-permeability soil layer relative to one with a high-permeability soil layer. Hourly data were used for this analysis. When missing data were encountered, the missing values were replaced with the observation from the hour before. There were 11 and nine missing data points for the GWL and SWL at wetland #2, respectively. Using the Box-Ljung test (R CT, 2017), we confirmed that all input data are stationary (their mean and variance are steady over the monitoring period), which met the requirements of the cross-correlation analysis.
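The study ran this analysis with R's "tseries" package; the following Python sketch reproduces the same idea, a normalized cross-correlation over a window of hourly lags, so the lag conventions above can be seen concretely. Variable names and the lag window are illustrative assumptions.

```python
import numpy as np

def cross_correlation(gwl: np.ndarray, swl: np.ndarray, max_lag: int = 48):
    """Correlation between GWL(t) and SWL(t + lag) for lag in
    [-max_lag, max_lag] hours. A peak at a negative lag means the SWL
    leads the GWL (downward propagation of a rain signal)."""
    gwl = (gwl - gwl.mean()) / gwl.std()
    swl = (swl - swl.mean()) / swl.std()
    lags = np.arange(-max_lag, max_lag + 1)
    corr = []
    for lag in lags:
        if lag < 0:
            x, y = gwl[-lag:], swl[:lag]
        elif lag > 0:
            x, y = gwl[:-lag], swl[lag:]
        else:
            x, y = gwl, swl
        corr.append(np.mean(x * y))
    return lags, np.array(corr)

# Hypothetical usage with hourly arrays:
# lags, corr = cross_correlation(gwl_hourly, swl_hourly)
# best = lags[np.argmax(corr)]  # e.g., around -10 to -18 hr at wetland #2
```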
Finally, we sought to explore wetland hydroperiod. Relative to a wetland with a high-permeability soil layer, a wetland with a low-permeability soil layer could sustain a more stable SWL regardless of the variation in the GWL due to limited water infiltration into subsurface layers. We hypothesized that the wetland with a lowpermeability soil layer would show a longer hydroperiod compared to the wetland with a high-permeability soil layer. We compared the wetland hydroperiod between the two wetlands by counting the number of days with the SWL above the wetland invert elevation. Additionally, the number of days with GWL above the wetland invert elevation was simultaneously considered to examine the groundwater contribution to the wetland hydroperiod.
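A hedged sketch of the hydroperiod count described here: it simply counts the days on which the daily water level exceeds the wetland invert elevation, applied to both the SWL and the GWL series (names are illustrative).

```python
import pandas as pd

def hydroperiod_days(level_daily: pd.Series, invert_elevation: float) -> int:
    """Number of days the daily water level exceeds the wetland invert
    elevation (i.e., days with surface inundation or saturation)."""
    return int((level_daily > invert_elevation).sum())

# Hypothetical usage; the invert elevation is a site-specific datum:
# print(hydroperiod_days(swl_daily, invert_elev))  # e.g., >320 days
# print(hydroperiod_days(gwl_daily, invert_elev))  # groundwater contribution
```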
| Surface water and groundwater levels
The SWL and GWL over the monitoring period are presented at daily and monthly time steps (Figure 3). Wetland #1 showed low consistency between the SWL and GWL at daily and monthly time steps compared with wetland #2 (Figure 3). At wetland #1, the ranges of variation of the daily SWL and GWL over the monitoring period were −0.4 to 0.4 m and −1.5 to 0.3 m, respectively. As indicated by these ranges, the daily SWL at wetland #1, with a small range of variation (0.8 m), was relatively consistent. However, the daily GWL at wetland #1, with a large range of variation (1.8 m), showed noticeable monthly changes, for example, an increase from January to February and a decrease from March to December (Figure 3c). In contrast, wetland #2 showed a smaller difference between the ranges of daily variation of the SWL (1.3 m) and GWL (1.4 m) compared with wetland #1 (Figure 3b). In addition, the monthly patterns of the SWL and GWL were similar at wetland #2: both were high from January to April, gradually decreased from May to August, and rose again from September (Figure 3d).

FIGURE 3 The surface water level and groundwater level at daily (a,b) and monthly (c,d) time steps. Note: The vertical bars indicate daily and monthly precipitation. Pink (wet periods) and green (dry periods) bands in (b) relate to Figure 9 (Section 3.4), and this figure relates to Section 3.2.
Inconsistencies between the SWL and GWL at wetland #1 are clearly seen in the scatter plot (Figure 4). Points at wetland #1 showed a disproportionate relationship between the SWL and GWL, with the SWL remaining high while the GWL was low (Figure 4a). As shown in Figure 3, the SWL was consistently high, whereas the GWL tended to decrease from March to December at wetland #1. The strongly correlated points at wetland #2 indicated that when the GWL was high (low), the SWL was also high (low; Figure 4b). Consistent with the point distributions, the coefficient of determination (R²) was also greater at wetland #2 (0.95) than at wetland #1 (0.5). The point within the red circle at wetland #2 shows that the greatest precipitation over the monitoring period at wetland #2 did not coincide with the overall linear trend. This was likely because an extremely heavy rain event (93 mm) caused an abrupt increase in the SWL, but a commensurate increase in the GWL did not occur. In effect, even wetland #2 became perched under extremely high rainfall conditions. ΔGSG values over the monitoring period at wetland #1 were significantly higher than those at wetland #2 (p value < 0.01, Figure 5). The median values were 0.02 at wetland #1 and 0.008 at wetland #2. The high inconsistency between the SWL and GWL at wetland #1, due to the low-permeability soil layer, resulted in a high ΔGSG, whereas the relationship between the SWL and GWL at wetland #2 indicated high consistency, leading to a small ΔGSG.
| Change in the direction of the SWL and GWL
Changes in the direction of the SWL and GWL are shown in Figure 6 and summarized in Table 1. The greatest difference in the change in direction between the SWL and GWL at wetland #1 was shown on dry days with a decreased SWL and an increased GWL (subportion C). This case was observed on the dry days following heavy precipitation events (Figure 7a; blue points in Figure 4a). The SWL decreased for a few days after precipitation, whereas the GWL increased (Figure 7a). Lateral groundwater from contributing areas likely flowed into wetland #1, leading to an increase in the GWL for a few days after precipitation. In contrast, both the SWL and GWL at wetland #2 decreased during the same period due to the high saturated hydraulic conductivity (Figure 7c). The same case (i.e., decreasing SWL and increasing GWL, subportion C) was also observed at wetland #1 on wet days with light rain following a heavy rain event (Figure 7b; green points in Figure 4a). Rain events with a small amount of precipitation might not be sufficient to increase the SWL, but the GWL increased, likely owing to lateral groundwater flow from contributing areas (Figure 7b), as shown in Figure 7a. During the same period, the responses of the SWL and GWL to climatic conditions were consistent at wetland #2 (Figure 7d).

FIGURE 4 Scatter plot between the surface water level and groundwater level at wetland #1 (a) and wetland #2 (b). Note: Green and blue points in (a) are further analyzed in Figure 7. P value < 0.001 for both (a) and (b). The point within a red circle indicates the day with the greatest precipitation over the monitoring period.
Interestingly, two consecutive rain events with heavy precipitation (start day and the following day) led to large increases in the SWL at wetland #2 (Figure 7c,d). However, an increase in the SWL occurred only for the first rain event at wetland #1 (Figure 7a,b).
Minimal water infiltration through the low-permeability soil layer at wetland #1 likely caused the wetland to hold a large amount of surface water and, therefore, reach its maximum SWL at the first heavy rain. As a result, an increase in the SWL at wetland #1 did not occur during the following rain event. When the amount of precipitation was extremely small (0.3 mm), the SWL and GWL decreased at wetland #2 (Figure 7d).
| Cross-correlation time-series analysis
The strongest cross-correlation between the SWL and GWL, of 0.98, was observed at wetland #2 (Figure 8). The response to precipitation events occurred first in the SWL and then in the GWL after 10-18 hr, which clearly indicated a strong vertical gradient between the SWL and GWL (Figure 8a). The cross-correlation between the SWL and GWL was much weaker (R = 0.7) at wetland #1, with no observed lag period for responses (Figure 8b). These findings were indicative of a very weak hydraulic gradient between the SWL and GWL at wetland #1 relative to wetland #2. Considering the downward hydraulic gradient and soil heterogeneity, any variations shown in the SWL would eventually appear in the GWL after a specific time that differs by site physical characteristics. Thus, this finding confirmed that the SWL and GWL were well connected at wetland #2, whereas the connection of the SWL with GWL at wetland #1 was limited due to differences in subsurface soil characteristics.
At wetland #2, the cross-correlation pattern and the lag time at which it peaked differed by climatic conditions (Figure 9). During dry periods (green bands in Figure 3), the correlation of the SWL and GWL with no lag time was the strongest, whereas the correlation of the SWL with a negative lag time and the GWL was the strongest during wet periods (pink bands in Figure 3). This was because precipitation caused downward water movement from the surface water to the groundwater, leading to differing responses of the SWL and GWL. However, downward water movement rarely occurs during dry periods. The changes in water levels during wet periods were rapid and large and were measurable by our sensors. In contrast, the timing and magnitude of changes in water levels during dry periods were extremely slow and small, respectively, resulting in an extremely small change in hydraulic head differential that was not detectable. Thus, no lag between the SWL and GWL could be measured during the dry periods.

FIGURE 6 Daily change in the surface water level and groundwater level on wet and dry days at wetland #1 (a) and wetland #2 (b). Note: Days with a daily change <0.05 m are not considered; 122 and 198 days are not included in (a) and (b), respectively. A dry (wet) day is defined as a day without (with) observed precipitation. The numbers of dry and wet days are 174 and 70 in (a) and 114 and 54 in (b), respectively. The numbers of dry and wet days for each portion are available in Table S1. To clearly show the differences between wetland #1 and wetland #2, the range of the axes was limited from −0.2 to 0.3 m, and therefore a few extreme observations are not shown here. All data are shown in Figure S2.

TABLE 1 Note: The percentage values in parentheses denote the proportion of days with a different change direction relative to the total days.
| Wetland hydroperiod
Contrary to our hypothesis, the two wetlands exhibited a long hydroperiod. SWLs for the two wetlands were found to exceed the wetland invert elevation for more than 320 days out of the year (Figure 10).
Similar to wetland #1, the SWL at wetland #2 was mostly higher than the wetland invert elevation, although the substantial infiltration at wetland #2 had been expected to cause a short hydroperiod. However, the groundwater contribution to the wetland hydroperiod differed between the two wetlands. The GWL was higher than the wetland invert elevation for 48 and 215 days of the monitoring period at wetlands #1 and #2, respectively. It can be deduced that wetland #1 had a high capacity to sustain surface water with minimal groundwater contribution compared with wetland #2.
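The hydroperiod statistics quoted here reduce to counting exceedance days, as in the short sketch below; the file name, column names, and the zero datum for the invert elevation are assumptions for illustration.

```python
import pandas as pd

# Hypothetical input; the invert elevation would come from the site survey.
INVERT_ELEV_M = 0.0  # illustrative datum: water levels relative to the invert

df = pd.read_csv("wetland_daily.csv", parse_dates=["date"], index_col="date")

# A day counts toward the hydroperiod if the daily mean level sits above the invert.
sw_days = int((df["swl"] > INVERT_ELEV_M).sum())
gw_days = int((df["gwl"] > INVERT_ELEV_M).sum())

print(f"SWL above invert: {sw_days} days; GWL above invert: {gw_days} days")
```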
The unexpectedly long hydroperiod of wetland #2 might be attributed to the short monitoring period or to lateral groundwater flow from contributing areas. Short-term groundwater observations could be inadequate for generalizing wetland inundation patterns at wetland #2. Lateral subsurface flow from surrounding areas could be one factor behind the observed seasonal changes in the horizontal groundwater direction (Denver et al., 2014). In conjunction with additional sensors, data collection over a longer period could help elucidate the role of lateral groundwater movement and, therefore, its contribution to the long hydroperiod of wetland #2.
| IMPLICATIONS AND LIMITATIONS
The strong relationship between the GWL and wetland inundation at wetland #2 demonstrates the sensitivity of the wetland hydroperiod to the GWL within the surficial aquifer. Ditch drainage is prevalent in the region and can readily draw down surficial GWLs adjacent to a ditch.
FIGURE 7 Representative cases of a decreasing surface water level and increasing groundwater level on dry and wet days at wetland #1 (a and b) and changes in the surface water level and groundwater level at wetland #2 for the corresponding periods (c and d).

The use of irrigation, primarily for corn and soybean production, has increased markedly within the last 30 years. For example, over a 20-year period, irrigated croplands in Maryland increased from 162 to 283 km², leading to substantial groundwater withdrawal, usually from the surficial aquifer (Wolman, 2008). Increased water use by crops via increased evapotranspiration may lower GWLs, which may further amplify the downward vertical gradients and subsequently dewater wetlands faster. Our findings help to characterize wetland hydrologic dynamics and potential responses to physical conditions and human activities. This information can be used to develop watershed management plans that better preserve wetland ecosystem services.
Based on the observed hydrological processes (Figure 7), it is expected that a wetland with a low-permeability soil layer might be less effective at mitigating peak flows from consecutive heavy rain events compared with a wetland with a high-permeability soil layer.
Limited water infiltration at wetland #1 led the wetland to reach its maximum water storage in a single heavy rain event; therefore, the SWL of wetland #1 rarely changed in response to subsequent heavy rain (Figure 7a,b), potentially causing spillage from wetland #1 during subsequent heavy rain events. In contrast, the SWL of wetland #2 continued to increase in response to subsequent heavy rain events (Figure 7c,d). High infiltration at wetland #2 quickly drained water within the wetland into the groundwater system, sustaining its water-holding capacity. Therefore, wetland restoration plans for areas that frequently experience heavy rain events should consider subsurface soil characteristics to effectively control flooding conditions.
In this region, GWLs are known to exhibit seasonal variations: for example, a high level during early spring, a declining pattern from early summer (June) to late fall (November), and rising levels during the winter (Fisher et al., 2010). Consistent with this pattern, inundated wetlands are frequently observed during early spring (Huang et al., 2014). Overall, the SWL and GWL at wetland #2 were consistent with the local seasonal pattern observed in the study of Fisher et al. (2010).

FIGURE 10 The number of days when the surface water level or groundwater level exceeded the wetland invert elevation. Note: There were 118 and 116 wet days (daily precipitation > 0) at wetlands #1 and #2, respectively.
However, the SWL and GWL at wetland #1 were less consistent with previous observations than those at wetland #2. Because wetland #2 and the groundwater gauge location in Fisher et al. (2010) are in the same hydrogeologic region (coastal plain upland), wetland #2 might be expected to compare better with previous observations. Contrary to prior observations, the SWL at wetland #1 did not show a sharp increase and decrease during the summer season due to the presence of a low-permeability soil layer, and the GWL at wetland #1 was low during the winter season. Given the observed seasonal changes in groundwater direction for this region (Denver et al., 2014), lateral subsurface flow at wetland #1 might explain the low GWL during the winter season.
As mentioned above, evidence for the interaction of lateral groundwater flow and wetland conditions was limited in this study due to the lack of supporting measurements. Thus, the installation of additional wells and piezometers is essential for improved understanding of wetland-groundwater interactions in this region.
In addition to the low GWL at wetland #1 during the winter season, several hydrological processes observed in this study offered clues to the impact of lateral groundwater flow on wetland water levels: the sudden increase in the GWL in February relative to the small amount of precipitation at wetland #1 (Figure 3c), the increased GWL with a decreasing SWL at wetland #1 (Figure 7a,b), and the long hydroperiod of wetland #2 (Figure 10). As noted in previous studies (McLaughlin & Cohen, 2013; Pyzoha et al., 2008), lateral groundwater flow is a key driver of the fluctuation of wetland water levels in the coastal plain region. Thus, examination of lateral groundwater flow is essential to accurately interpret wetland hydrologic processes and to understand wetland behavior in this region. Additional sensors will be implemented on the coastal plain of the CBW under ongoing CEAP-wetland projects. With these new measurements, a future study will explore lateral groundwater impacts on wetland hydrology.
| CONCLUSIONS
This study demonstrates distinctive patterns of the SWL and GWL between two wetlands with differing subsurface soil hydraulic conductivity. The wetland with a low-permeability soil layer (wetland #1) showed low consistency (similarity in water level dynamics over time; R² = 0.50) between the SWL and GWL, whereas high consistency (R² = 0.95) was seen at the wetland with a high-permeability soil layer (wetland #2). A cross-correlation time-series analysis further demonstrated the strong vertical hydraulic gradient between the SWL and GWL at wetland #2, whereas there was limited vertical connection between the two water levels at wetland #1. This was likely caused by the low-permeability soil layer at wetland #1, which limited vertical water recharge to groundwater. As a result, wetland #1 did not rely on substantial groundwater contributions to maintain its hydroperiod, whereas the hydroperiod of wetland #2 relied heavily on groundwater contributions. This is the first study to document the dependence of wetland hydrologic characteristics on subsurface soil characteristics within the nationally and internationally important coastal plain of the CBW. This study is unique because it documents high-temporal-resolution water levels both near the surface and at deeper depths. By pairing two sites in close proximity but with different subsurface soil characteristics, the side-by-side results document the differing hydrologic dynamics brought on by those differences. Therefore, the findings of this study contribute to the understanding and interpretation of various wetland ecosystem services for this region and underscore the critical importance of subsurface soil characteristics to wetland inundation behavior.
Lipotoxicity-related sarcopenia: a review
A body of literature supports the postulation that a persistent lipid metabolic imbalance causes lipotoxicity, “an abnormal fat storage in the peripheral organs”. Hence, lipotoxicity could somewhat explain the process of sarcopenia, an aging-related, gradual, and involuntary decline in skeletal muscle strength and mass associated with several health complications. This review focuses on the recent mechanisms underlying lipotoxicity-related sarcopenia. A vicious cycle occurs between sarcopenia and ectopic fat storage via a complex interplay of mitochondrial dysfunction, pro-inflammatory cytokine production, oxidative stress, collagen deposition, extracellular matrix remodeling, and life habits. The repercussions of lipotoxicity exacerbation of sarcopenia can include increased disability, morbidity, and mortality. This suggests that appropriate lipotoxicity management should be considered the primary target for the prevention and/or treatment of chronic musculoskeletal and other aging-related disorders. Further advanced research is needed to understand the molecular details of lipotoxicity and its consequences for sarcopenia and sarcopenia-related comorbidities.
INTRODUCTION
Several experimental and clinical studies have shown an association between advanced age and an inevitable gradual decrease in skeletal muscle strength and mass, known as sarcopenia [1]. Sarcopenia usually begins in the fifth decade of life and has been linked to an increased incidence of falls and fractures [2], as well as a loss of functionality and independence [3], which leads to increased morbidity and/or mortality [4]. Sarcopenia is histologically characterized by a reduction in the cross-bridging components between muscle fibers, smaller and/or fewer mitochondria in muscle cells, atrophy of type II myofibers, and tissue necrosis [1]. Published evidence has also shown that adipose tissue infiltration of the skeletal muscle predicts a loss of muscle power in the elderly, even in those who maintain a healthy weight [4].
Adipose tissue is an immune endocrine organ that also serves an energy storage function [5]. Triglycerides are hydrolyzed intracellularly by lipases into free fatty acids and glycerol for transportation to extra-adipose tissues, where they are oxidized by mitochondria. If the hydrolysis process exceeds the capacity to esterify intracellular free fatty acids, the resulting net release of free fatty acids can have many adverse effects, such as cytotoxicity, ectopic storage, and susceptibility to lipotoxicity insult [6,7]. In aging humans, despite an increase in the total percentage of visceral fat, the capacity of white adipose tissue (lipid storage) to buffer plasma non-esterified fatty acids (the end products of fasting lipolysis) diminishes due to impaired adipogenesis [2]. Obesity causes a further formation of excessive triglyceride deposits, known as steatosis or ectopic fat deposits, in several tissues, such as muscle, heart, pancreas, and liver [8,9].
The metabolic profile of skeletal muscle fibers is either predominantly glycolytic (essentially using glucose) in fast-twitch type II fibers or predominantly oxidative (essentially using lipids) in slow-twitch type I fibers [10,11]. While fatty acid oxidation is relatively high in skeletal muscle, lipid overload can still occur, eventually triggering muscle cell death through insulin resistance and other mechanisms.
MECHANISMS UNDERLYING THE DEVELOPMENT OF LIPOTOXICITY-RELATED SARCOPENIA
Lipotoxicity is a systemic disorder associated with metabolic and senescence diseases, such as obesity and sarcopenia. The pathogenesis of lipotoxicity-related sarcopenia takes place through a cascade of intermingled mechanisms. Lipotoxicity leads to ectopic storage of lipids in the skeletal muscles (myosteatosis) and enhances the release of adipokines, cytokines, and chemokines, eventually leading to chronic sterile inflammation of muscles and impaired function of their mitochondria. The end result is a reduced capacity to consume fatty acids, followed by oxidative stress, insulin resistance, calcium store depletion, protein degradation, and extracellular matrix changes ( Figure 1).
Myosteatosis
In lean individuals, triacylglycerol (TAG) normally represents 0.5% of the skeletal muscle volume, but this percentage can increase to 3.5% in obesity [12]. The increase in body fat mass is associated with ectopic fat deposits that occur preferentially in muscles. This is termed myosteatosis [9] and appears to act synergistically with sarcopenia, as shown in Figure 1. Myosteatosis should be regarded as a physiologically accelerated degenerative process arising from concomitant lipotoxic stress [3,13]. It manifests as intramyocellular lipids, which accumulate when the inflow of fatty acids exceeds the oxidative capacity of skeletal muscles [14], and as adipocytes of the extramuscular adipose tissue arising from stimulation of adipogenic metabolism. Some research has shown that aged persons with high muscle fat infiltration in the midthigh have a high incidence of mobility impairment over a 2.5-year follow-up period [15]. Similarly, fat accumulation in the middle-aged can develop into fibrosis, further impairing muscle movement and function [16].
Chronic sterile low-grade inflammation
Adipose tissue secretes many different factors, including pro-inflammatory cytokines, extracellular matrix proteins, pro-thrombotic factors, and chemokines [17]. Macrophages also release pro-inflammatory cytokines that activate a large number of stress-signaling cascades. These cascades upregulate CD11c surface expression in adipose-resident macrophages and stimulate them to assume a pro-inflammatory secretory profile [18]. The released chemical factors exacerbate and trigger other stress-signaling cascades, causing the release of free fatty acids and, ultimately, lipotoxicity [19].
Crosstalk has recently been identified between inflamed skeletal muscle and adipose tissue, generating an age-related and harmful vicious cycle that may be the key conjoining mechanism between lipotoxicity and sarcopenia [12,20]. Sequences of pro-inflammatory cytokine signaling and cellular stress responses are triggered by lipotoxicity, thereby depleting the preadipocyte progenitor pool. The muscles then switch to a pro-inflammatory condition similar to that of macrophages [21].
These changes are exacerbated by aging, as skeletal muscle fibers become damaged by fatty acids and inflammation while also losing their capacity to store lipotoxic acids. The further release of pro-inflammatory cytokine signals and the resulting vicious feedback loop have a profound impact on skeletal muscle fibers and motor function and can play a significant role in sarcopenia [22].
Oxidative stress
Many studies have shown that feeding a high-fat diet increases reactive oxygen species and causes nitric oxide imbalances, thereby altering cellular antioxidant defense systems. The result is cellular membrane disruption, decreased protein synthesis due to endoplasmic reticulum stress, and activation of muscle fiber apoptosis [23]. In elderly individuals, changes in the intramyocellular ultrastructure have been correlated with transcriptional alterations related to mitochondrial dysfunction and lipid metabolism [24]. Increased levels of reactive oxygen species in type I muscle fibers, and disturbances in cellular homeostasis predispose muscles to impairments in the function and integrity of neuromuscular junctions [25].
Insulin resistance
In skeletal muscle, exercise and the binding of insulin to its tyrosine-kinase receptors exert several biological effects, including protein synthesis and glucose metabolism (Figure 2). Auto-phosphorylation of the receptor leads to the recruitment of insulin receptor substrate (IRS)-1, which guides downstream pathways [26]. When phosphatidylinositol 3-kinase (PI3K) is activated, it promotes phosphorylation of protein kinase B (PKB)/AKT and enables glucose uptake via translocation of glucose transporter (GLUT)-4. The phosphorylation of glycogen synthase kinase 3 (GSK3) promotes glycogen synthesis. All of these mechanisms serve to store and dispose of glucose. Additionally, PKB/AKT stimulates the mammalian target of rapamycin (mTOR), ribosomal S6 kinase 1 (S6K1), and 4E-binding protein 1 (4E-BP1), which are involved in trophism, the anabolic metabolism of muscle mass, and protein synthesis [26].
Another important signaling pathway is AMP-activated protein kinase (AMPK), which promotes free fatty acid and glucose metabolism and modulates long-term mitochondrial responses by interacting with peroxisome proliferator-activated receptor-gamma coactivator 1α (PGC-1α) [27]. In the presence of intracellular energy deficiency, AMPK inhibits protein synthesis by suppressing mTOR signaling [28].
One direct effect of insulin on the muscle phenotype is the suppression of protein catabolism [29]. Insulin resistance is a senescence morbidity reciprocally associated with sarcopenia [30]. Insulin resistance inhibits β-oxidation, increases the supply of free fatty acids, and alters triglyceride transport, resulting in steatosis [31] and suppression of the growth hormone (GH)-insulin-like growth factor 1 (IGF1) axis responsible for muscle protein synthesis [32]. The hyperinsulinemia resulting from insulin resistance directly accelerates muscle degradation and decelerates protein synthesis [33], thereby leading to an increased production of myostatin that reduces muscle mass [34].
Leptin resistance
Central (visceral) obesity is a well-known pathological condition where the adipose tissue represents an actively secreting organ, contributing to the release of several pro-inflammatory cytokines that enhance local and systemic inflammation (Figure 3).
Leptin, secreted by adipose tissue, acts as a pro-inflammatory hormone, especially in subjects with sarcopenic obesity rather than in those with either visceral obesity or sarcopenia alone [4]. Hyperleptinemia could be due to defective signaling at the hypothalamic neurons and leptin resistance [35].
In healthy subjects, leptin stimulates AMPK in skeletal muscles. This pathway is suppressed in obese individuals, which is attributed to the increased hypothalamic expression of the obesity-related suppressor of cytokine signaling 3 (SOCS3). In experimental models, SOCS3 inhibits leptin activation of AMPK, contributing to the impaired fatty acid metabolism in skeletal muscle [36].
Extracellular matrix remodeling
The extracellular matrix (ECM) plays crucial roles in skeletal muscle development [38], biomechanics [39][40][41], regeneration [42,43], motor endplate function [44,45], and glucose metabolism [46]. Consequently, the pathophysiologies of many skeletal muscle diseases, such as different varieties of muscular dystrophy [47] and senescence-associated sarcopenia [48,49], likely involve a remodeling of ECM components. ECM remodeling is associated with the consumption of high-fat diets, and an increased collagen content has an apparent association with insulin resistance in skeletal muscles [46]. The collagen deposition and extracellular matrix remodeling triggered by lipotoxicity cause changes in the functionality of the sarcoplasmic reticulum, resulting in impaired fiber contractility [50]. Lipotoxicity has also been implicated in tubulointerstitial fibrosis in the kidney, through increased expression of connective tissue growth factors and the promotion of apoptosis [51]. However, evidence for the operation of a similar mechanism in lipotoxicity-related sarcopenia is presently lacking.
Calcium imbalance
Lipid stress may cause transitions in calcium cycling protein isoforms, thereby influencing calcium homeostasis, calcium signaling, and muscle twitch (fiber excitation, contraction, and relaxation) [52,53]. Sarcopenia is associated with the dysfunctional enlargement of mitochondria, which increases mitochondrial Ca²⁺ uptake and depletes calcium in the myofibers [54].
Myofiber atrophy
A synergistic relationship exists between muscle loss and skeletal muscle fatty infiltration, suggesting that fat accumulation might accelerate the pathogenesis of sarcopenia [13]. In aged rats, a high-fat diet results in intramyocellular accumulation of fatty acids. This increase in fatty acids is correlated with impaired skeletal muscle protein synthesis [55] and may indicate that the increased accumulation of lipid metabolism byproducts, such as ceramide, has adverse effects on mitochondrial performance [56]. By contrast, the accumulation of adipocytes in skeletal muscles has negative effects on the muscle phenotype and promotes muscle atrophy, as shown in both humans and rats by Pellegrinelli and collaborators [57]. Impaired autophagy and reduced numbers of satellite cells, as occur in overweight-related sarcopenia, further contribute to muscle wasting [58].
Lipotoxicity triggers cell death in smooth muscles [59] as well as in cardiac muscles [60]. The result is smooth muscle cell proliferation, muscle remodeling, pathological alterations in vascular tone, vascular foam cell formation, and plaque destabilization. In the heart, the effects can include abnormal right ventricle geometry, increased left ventricular mass, enlarged atrial chemotaxis, and cardiomyopathy [61,62].
Endoplasmic reticulum stress
The endoplasmic reticulum (ER) plays a significant role in protein, lipid, glycogen, and calcium metabolism [63]. Lipotoxicity, by increasing reactive oxygen species and oxidative stress, can trigger ER stress and, consequently, the accumulation of unfolded proteins in the ER [64]. Normally, the unfolded protein response (UPR) compensates for this stress, and the ER restores its normal function in maintaining protein and lipid homeostasis [65]. Nevertheless, under prolonged stress, as in lipotoxicity, ER stress can lead to apoptosis [66]. ER stress can also induce insulin resistance [67] and anabolic intolerance in skeletal myocytes [68].
HISTOLOGICAL CRITERIA OF LIPOTOXICITY-RELATED SARCOPENIA
Quantitative histological staining with Sudan black dye reveals myosteatosis as one of the most apparent histological changes occurring in lipotoxicity-related sarcopenia. The relative fat mass in muscles can be measured using the Lipid Accumulation Index (LAI) and quantified as described previously [55]: total area with lipid droplets of the muscle fiber × 100 / total cross-sectional area of the muscle fiber. Myofiber atrophy has been histologically defined in obese rat skeletal muscles as heterogeneity in the cross-sectional areas of the myofibers [57]. In elderly humans with sarcopenia, the fast-twitch myofibers show considerable reductions in diameter [69].
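The LAI formula above is a simple ratio; the sketch below spells it out in Python. The function name and the example values are illustrative assumptions, not values from the cited study.

```python
def lipid_accumulation_index(lipid_droplet_area: float,
                             fiber_cross_section_area: float) -> float:
    """LAI = area occupied by lipid droplets x 100 / total fiber cross-sectional area.

    Both areas must be in the same units (e.g. um^2 from image analysis).
    """
    if fiber_cross_section_area <= 0:
        raise ValueError("fiber cross-sectional area must be positive")
    return lipid_droplet_area * 100.0 / fiber_cross_section_area

# Illustrative values only: a fiber of 2,500 um^2 with 87.5 um^2 of droplets.
print(f"LAI = {lipid_accumulation_index(87.5, 2500.0):.1f}%")  # -> 3.5%
```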
Satellite cells decline in number and function in aging skeletal muscles [70][71][72]. Proteomic analysis, quantitative immunofluorescence, and ultrastructural morphological and morphometrical analysis have shown that matrisome changes accompany the aging of skeletal muscle in the form of increases in some proteins, such as collagens IV and VI and laminin. In aged rats, these changes present as more linear and larger collagen bundles in the perimysium and thickening of the endomysium of the gastrocnemius [49].
CONCLUSION
The important repercussions of lipotoxicity in patients with sarcopenia are that it increases disability, morbidity, and mortality. Therefore, appropriate lipotoxicity management should be considered a primary target for the prevention and/or treatment of chronic musculoskeletal and other aging-related disorders. Further research advances are needed to better understand the molecular details underlying lipotoxicity and its consequences for sarcopenia and sarcopenia-related comorbidities.
Depressive Symptoms and the Risk of Ischemic Stroke in the Elderly—Influence of Age and Sex
Although a relationship between depression and cardiovascular events has been suggested, past study results regarding the risk of stroke in relation to depression by subgroup are ambiguous. The aim of this study was to investigate the influence of depressive symptoms on the risk of incident ischemic stroke in the elderly according to age and sex. This prospective cohort study followed up 3852 subjects older than 55 years. Baseline depressive symptoms were defined by a score ≥5 on the Geriatric Depression Scale or antidepressant intake. The outcome measure was incident ischemic stroke within 6 years of follow-up. Multivariate Cox proportional hazards models as well as cumulative survival analyses were computed. A total of 156 ischemic strokes occurred during the study period (24 strokes in the age group <65 years and 132 strokes in the age group ≥65 years). The distribution of strokes in the sex subgroups was 4.5% in men and 3.7% in women. The multivariate analysis showed an elevated stroke risk (hazard ratio (HR): 2.84, 95% CI 1.11-7.29, p = 0.030) in subjects aged 55 to 64 years with depressive symptoms at baseline but not in subjects older than 65 years. In the multivariate analysis according to sex, the risk was increased in women (HR: 1.62, 95% CI 1.02-2.57, p = 0.043) but not in men. The Cox regression model for interaction showed a significant interaction between age and sex (HR: 3.24, 95% CI 1.21-8.69, p = 0.020). This study corroborates that depressive symptoms pose an important risk for ischemic stroke, which is particularly remarkable in women and patients younger than 65 years.
Introduction
Stroke prevention requires the treatment of modifiable "classical risk factors" such as coronary heart disease (CHD), hypertension, cigarette smoking, diabetes mellitus, hyperlipidemia, obesity, atrial fibrillation, and physical inactivity, as well as recently suggested or less well studied risk factors such as metabolic syndrome, excessive alcohol consumption, drug abuse, use of oral contraceptives, sleep-disordered breathing, migraine, hyperhomocysteinemia, elevated lipoprotein(a), hypercoagulability, inflammation, and infection [1]. Although affective disorders have not, to date, been formally established as an independent risk factor for stroke, attention to this potential risk factor has continued to increase in the past decade [2]. The lifetime incidence of depression has been estimated at more than 16% in the general population [3], and an association between depression and medical diseases has been shown for diabetes [4], cardiovascular disease [5,6,7,8], and hypertension [9]. A link between depression and the incidence of stroke has been strengthened by two recent meta-analyses [2,10], but findings in elderly subgroups by age and sex are still partly ambiguous [11], and separate risk estimates for them are rarely available. Furthermore, the studies included in the meta-analyses used different endpoints, as some studies included intracerebral hemorrhage and transient ischemic attacks (TIA) [2,10].
Several prospective studies have investigated the relation between depressive symptoms and the incidence of stroke. However, studies using depression as a predictor have yielded mixed results. Whereas a recent large population-based study (n = 80 574 women aged 54 to 79 years, the Nurses' Health Study) showed an increased stroke risk in women with a previous diagnosis of depression [12], other studies found no clear evidence of depression being a significant risk factor for cerebrovascular diseases [13,14] or an increased risk only for patients younger than 65 years [11]. Furthermore sex differences in the clinical course and incidence of depression have been repeatedly shown [15,16].
In the present study, we investigated the association between depression and incident ischemic stroke among elderly patients, with adjustment for established risk factors. Because the incidence and characteristics of both depression and stroke change with age, we followed the example of the Framingham study and performed separate analyses for subjects younger than 65 years and for subjects aged 65 years or older [11], and moreover for men and women separately.
Subjects
This study is based on data from the INVADE trial (intervention project on cerebrovascular diseases and dementia in the district of Ebersberg), a population-based longitudinal study of general-practice patients. The study population is made up of the inhabitants of the district of Ebersberg, Bavaria, Germany, who were born before 1946 and were members of the public health insurance AOK (Allgemeine Ortskrankenkasse). In Bavaria, the AOK is the biggest public health insurer, with a market share of over 40%. At the beginning of the year 2001, all members were invited to participate [17,18,19]. During the baseline period (2001-2003), 3908 subjects accepted the invitation to participate. Ultimately, complete data for both the Geriatric Depression Scale and antidepressant intake were available for n = 3852. Of the 533 subjects who dropped out within the study period, 477 dropped out because of death and only 56 (1.4% of the whole study population) because of other reasons, e.g., emigration or a change of health insurance. The median follow-up time until either the occurrence of an event or the end of the study was 6.13 years.
Baseline Investigation
The baseline investigation was performed by 65 primary care physicians of the district of Ebersberg and included a standardized questionnaire, a physical examination, evaluation of several risk factors, medical and disease history, a 12-lead ECG, and an overnight fasting venous blood sample for laboratory analysis including serum glucose, lipids, and creatinine as well as high-sensitivity C-reactive protein (hs-CRP). Information on medical history, current health status, cognitive status, mood disorders, previous cardiovascular risk factors, and drug usage was obtained using a structured interview. A physical examination was carried out by the primary care physician covering the following items: weight and height with calculation of body mass index (BMI); hypertension (treatment with antihypertensive medication or documented blood pressure ≥140 mmHg systolic or ≥90 mmHg diastolic, measured in a standardized fashion); diabetes mellitus (treatment with antidiabetic drugs or overnight fasting serum glucose ≥126 mg/dL); hyperlipidemia (treatment with lipid-lowering medication or total cholesterol ≥200 mg/dL or triglycerides ≥150 mg/dL); 6-Item Cognitive Impairment Test [20]; Barthel Index [21]; Rankin Scale [21]; medication; physical activity; history of stroke (neurological deficit that persisted longer than 24 hours, evaluated by a neurologist); history of ischemic heart disease (documented by previous myocardial infarction or angina pectoris, bypass surgery, or >50% angiographic stenosis of ≥1 major coronary artery); smoking status (never, former, or current); alcohol consumption; and living facility. The GDS was included in a self-administered patient questionnaire. All patients were monitored under best medical treatment conditions during the study period; the treatment by the primary care physicians followed the current national and international guidelines. All subjects gave written informed consent before entering the study. The study was approved by the local ethics committee of the Technische Universität München.

Depressive Symptoms, Geriatric Depression Scale

Depressive symptoms were assessed using the Geriatric Depression Scale (GDS), which was developed to estimate depression especially in the elderly [22]. In this questionnaire, patients are asked to respond with reference to how they felt over the previous week. We used the 15-item version [23]. The GDS was found to have a sensitivity of 92% and a specificity of 89% when evaluated against diagnostic criteria [24]. We used the cut-off ≥5 on the GDS-15, which has a reported sensitivity for the detection of a major depressive episode of >90% [25]. In addition, depressive symptoms were assumed if the general practitioner reported the prescription of any established antidepressant medication (selective serotonin reuptake inhibitors (SSRI), tricyclic antidepressants (TCA), monoamine oxidase (MAO) inhibitors, etc.).
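In analysis code, this composite exposure reduces to a single boolean per subject. The fragment below is a minimal illustration of that definition; the file and column names are assumed for the example, not the study's actual variables.

```python
import pandas as pd

# Hypothetical baseline table with one row per subject.
df = pd.read_csv("invade_baseline.csv")

# Depressive symptoms = GDS-15 score >= 5 OR any antidepressant prescription.
df["depressive_symptoms"] = (df["gds15"] >= 5) | df["antidepressant"].astype(bool)
print(df["depressive_symptoms"].mean())  # prevalence of the composite exposure
```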
Clinical End Point
The subjects enrolled in the study were monitored for all ischemic strokes according to the ICD classification (International Statistical Classification of Diseases and Related Health Problems, World Health Organization) through a linkage of the study database with claims data of the health insurance company. In the case of an ischemic stroke, the ICD code, recorded as a diagnosis by the hospital physicians, was registered in the AOK database. As we used insurance claims data in the form of ICD codes (I63.- for ischemic stroke) reported by the hospitals, information on clinical endpoints was completely available for every participant over the entire observation period, except for those who died or changed insurance (1.4%).
Statistics
All patients from the INVADE project with a completed baseline GDS score were included in this study. The dichotomized GDS score (GDS ≥5) and current prescription of antidepressant medication were investigated together as well as separately as predictors of stroke. For the combined analysis of GDS ≥5 and prescription of antidepressants, only subjects with complete data were included. The analysis was performed for two age groups (55 to 64 years and ≥65 years), following the analysis of the Framingham study [11], and additionally for men and women separately. Kaplan-Meier survival estimates were used to visualize time to stroke. Crude and multivariate Cox proportional hazards (PH) models were computed within each age and sex group. In the multivariate models, the associations between depressive symptoms and the clinical endpoints were adjusted for age and sex only, as well as for age, sex, BMI, smoking, hypertension, diabetes, hyperlipidemia, physical activity, previous myocardial infarction, previous TIA, previous stroke, and history of atrial fibrillation, following reports about established risk factors [1,26]. Survival curves were compared between subgroups using the log-rank test. An interaction analysis was conducted using a Cox PH regression model with the covariates age, sex, and age × sex. All statistical tests were two-sided with a significance level of 5%. All statistics were performed with IBM SPSS versions 19.0 and 20.0 with guidance by an independent statistician (V.K.).
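The interaction model is straightforward to express outside SPSS as well. The sketch below fits an analogous Cox PH model with an age × sex product term using the third-party lifelines library; the dataset, column names, and codings are illustrative assumptions, not the study's actual variables.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical analysis dataset; one row per subject.
df = pd.read_csv("invade_subjects.csv")
df["age_ge_65"] = (df["age"] >= 65).astype(int)
df["female"] = (df["sex"] == "F").astype(int)
df["age_x_sex"] = df["age_ge_65"] * df["female"]   # interaction term

cph = CoxPHFitter()
cph.fit(
    df[["time_to_stroke_years", "stroke", "age_ge_65", "female", "age_x_sex"]],
    duration_col="time_to_stroke_years",
    event_col="stroke",
)
cph.print_summary()  # hazard ratios with 95% CIs for age, sex, and age x sex
```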
Baseline Characteristics
Baseline characteristics of the study population in relation to age group and depressive symptoms are summarized in Table S1. The mean baseline GDS scores differed between age groups: 2.21 for subjects aged 55-64 years versus 2.61 for those ≥65 years. In the age group <65 years (n = 1667), 16.1% had a GDS score ≥5 or antidepressant intake, whereas in the age group ≥65 years (n = 2185), 20.7% had a GDS score ≥5 or antidepressant intake (p < 0.001), indicating an age-dependent increase in depressive symptoms. The use of antidepressants did not differ between groups.
In the study population (N = 3852), 156 ischemic strokes occurred. In the age group <65 years (n = 1667), 24 ischemic strokes were observed, whereas in the group ≥65 years (n = 2185), 132 strokes were recorded. Among 1587 men and 2264 women, the event rates were 4.5% in men (n = 72) and 3.7% in women (n = 84).
The multivariate Cox PH model with adjustment for age and sex is presented in Table 2, alongside the fully adjusted model presented in Table 3. The multivariate analysis with adjustment for age and sex only showed a significantly higher stroke risk in depressive patients below the age of 65 years (HR 2.59, 95% CI 1.07-6.28, P = 0.035; see Table 2), as did the multivariate analysis with adjustment for the established risk factors (HR 2.84, 95% CI 1.11-7.29, P = 0.030; see Table 3), but not in the group of participants ≥65 years.
The multivariate analysis for GDS ≥5 only showed an HR of 2.81 for <65 years (95% CI 1.11-7.11, P = 0.029) adjusted for sex and age (Table 2) and an HR of 3.23 (95% CI 1.19-8.81, P = 0.022) adjusted for the known risk factors (Table 3). No significant association was seen in the group of subjects ≥65 years, nor for the whole group in the multivariate analysis adjusted for the known risk factors (see Table 3).
Multivariate Analysis for Sex Subgroups (Tables 4 and 5)
The analyses for sex subgroups are presented in Tables 4 and 5. The multivariate Cox PH model with adjustment for age showed a significantly higher stroke risk in depressive subjects only for women (HR 1.75, 95% CI 1.11-2.75, P = 0.015; see Table 4), as did the multivariate analysis with adjustment for the established risk factors (HR 1.62, 95% CI 1.02-2.57, P = 0.043; see Table 5), but not in the group of men.
Survival Analysis
Kaplan-Meier estimates for ischemic stroke were conducted by age and sex subgroups (Figure 1). In the female group, there was a significant difference in cumulative stroke-free survival when comparing subjects with and without depressive symptoms (P < 0.001). This observation could not be confirmed in the male group (P = 0.733). In the age subgroups, there was a significant difference in the older group (age ≥65 years) between subjects with and without depressive symptoms (P = 0.025), and a similar trend was identified in the younger group (P = 0.078) (see Figure 1).
Interaction Analysis in Relation to Stroke
The Cox PH regression for the interaction of age × sex yielded an HR of 3.24 (95% CI 1.21-8.69, p = 0.020). The results for the main effects in the regression model were HR 0.11 (95% CI 0.05-0.25, p < 0.001) for age and HR 1.16 (95% CI 0.82-1.65, p = 0.394) for sex.
Study Findings and Design
Apart from the individual and socioeconomic burdens of depression itself [27], this study contributes further evidence that depression alters the risk of ischemic stroke in elderly subgroups. The relative risk of developing an ischemic stroke was significantly elevated, by nearly threefold, in participants with baseline depressive symptoms in the younger group <65 years (HR 2.81), generally confirming the results of the Framingham study, which showed a risk elevation only in younger patients [11]. In comparison with earlier studies on depression and stroke [2], our result in the group younger than 65 years in the multivariate analysis appears relatively high. A recent meta-analysis by Pan et al. [2] also reported an elevated risk in a stratified analysis for younger patients <65 years (HR 1.77). Of note, only two previous original studies on depression and stroke have reported results separately by age subgroup: the Framingham study investigated people younger than 65 years and those 65 years or older [11], and the Established Populations for Epidemiologic Studies of the Elderly (EPESE) examined an older population with subgroups of 65-74 and 75 years or older [28]. It is known from epidemiological studies that an increase in depressive symptoms appears above the age of 65 years in men and women [29]. Comparing our results with the meta-analysis of Pan et al. [2], the novel finding is that the multivariate analysis separated by sex also showed an increased risk only in women (HR 1.75, P = 0.015) with depressive symptoms but not in men. In a recent meta-analysis by Dong et al. [10], which for the most part investigated the same studies as Pan et al. [2], no such stratified analyses for sex had been performed. Considering these different results, the lack of risk elevation in depressed subjects ≥65 years in our multivariate analysis (Tables 2 and 3) might be caused by a weaker association in men. This could be partly explained by an overbalance of other risk factors in this subgroup, but might also be influenced by a dissimulation of depressive mood in older men. Our data support the approach of an analysis grouped by age and sex, as a significant interaction between age and sex was shown in our regression model. Besides, there are sex differences in late-life depression that are not restricted to sex-dependent characteristics and behavior but seem to be associated with mortality [30]. In conclusion, it seems entirely reasonable to analyze age and sex subgroups separately, particularly with regard to future therapy studies.
Pathomechanisms and Antidepressant Treatment
Plausible causal mechanisms for an increased cardiovascular risk in depression have been put forward in the literature, such as the promotion of a mild inflammatory process induced by depression [31], the triggering of hormonal dysregulation [32], and the resulting greater platelet activation [33], all of which can promote atherosclerosis [34]. It can be argued that depression leads to mild inflammatory responses [35]. Population-based studies have revealed that antidepressant use can also be associated with elevated CRP levels [36], possibly leading to systemic inflammation independently of the symptoms of mental illness. The exact role of antidepressant use is not yet clear, but some of these drugs are suspected to cause a mild increase in stroke risk, with recently reported adjusted HRs of 1.05 (95% CI 0.95-1.17) for tricyclic antidepressants, 1.21 (95% CI 1.11-1.32) for selective serotonin reuptake inhibitors, and 1.44 (95% CI 1.24-1.67) for other antidepressants [37]. It must be mentioned that the risk attributable to antidepressants cannot easily be separated from the risk attributable to depression itself. Furthermore, despite the results of our multivariate analysis, it may be argued that subjects with depressive symptoms had more vascular risk factors and altered risk behaviors, particularly physical inactivity, as shown for coronary heart disease [38]. From a clinical point of view, a possible slightly elevated risk of antidepressant use must of course be weighed against the impairment of quality of life and the established risks of cardiovascular disease and mortality associated with untreated depression.
Strengths and Limitations of the Study
One major advantage of our study is the completeness of our endpoint data due to the use of insurance claims data, which minimized the impact of dropouts. Furthermore, we used population-based data, which reduces the risk of selection bias. Another advantage is the application of a strictly defined clinical endpoint (ischemic stroke), in contrast to other studies included in the meta-analyses, which partly used intracerebral hemorrhage as well as TIA as outcome measures [2,10]. Given the possible variation in stroke risk across different subclasses of antidepressants, one limitation of our study is the lack of separate analyses for antidepressant subclasses. Furthermore, in our multivariate model both an elevated GDS score and the intake of antidepressants were considered indicators of depressive symptoms; thus, it might be difficult to distinguish between the stroke risk of depression itself and the risk caused by antidepressant intake, as mentioned. Moreover, the power of our statistical analysis might be diminished if some of the deceased had died from their first-ever ischemic stroke without being transferred to a hospital.
Conclusions
To date, there has been no extensive investigation of whether sufficient treatment of depression might influence and potentially improve not only clinical depressive symptoms but also the risk of stroke in depressive patients; this represents a possible target for future prospective studies. Our study suggests that differences exist in the depression-associated stroke risk in elderly subgroups according to age and sex. Our observations need to be confirmed by further studies. Evidence-based recommendations from randomized clinical trials are still needed to elucidate depression treatment in depressive patients at risk for stroke.
Supporting Information
Table S1 Baseline characteristics of participants by age groups 55-64 and ≥65 years and GDS <5 versus GDS ≥5 or antidepressants (AD). (DOC)
Quality of Life in Patients Under Investigation for Unclear Chest Pain; Before and After Coronary Angiography
Background: Patients with unclear chest pain experience more anxiety compared to those receiving a clear diagnosis, and they also report lower quality of life (QoL) than the general population. The aim was to investigate whether there were differences in QoL before coronary angiography compared to six months later. Methods: This was a quantitative study using the EQ-5D questionnaire. The study population consisted of patients (N=150) with unclear chest pain referred for elective coronary angiography. They were asked to complete a questionnaire the day before coronary angiography and six months later. Results: Significant improvements were seen regarding usual activities, pain/discomfort, and total health status at six months of follow-up compared with the day before coronary angiography. Conclusions: Patients with unclear chest pain seem to rate their total health status before coronary angiography worse than both the general population and myocardial infarction patients. Those with coronary artery disease (CAD) rated their total health status better than those with a final diagnosis of no CAD. However, six months later significant improvements were seen.
Introduction
Chest pain is a common reason for seeking medical evaluation. In Sweden, approximately 3000 men and women undergo coronary angiography each year due to unclear chest pain [1]. Diagnostic workup prior to coronary angiography often takes several months, and some patients are put on sick leave [2]. Patients with unclear chest pain are known to have higher levels of anxiety in comparison to those with a determined cause of chest pain [3]. It is also known that both anxiety and depression increase the risk of cardiovascular morbidity and mortality [4][5][6] while decreasing quality of life (QoL) [7,8].
Patients living with unclear chest pain [9], as well as myocardial infarction (MI) patients, report lower QoL than the general population [10]. However, for both MI patients [11] and those receiving percutaneous coronary intervention (PCI) QoL improves over time [12]. Whether there is a similar trend for patients with unclear chest pain is less well known. The purpose of this study was to compare QoL in patients with unclear chest pain before and six months after coronary angiography.
Methods

Design
This was a quantitative study with a descriptive and comparative design.
Study population
Patients with unclear stable chest pain referred for first-time elective coronary angiography were included (N=150). All patients were referred for this investigation due to a high suspicion of angina and a non-invasive test suggestive of CAD. The definition of CAD in our study was significant atheroma at coronary angiography. None of the patients had any previous diagnosis of heart disease (i.e., MI or angina). The six-month follow-up questionnaire covered symptoms, further diagnosis, number of follow-up visits at medical facilities, sick leave, rehospitalizations, medications, and the EuroQol 5 dimensions (EQ-5D) questionnaire [14].
Total health status (EQ-VAS)
At six-month follow-up, the study population had an improved median EQ-VAS compared to before coronary angiography (75 vs. 70, p=0.001). Women assessed their total health status at admission as significantly worse than men (p=0.05), but six months later the difference was no longer significant (p=0.67). No differences in EQ-VAS were found with respect to the typicality of symptoms (p=0.50). Patients with a final diagnosis of CAD rated their total health status at admission (EQ-VAS 82) better than patients with no CAD (EQ-VAS 66, p=0.03); however, at six-month follow-up no significant difference was observed (EQ-VAS 68 vs. 66, p=0.76) (Figure 1).
EQ-index
EQ-index comparisons at admission and at six-month follow-up are presented in Table 2. At admission, 79% of the patients had no problems with mobility, no patients had problems with self-care, 72% perceived no problems with usual activities, 22% had no pain/discomfort, and 65% perceived no anxiety/depression.
All patients with CAD at coronary angiography received guideline-recommended medical treatment at discharge (i.e., statins, platelet inhibitors, nitroglycerin, and blood-pressure-lowering treatment) unless contraindicated. Patients with significant stenosis were revascularized when indicated.
Quality of life questionnaire
QoL was assessed using the EQ-5D questionnaire. The questionnaire was developed by the EuroQol group [14] to measure health outcomes and includes five dimensions: mobility, self-care, usual activities, pain/discomfort, and anxiety/depression. For each of the five dimensions, there are three levels of answers: no problems, some problems, and extreme problems.
In relation to the dimension Anxiety/depression, patients were instructed to assess their level of anxiety with regard to regular daily life and not specifically relating to the coronary angiography procedure.
For assessing the patient's total health status, the questionnaire also includes a vertical scale (EQ-VAS), with the end-points best imaginable health state and worst imaginable health state. The patients marked the number on the scale that best agreed with their perceived overall health status.
Statistics
All statistical analyses were performed using the Statistical Package for the Social Sciences (SPSS) version 21. Results are presented as numbers and percentages and as median and mean values, e.g., for background characteristics. For tests of differences between groups, we used the Chi-square test for categorical variables (including the EQ-index variables) and the Mann-Whitney U test for the VAS scale. The Sign test was used to examine differences between baseline (before coronary angiography) and 6-month follow-up with regard to the EQ-index variables. No index tariff system was used for the EQ-index.
The Wilcoxon test was used for paired comparisons of the VAS scale between baseline and 6-month follow-up. A p-value of 0.05 or less was considered significant. There were no missing data in patient characteristics except for one patient regarding smoking status. Fourteen patients were lost to follow-up at six months.
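The paired EQ-VAS comparison can be reproduced with a Wilcoxon signed-rank test, as in the sketch below; the file and column names are assumed for the example, not the study's actual data layout.

```python
import pandas as pd
from scipy.stats import wilcoxon

# Hypothetical paired EQ-VAS scores (0-100), one row per patient who
# completed both assessments; column names are illustrative.
df = pd.read_csv("eq5d_scores.csv").dropna(subset=["vas_baseline", "vas_6mo"])

stat, p = wilcoxon(df["vas_baseline"], df["vas_6mo"])
print(f"Wilcoxon signed-rank: W = {stat:.1f}, p = {p:.3f}")
print(df[["vas_baseline", "vas_6mo"]].median())
# The paper reports medians of 70 vs. 75 with p = 0.001 for the full cohort.
```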
The study protocol conforms to the Declaration of Helsinki and was approved by the Regional Medical Ethics Committee in Uppsala, Sweden.
Background characteristics
In total, 150 patients with unclear chest pain were included in the study, of whom 66% were men and 34% women. The median age of the total population was 64 years (women = 63, men = 64). Fifty-seven percent of the patients had an education level of more than nine years; 8% were current smokers, 41% ex-smokers, and 49% had never smoked. Sixty-five percent of the patients had hypertension, 13% diabetes, 59% hyperlipidemia, and 11% had a previous diagnosis of depression. Baseline characteristics for the population according to the diagnosis of CAD (n=92) or no CAD (n=58) are given in Table 1.
Symptoms
Fifty-one percent of the patients had typical angina, 33% atypical angina, and 16% non-anginal chest pain. More than half of the patients (59%) had a symptom duration of more than six months.
No differences in the EQ-index were found with regard to gender, nor between CAD patients and no-CAD patients (Tables 3A and 3B).
At six month follow-up, significant improvements were noted for the dimensions usual activities (p=0.003) and pain/discomfort (p<0.001) in the total population.
Discussion
The main findings from our study were the improvement over six months in total health status, usual activities and pain/discomfort in the total population. These results may reflect the benefits of coronary revascularization.
The total health status at admission in patients who received a diagnosis of no CAD was worse than in those who received a CAD diagnosis. Patients experiencing chest pain with no CAD diagnosis may be anxious for a longer period of time, and these feelings of anxiety may negatively influence their daily life [15] and QoL [16], emphasizing the importance of patients receiving a clear diagnosis. The group of patients with no CAD is at risk of re-hospitalization due to the lack of a clear diagnosis to refer and relate to [16].
The majority of patients in our study population had a symptom duration of more than six months. This finding highlights the need for more expeditious medical evaluation of these patients.
Total health status (EQ-VAS)
When comparing the total health status of our study population to that of the general population [17] or patients with confirmed CAD [18], VAS was lower. This result concurs with previous investigations showing that both physical and mental health were lower in a group of patients waiting for coronary angiography [19]. However, at six-month follow-up, VAS ratings were improved, although still lower than those perceived by the general population and CAD patients.
In patients with unclear chest pain, the low EQ-VAS at admission might reflect anxiety regarding the underlying cause of the chest pain symptoms. However, patient education provided by a specialist nurse has been shown to decrease anxiety during the waiting time before coronary angiography [20].
EQ-index
The majority of the patients reported few problems with mobility, self-care, usual activities, or anxiety/depression at admission and six months later. This may be explained by the relatively low age of our population and the exclusion criteria (i.e., no previous heart disease).
Anxiety/depression was one of the dimensions with the least improvement at six-month follow-up. At inclusion, 11% of patients reported previous bouts of depression. Although this factor may partly explain our finding, previous studies support the observation that patients with unclear chest pain have more symptoms of depression and anxiety compared to the general population. Although CAD has been excluded, anxiety may persist in these patients due to a continued lack of knowledge as to the actual cause of their symptoms.
Method discussion
The EQ-5D questionnaire is rapid and easily understood, and has been validated and found to be reliable for various diseases. However, it is not disease-specific and is not sensitive enough to measure the five dimensions in detail.
Although the questionnaire was completed prior to coronary angiography, we are not able to assess the extent to which patients' possible anxiety regarding the investigation might have affected their level of perceived anxiety/depression. We must also keep in mind that some patients may not have had problems on the day they completed the questionnaire but may usually have problems in their daily life. The EQ-index and EQ-VAS are not completely comparable, since the EQ-index was developed to reflect health status based on the general population, whereas the EQ-VAS is a self-assessment of the subject's individual state of health. This might explain some differences in the findings between the EQ-index and EQ-VAS. Another limitation of the study was the lack of data on patients not included in the study.
Conclusions
Patients with unclear chest pain seem to rate their total health status as worse than both the general population and patients with myocardial infarction (MI) before coronary angiography. However, a significant improvement was seen over the six months of follow-up. Patients diagnosed with no CAD rated their overall health status as worse than did patients diagnosed with CAD. Treatment for CAD was a factor improving QoL in these patients.
Usual activities and pain/discomfort were significantly improved during the follow-up period, whereas no significant improvement was seen for anxiety/depression. Larger studies are needed to confirm these findings as well as the health-economic consequences associated with worse quality of life in this large patient group.
Implications for practice
In patients with chest pain suggestive of CAD, QoL is affected, especially in patients in whom CAD is excluded at coronary angiography (patients with significant stenosis were revascularized when indicated). This group of patients needs further attention after discharge, perhaps by way of counselling and medical follow-up.
To assess patients' QoL status, a short questionnaire containing QoL questions would be helpful at the time of referral.
Patients with unclear chest pain need a more rapid path to coronary angiography. To shorten the time to coronary angiography, it is important to take a careful history (anamnesis) to estimate the severity of symptoms. Both doctors and nurses must provide clear information to patients and relatives about the origin of symptoms, how to act when symptoms occur, and the diagnosis. This would help patients seek medical attention earlier, but the medical care system must also act faster.
RNF14 is a regulator of mitochondrial and immune function in muscle
Background Muscle development and remodelling, mitochondrial physiology and inflammation are thought to be inter-related and to have implications for metabolism in both health and disease. However, our understanding of their molecular control is incomplete. Results In this study we have confirmed that the ring finger 14 protein (RNF14), a poorly understood transcriptional regulator, influences the expression of both mitochondrial and immune-related genes. The prediction was based on a combination of network connectivity and differential connectivity in cattle (a non-model organism) and mouse data sets, with a focus on skeletal muscle. These analyses assigned similar probability to mammalian RNF14 playing a regulatory role in mitochondrial and immune gene expression. To try and resolve this apparent ambiguity we performed a genome-wide microarray expression analysis on mouse C2C12 myoblasts transiently transfected with two Rnf14 transcript variants that encode two naturally occurring but different RNF14 protein isoforms. The effect of both constructs was significantly different to the control samples (untransfected cells and cells transfected with an empty vector). Cluster analyses revealed that transfection with the two Rnf14 constructs yielded expression signatures discrete from each other, but in both cases a substantial set of genes annotated as encoding proteins related to immune function were perturbed. These included cytokines and interferon regulatory factors. Additionally, transfection of the longer transcript variant 1 coordinately increased the expression of 12 (of the total 13) mitochondrial proteins encoded by the mitochondrial genome, 3 of which were significant in isolated pair-wise comparisons (Mt-coxII, Mt-nd2 and Mt-nd4l). This apparent additional mitochondrial function may be attributable to the RWD protein domain that is present only in the longer RNF14 isoform. Conclusions RNF14 influences the expression of both mitochondrial and immune-related genes in a skeletal muscle context, and has likely implications for the inter-relationship between bioenergetic status and inflammation.
Background
We are interested in understanding the regulation of muscle metabolism and its inter-relationships with development, exercise and ageing. A particular focus is the regulation of mitochondrial content which is reported to impact on metabolic syndrome in humans [1] and rats [2], feed efficiency in livestock [3] and mammalian ageing [4]. Mitochondrial content also contributes to athletic performance [5] and post-mortem meat quality [6] through the connection to muscle fibre type. The physiological connection between inflammation and muscle biology in the context of training, muscle remodelling and ageing is also of interest. For example, during exercise, muscle is routinely subject to various stressors, such as mechanical damage, hypoxia and pH decline that set in motion a proinflammatory cascade [7][8][9] which has implications for tissue remodelling. Previously, using a network science approach in cattle [10] we identified the product of the ring finger protein (RNF14) gene (aliases ARA54, TRIAD2), an incompletely characterised transcriptional regulator, as a factor that might influence not only mitochondrial transcription and function but also immune function. In this study we aimed to explore and validate the accuracy of this reverse-engineering under the tightly controlled experimental conditions afforded by in vitro cell culture.
A major foundation of our predicted function of the RNF14 protein was a bovine co-expression network [10]. Various metabolic and developmental processes were prioritised for further scrutiny on the basis of forming cohesive co-expression network gene sets or 'modules'. To build the network, bovine muscle sampled at different times during pre- and post-natal development, between genetically divergent breeds and following nutritional intervention, was subjected to microarray analysis. By hunting in the module of interest for transcriptional regulators (DNA binding transcription factors and co-factors), or asking the related question "which transcriptional regulator has the highest absolute, average correlation to all the genes in the module?", we generated a ranked list of regulators predicted to control the processes in question.
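As an illustration, the module-to-regulator ranking just described can be sketched in a few lines; this is a minimal sketch assuming a simple gene-to-samples expression mapping, and the function name, gene set and data are invented, not the authors' pipeline:

```python
# Sketch of the module-to-regulator ranking described above: score each
# candidate regulator by its average absolute correlation with all genes
# in a co-expression module. Illustrative only, not the authors' code.
import numpy as np

def rank_regulators(expr, module_genes, regulator_genes):
    """expr: dict mapping gene name -> 1-D array of expression values
    (one value per sample, in the same order for every gene)."""
    scores = {}
    for reg in regulator_genes:
        cors = [abs(np.corrcoef(expr[reg], expr[g])[0, 1])
                for g in module_genes if g != reg]
        scores[reg] = float(np.mean(cors))
    # Highest average absolute correlation first
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy example with random data
rng = np.random.default_rng(0)
expr = {g: rng.normal(size=20) for g in ["MDH1", "SDHA", "CYCS", "RNF14", "E2F1"]}
print(rank_regulators(expr, ["MDH1", "SDHA", "CYCS"], ["RNF14", "E2F1"]))
```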
These approaches correctly identified groups of genes already known to play a role in mammalian skeletal muscle biology, including master regulators of the cell cycle (E2F1), fast twitch muscle development (SIX1) and mitochondrial biogenesis (ESRRA) [10]. Several regulatory molecules of unassigned or poorly documented function were also implicated in some of these processes, and they became candidates for future gene function validation efforts. While most genes are clearly defined within the network, one prominent gene, Ring Zinc Finger 14 (RNF14), a transcriptional co-activator known to bind the androgen receptor [11], gave apparently ambiguous results. Namely, it was assigned likely roles in two different processes by our analysis: immune function and mitochondrial function [10]. Given that the majority of connections in the co-expression network are positive, the prediction is that an increase in the activity of the 'regulator' will increase the expression of the 'target' genes.
A separate analysis on a different data set used a differential network strategy called Regulatory Impact Factor analysis (RIF) [12] to contrast mitochondrial-rich brown fat versus mitochondrial-poor white fat in mice. This analysis independently assigned high likelihood to a causal role for Rnf14 in driving the phenotype differences between these cell types, also suggestive of a role in mitochondrial function and content [13]. Little is known of the function of the RNF14 protein, other than it is broadly expressed across tissues [14] and is a transcriptional co-activator that interacts with the androgen receptor transcription factor in pathways relating to sex steroid signalling. From a structural perspective, there are six RNF14 transcript variants in humans and three in mouse, in both cases producing two different protein isoforms.
The objective of this study was to characterise the regulatory role of two RNF14 isoforms in mouse muscle.
We achieved this via experimental upregulation followed by functional analysis of the subsequent genome-wide transcriptional readout. We performed a transient transfection of transcript variants encoding the two different isoforms in mouse C2C12 cells. The resultant gene expression perturbations in a number of chemokines, interferon regulatory factors and related interferon signalling molecules support a role for Rnf14 in skeletal muscle-mediated immune and inflammatory function. Additionally, the longer transcript variant 1, which encodes a protein isoform containing an RWD domain, yielded a coordinate upregulation trend of all mitochondrially-encoded mitochondrial proteins present on the array platform (12 of the total 13), reinforcing the proposed link to the mitochondrion.
Expression constructs
PCR resulted in two differently sized Rnf14 amplicons from mouse muscle cDNA. These amplicons were individually cloned and sequenced. In both cases the sequences exhibited >99% sequence identity to the GenBank Rnf14 sequence. The longer sequence was 1457 bp and BLASTN aligned this sequence to the Mus musculus Rnf14 transcript variant 1. The shorter sequence we amplified was 1306 bp, and this was aligned by BLASTN to the Mus musculus Rnf14 transcript variant 3 (summarised in Figure 1).
Rnf14 transcript variant 1 contains an ORF capable of producing E3 ubiquitin protein ligase RNF14 protein isoform A, and Rnf14 transcript variant 3 contains the ORF for E3 ubiquitin protein ligase RNF14 protein isoform B. The two isoforms are identical at the C-terminal end of the protein, while the longer isoform has an RWD domain at the N-terminal end that is not present on the shorter isoform. Variant 2 does not encode a protein.
Microarray expression measurements
The array platform used to interrogate the C2C12 response to transfection measures genome-wide transcriptional changes using 18,129 probes. This platform contains three probes predicted to bind Rnf14: ILMN_2675078, ILMN_2682811 and ILMN_2868579. Their proposed binding sites are illustrated on Figure 1. Probe ILMN_2675078 was elevated ~3.4-fold in the variant 3 transfected cells and 1.1-fold in the variant 1 transfected cells. This 3.4-fold change made Rnf14 the 9th most differentially expressed (DE) gene found in cells transfected with variant 3 out of the 18,129 probes with detectable signals in at least one treatment. Correcting for an overall transfection efficiency of ~10% implies that an individual transfected cell showed an increase in expression of 34-fold and 11-fold, respectively.
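The per-cell correction above amounts to dividing the bulk fold-change by the transfected fraction; a minimal sketch of that arithmetic follows, with a mixture-based alternative noted in a comment as a caveat (the alternative is our observation, not something the text uses):

```python
# Per-cell fold-change estimate implied by the text: divide the bulk
# fold-change by the fraction of cells actually transfected (~10%).
def per_cell_fold_change(bulk_fc, efficiency=0.10):
    return bulk_fc / efficiency

print(per_cell_fold_change(3.4))  # ~34-fold (variant 3)
print(per_cell_fold_change(1.1))  # ~11-fold (variant 1)
# Caveat: a mixture model (bulk_fc = f*fc + (1-f)*1) would instead give
# (bulk_fc - (1 - f)) / f, i.e. a smaller per-cell estimate.
```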
The other two Rnf14 probes did not report a change in expression of Rnf14 following transfection, including ILMN_2682811, which is predicted to bind variants 1 and 2 but not transcript variant 3 (Table 1). It is not clear whether these two probes are reporting correctly. To unravel these observations, and to further document the technical implementation of the Rnf14 transfections, we performed qRT-PCR on the RNA prepared from the transfected C2C12 cultures using primers designed to detect the transfection-construct-produced mRNA and the endogenous forms of Rnf14 mRNA.
qRT-PCR expression measurements
To compare and contrast the Rnf14 transcript variants in the different treatment groups, we designed a set of discriminatory qRT-PCR primers (Table 2). The ANOVA model for the analysis of Ct values from the qPCR experiments accounted for 93.03% of the total variation. The two main effects (primer and treatment) and their interaction were all highly significant (P < 0.0001).
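A minimal sketch of a two-way ANOVA of this kind on Ct values, assuming a pandas DataFrame with hypothetical columns ct, primer and treatment (the toy numbers are invented, not the study's data):

```python
# Two-way ANOVA on qRT-PCR Ct values: main effects of primer and
# treatment plus their interaction, as described in the text.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "ct":        [22.1, 22.4, 30.2, 29.8, 21.9, 25.0, 30.1, 26.3],
    "primer":    ["all", "all", "v3", "v3", "all", "all", "v3", "v3"],
    "treatment": ["ctrl", "ctrl", "ctrl", "ctrl", "v3", "v3", "v3", "v3"],
})
model = smf.ols("ct ~ C(primer) * C(treatment)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # F-tests: both effects + interaction
print(f"R-squared: {model.rsquared:.4f}")  # cf. the 93.03% reported above
```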
The normalised expression results are summarised in Table 3. All primer sets yielded unique dissociation curves indicating the presence of Rnf14 transcripts in the anticipated samples. The no template controls yielded no product in all cases. The transfection of Rnf14 variant 3 was clearly evident in the variant 3 transfected cells but not the other C2C12 cultures (P < 0.0001). The transcriptional output of this expression construct was also clearly detectable in the primer set that amplifies all Rnf14 mRNA species (i.e. endogenous and construct-based) yielding a 30-fold increase (P < 0.0001). The disparity between this result and the array fold-change is presumably attributable to the enhanced sensitivity of qRT-PCR.
Similarly, the transfection of the Rnf14 variant 1 was also clearly detectable in the variant 1 transfected cells but not in the other cell cultures (P < 0.0001). Expression from this construct was not as effective as for variant 3 and was not reflected in increased amplification by the all-Rnf14-species (endogenous plus construct-based) primer set, presumably because it forms a much more modest proportion of all the Rnf14 mRNA species in the system. There was very little variability in native Rnf14 expression (i.e. all three endogenous variants summed) across the three treatment groups. Overall, these findings imply that the global transcriptional readout in the transfected C2C12 cells described by the microarray platform can be attributed to the impact of the transfection of the Rnf14 variants.
Functional enrichment analyses
With regard to the microarray analysis, a mixed-model normalisation procedure was applied to the raw intensity values, as previously described [15]. Clustering on columns discriminates the treatments based on global considerations of the gene expression patterns. The visualisation of the clustering analysis indicated that two of the six control samples were outliers and hence were removed for the subsequent DE analysis. Overall, the global gene expression patterns in the two control groups (no transfection and empty construct transfection) could not be discriminated from each other. Consequently, the gene expression values for the two control groups were combined to form a single control for the purposes of computing DE between controls and treatments. The Rnf14 variant 3 and variant 1 transfected cells both clustered separately from the controls (more so) and from each other (less so).
[Table note: assuming a transfection efficiency of ~10%, the fold-changes for the transfected cells are ~33-fold and 12-fold on a per-cell basis (based on ILMN_2675078); the location of the probes in the different Rnf14 transcripts is given in Figure 1.]
We computed a list of DE genes (Tables 4 and 5) as previously described [16], performing the analysis at the probe level. To explore the genome-wide expression output for functional enrichment we determined statistically significant DE for each treatment contrast. We then submitted the DE lists versus a background list of all genes present on the array to the GOrilla webtool [17], which uses hypergeometric statistics to determine functional enrichments.
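For illustration, the hypergeometric enrichment statistic of the kind GOrilla applies can be sketched as follows; the counts below are invented, not the paper's:

```python
# Hypergeometric enrichment test: given N background genes, K annotated
# to a GO term, and n DE genes of which k carry the term, the enrichment
# P-value is the upper tail of the hypergeometric distribution.
from scipy.stats import hypergeom

def enrichment_p(N, K, n, k):
    # P(X >= k) for X ~ Hypergeom(population N, K successes, n draws)
    return hypergeom.sf(k - 1, N, K, n)

# Toy numbers: 18129 background probes, 300 with the term,
# 500 DE probes, 30 of them annotated with the term.
print(enrichment_p(18129, 300, 500, 30))
```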
With regard to upregulation following over-expression of the shorter variant 3, "response to biotic stimulus", "chemokine activity" and "extracellular region" gave P-values of 8.44E-12, 2.93E-7 and 7.4E-14 for the Process, Function and Component ontological levels, respectively.
With regard to upregulation following over-expression of the longer variant 1, the top functional enrichments were "immune response", "chemokine activity" and "extracellular region", with hypergeometric P-values of 1.98E-9, 3.83E-6 and 2.06E-10.
Various extracellular region components were clearly among the most downregulated genes following transfection with both variants (Table 5). The enrichment was highly significant in both cases, but more significant in the transcript 3 (P = 3.69E-19) than in the transcript 1 transfected cells (P = 1.32E-14).
Contrasting the effects of the variant 1 transfection with those of the variant 3 transfection yielded similar functional enrichments of "response to biotic stimulus" (3.94E-8), "signal transducer activity" (2.15E-4) and "extracellular region" (1.77E-5). The identity of the perturbed genes is further illustrated in Figure 2 and tabulated in Tables 4 and 5. The normalised mean expression results for the entire data set are in Additional file 1. Transfection with the empty construct could not be discriminated from untransfected cells, ruling out the possibility that these responses are an experimental artefact relating to the presence of the construct. The overall spread of the perturbed transcripts is greater (4-fold DE in both directions) in the variant 3 transfected cells, which may reflect the greater abundance of the variant 3 based construct.
We also computed a modified DE metric called Phenotypic Impact Factors (PIF) [12,13], a product of the average abundance of the gene and its DE. We have previously found that this accounts for the increased noise of the rarer transcripts and increases the sensitivity for detecting DE of the more abundant transcripts [12]. Figure 3 illustrates those genes either DE or awarded a high PIF score in at least one of the two transfections. Immune and mitochondrial genes are highlighted based on functional annotations performed by importing the list into the DAVID web tool [18]. Immune genes and nuclear-encoded mitochondrial genes are prominent among the DE genes but the direction of change is not consistent.
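As a rough illustration of the PIF idea as described here (average abundance multiplied by differential expression; the published formula [12,13] contains further normalisation detail not reproduced in this sketch):

```python
# Phenotypic Impact Factor, simplified per the description above: the
# product of a gene's average abundance and its differential expression.
import numpy as np

def pif(mean_treatment, mean_control):
    abundance = (mean_treatment + mean_control) / 2.0
    de = mean_treatment - mean_control   # on the log2 scale
    return abundance * de

# Abundant genes with modest DE can outscore rare genes with large DE:
print(pif(np.array([12.0, 4.0]), np.array([11.5, 2.0])))
```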
On the other hand, among the significant PIF transcripts in the cells transfected with the long transcript variant are three of the 13 mitochondrially-encoded mitochondrial proteins (Mt-coxII, Mt-nd2 and Mt-nd4l).
Moreover, a deeper exploration shows that all the mitochondrially-encoded genes represented on the array (12 of the 13) display a coherent trend of upregulation in the variant 1 transfected cells based on at least one probe (Figure 3; Table 6; Additional file 2). The standard error bars for Figure 3 were calculated using the standard curve method [19]. A subset of these mitochondrially-encoded genes that are significantly DE in pairwise comparisons are also highlighted in Figure 2. The upregulation of mitochondrially-encoded mitochondrial proteins was not observed in the Rnf14 variant 3 transfected cells (Additional file 2). The full list of DE and differentially PIF genes can be found in Additional file 3.
[Table note: transcription from the Actb gene was used as a control; a schematic of the known mouse Rnf14 mRNA species is presented in Figure 1; the standard error is 0.447 in all cases because a system-wide normalisation was performed; 'no amplification' denotes multiple dissociation peaks, i.e. non-specific amplification.]
Motif analysis and bioinformatics
We next attempted to identify regulatory motifs that are conserved among the RNF14-responsive genes, in an attempt to determine the cellular pathway linking Rnf14 upregulation to the observed mitochondrial and inflammatory output. Hunting for conserved transcription factor binding sites 1000 bp upstream of the DE genes using Whole Genome rVista for mouse enriched for Hfh3 in both the variant 1 (−log10 P = 9.29) and variant 3 transfection experiments (−log10 P = 6.57). We also undertook a combination of bioinformatic analyses and literature mining. A protein motif analysis of RNF14 within UniProtKB indicated the presence of 1) an N-terminal destruction box, which could act as a recognition signal for ubiquitin-proteasome degradation, and 2) a RING-type zinc finger essential for interaction with UBE2E2.
Discussion
In an attempt to infer the functional role of the RNF14 protein, previously found to be co-expressed with mitochondrial and immune genes in developing bovine longissimus muscle, we transfected each of two Rnf14 transcript variants into a mouse myoblast cell line. Interestingly, and despite low expression levels of that transcript, transfection with transcript variant 1 culminated in a significant upregulation in the transcription of three of the 13 mitochondrially-encoded mitochondrial genes (i.e. Mt-nd4l, Mt-coxII and Mt-nd2) [12]. Furthermore, while we only have microarray expression data for 12 of the 13 of these genes (Mt-coxI is missing), a deeper exploration shows that the remaining nine all display a modest but coherent trend of upregulation. The direction of this observation is consistent with the initial network connections being based on positive rather than negative co-expression values. While the fold-change is only 1.1- to 1.4-fold, all these transcripts are very abundantly expressed, which provides a favourable signal-to-noise ratio for reliable detection. The array does not report on the 22 mitochondrially-encoded tRNAs and two ribosomal RNAs that make up the remaining transcriptional output of the mammalian mitochondrion, which encodes 37 different genes in total. The expression of a number of nuclear-encoded mitochondrial proteins was also upregulated following transfection of Rnf14 variant 1 (Figure 2). For example, Cmpk2 (alias Tyki) (2.4-fold upregulated) is a nucleoside monophosphate kinase that localises to the mitochondria and has previously been found to be tightly correlated with macrophage activation and inflammation [20]. A very recent publication has documented Rnf14 as a positive regulator of canonical Wnt signalling in human cells [21], with canonical Wnt signalling previously reported to be a potent activator of mitochondrial biogenesis [22]. This recent body of work clearly complements our findings linking Rnf14 with mitochondrial physiology.
[Table note: genes may be represented by more than one probe; the controls are a combination of untransfected cells and cells transfected with an empty construct.]
While the broad transcriptional impact of Rnf14 variant 1 transfection on our samples was in line with our functional prediction, there were some interesting deviations. Firstly, the nuclear- and mitochondrially-encoded mitochondrial proteins occupy distinct parts of the original bovine muscle co-expression network [10]. While Rnf14 sits in the nuclear-encoded portion of the in vivo bovine network, transfection with variant 1 appears to exert the most coherent transcriptional influence on the mitochondrially-encoded mitochondrial proteins. By way of contrast, Rnf14 variant 3 transfection did not lead to a detectable change in the expression of mitochondrially-encoded mitochondrial genes, despite an overabundance of the variant 3 transcript in the variant 3 transfected cells.
Both Rnf14 variants influenced the expression of genes encoding proteins related to immune function. Unlike the upregulation of mitochondrially-encoded mitochondrial proteins observed in Rnf14 variant 1 transfected cells, genes belonging to inflammatory processes were both up- and downregulated. Prominent among the perturbed immune genes were chemokines (e.g. Ccl2, Ccl4, Ccl5, Ccl7, Cxcl10, Cxcl12), interferon regulatory factors and related interferon-responsive and signalling genes (e.g. Irf1, Irf7, Irf9, Isg15, Isg20, Ifit2, Ifit3, Psmb8, Usp18, Adar, Gbp2). In humans following eccentric exercise, the in vivo inflammatory response includes activation of chemokines [23]. A number of these genes also imply apoptosis, a mitochondrial phenomenon [24]. These data go some way towards resolving the question posed by our apparently ambiguous (i.e. strong co-expression to both mitochondrial and immune genes) observations from the bovine muscle co-expression network, and imply that both the mitochondrial and immune predictions are supported, depending on the particular transcript variant under consideration. Both RNF14 motifs (the N-terminal destruction box and the RING-type zinc finger) indicate some involvement in ubiquitin-mediated proteolysis, which ties in with apoptosis, and UBE2E2 is known to play a specific role in adaptive immunity signalling. The motif analysis shows that most of the large transcription factor motifs (zinc fingers and RNA binding domains) of the protein reside in the C-terminus shared by both isoforms, while the missing amino acids in the shorter isoform result in the loss of an RWD domain. We hypothesise that the RWD domain accounts for the mitochondrial response observed after transfection with Rnf14 transcript variant 1. Recent work has emphasised deep functional connections between mitochondria and innate immunity in general [25,26], and mitochondria and antiviral processing in particular [27], which is clearly of interest given the very same dual roles outlined here for RNF14.
The downregulation of a set of extracellular region and extracellular matrix transcripts following transfection with both Rnf14 transcript variants was unexpected. Examples of downregulated molecules common to both transfections included multiple collagen isoforms (Col14a1, Col6a2, Col6a1, Col8a2, Col16a1), other matrix structural components (Dcn, Mglap) and matrix remodelers (Mmp2, Adamts2). We ascribe these observations to one of two phenomena. On the one hand, they may reinforce the transmission of the immune signals we have observed, as it has been documented that the extracellular matrix plays a crucial role in the inflammatory process [28]. Alternatively, the signal may correspond to differences in myocyte progression through proliferation and differentiation, the transition through which is known to be accompanied by various changes in matrix-mediated adhesion [29].
Interpreting the various lines of evidence linking RNF14 protein to immune and mitochondrial functions is complicated by the cross-species sources of data. The original co-expression prediction of RNF14's gene function was made mainly from bovine expression data. Disentangling the various pieces of information is challenging given cattle are a non-model organism and we have incomplete knowledge of bovine functional genomics. For example, it is not clear how many bovine RNF14 transcript variants exist in total, which clearly complicates our (co-expression) interpretation of the Agilent probe that provided the original foundation for some of the predictions.
Nevertheless, the outcome of this validation experiment supports three of our systems biology gene discovery approaches: 1) Partial Correlation and Information Theory (PCIT) [30], 2) Module-to-Regulator analysis [10] and 3) Regulatory Impact Factors [12,13]. While there is some overlap in the exact molecules present in the co-expression modules and those perturbed in this subsequent transfection experiment (Psmb8 and Irf1 being common to both in an immune context), the overlap is very patchy. This implies that predictions based on considerations of co-expression or differential co-expression are perhaps best made in terms of broad function rather than specific molecules.
Conclusions
Rnf14, which encodes a transcriptional co-factor, acts via two differentially spliced transcripts to modulate innate immune responses, tissue energy homeostasis and the tissue matrix. We do not know under what stimuli either transcript is produced by cells of different lineages. We have shown that gene expression information generated in a non-model organism can be used to develop hypotheses that can be validated in more conventional systems. The validation of this approach enhances the intrinsic value of many existing datasets generated in a range of species as it provides a methodology for detailed analysis of fundamental biological phenomena, such as mitochondrial transcription and biogenesis. Future work could aim to tease out whether one or more of the RNF14 protein isoforms are mitochondrially colocalised in addition to the mRNA transcripts being mitochondrially co-expressed, and/or whether in vivo evidence can be detected for a role in immune function and mitochondrial transcription or biogenesis to supplement the in vitro data we have presented.
Prioritisation of RNF14 for experimental analysis
Three separate lines of evidence were used to make a prediction of RNF14's possible role in immune versus mitochondrial function. Firstly, a co-expression network on developing bovine muscle samples, inferred using the PCIT algorithm, connected RNF14 to a dense nuclear-encoded mitochondrial module (comprising the following genes, with correlations >0.85: MDH1, PDHA1, NDUFS2, ES1, UQCRC1, NDUFV1, MDH2, GOT2, NDUFA9, SOD2, CYCS, NDUFV2, SDHA, BRP44, ACO2, APOO) [10]. The position of RNF14 in the mitochondrial module of the co-expression network made a prediction about a role in mitochondrial function.
Secondly, the module-to-regulator analysis described above ranked RNF14 highly among the transcriptional regulators predicted to control this module [10].
Thirdly, application of the Regulatory Impact Factor (RIF) algorithm to an entirely independent data set (comparing high-mitochondrial content brown fat versus low-mitochondrial content white fat) awarded high likelihood of phenotype causality to the RNF14 protein [13].
Amplification, cloning and transformation of two transcript variants of RNF14
The mouse ortholog for RNF14 (GenBank accession NM_020012.1) was identified through sequence alignment to the bovine RNF14 protein. Mouse muscle cDNA was used as template for a PCR designed to amplify the full length mRNA encoding the Rnf14 gene. To increase recognition of the first ATG by the ribosome, the forward primer incorporated the mammalian Kozak sequence (CCATGG) prior to the start of the Open Reading Frame. PCR primers also incorporated vector sequence, restriction enzyme sites used for cloning (NcoI at 5' and XhoI 3') and gene specific sequence. The forward primer for PCR was 5'-GAGATATGCCACCATGGAGT CGGCAGAAGACCTGGAAGCCCAG-3' and the reverse primer 5'-GTGATGGTGGTGCTCGAGGTCGTCGTCG TCGTCTTCATCATCGT-3'.
A conventional PCR with a 50°C annealing temperature using high-fidelity Taq polymerase amplified a band of 1457 base pairs. This was gel purified and cloned into the pTANDEM-1 vector (EMD4 Biosciences, Merck, USA) using a fusion homologous recombination method. The resultant expression plasmid contained a His6 tag at the C-terminus of the RNF14 gene for protein purification and detection by antibodies.
The two variants are referred to as Rnf14 transcript variant 1 and Rnf14 transcript variant 3. Following transformation into TOP10 chemically competent E. coli, clones were picked and diagnosed for the presence of the insert in the correct orientation using restriction digests (XhoI and NcoI) followed by gel electrophoresis. Clones predicted to contain the correct insert were sequenced from both ends of the expression construct using the BGH, T7 and pTandem1 Tandem DOWN1 primers and aligned to the GenBank mouse Rnf14 sequence. Expression constructs containing the correct sequence were midi-prepped (Qiagen) following the manufacturer's instructions and stored at 1 μg/μl at −20°C in preparation for transfection. The Qiagen midi-prepping protocol removes LPS and provides transfection-ready reagents.
A control construct containing no insert (hereon referred to as "empty") was also produced and subsequently used for transfection. A separate control was derived from untransfected cells.
Cell culture and transfection conditions
The murine myoblast cell line C2C12 was obtained from the American Type Culture Collection and cultured in DMEM supplemented with penicillin/streptomycin and 10% fetal bovine serum. Transfection conditions for the C2C12 cells (passage numbers 15-20) were explored across a range of confluences and Lipofectamine 2000 (Invitrogen) transfection reagent:DNA ratios. A pEGFP.C3 vector containing Green Fluorescent Protein (GFP) was used to visualise transfection efficiency under a Zeiss AX10 fluorescence microscope fitted with an Axiocam camera. As previously reviewed [31], we found that C2C12 cells transfect with a low efficiency of ~10%. The optimal cell number was found to be 5 × 10⁴ cells/well, transfected with 5 μl of Lipofectamine 2000 and 800 ng DNA per well for 48 hours.
The experiment was run in two 24-well Geltrex-coated plates using the optimised procedure. Cells were cultured for 24 hours prior to transfection. At the time of transfection the cells were ~50% confluent. Two independent controls were used, one with no transfection and the other through transfection of the empty construct.
Twelve independent wells were run for each of the four treatment groups (no transfection, empty construct, Rnf14 short transcript variant and Rnf14 long transcript variant) giving a 48 well cell culture experiment in total. At 48 hours post transfection the cells from each well were harvested for RNA, while a GFP transfection run in parallel was fixed and stained to provide an estimate of transfection efficiency.
RNA extraction and microarray hybridisation
Total RNA was independently extracted from each of the 48 cell culture wells using QIAshredder homogenates and Qiagen RNeasy columns (Qiagen), incorporating the on-column DNase treatment (Ambion) to remove genomic DNA contamination. RNA integrity and purity were assayed visually by gel electrophoresis and additionally by A260/280 spectrophotometry. All the RNA samples resolved into discrete 16S and 26S ribosomal RNA bands.
Because each of the four treatment groups comprised 12 independent RNA samples, these 12 were pooled into 3 groups of 4 replicates, with each replicate contributing 0.5 μg. This process yielded 12 pooled RNA samples of 2.0 μg each, as follows: no transfection 1, no transfection 2, no transfection 3, empty construct 1, empty construct 2, empty construct 3, short transcript 1, short transcript 2, short transcript 3, long transcript 1, long transcript 2, long transcript 3.
The 12 RNA pools were submitted to the Gene Expression Centre at the University of Queensland's Institute for Molecular Bioscience for cDNA synthesis and hybridisation to the Illumina WG6 mouse microarray platform. This facility performed an additional set of quality checks based on possession of an RNA Integrity Number (RIN) >8 assayed on a Bioanalyzer.
Statistical analysis
The analytical procedures to normalize the data and to identify differentially expressed genes were based on methods developed by our group and published elsewhere. In particular, we followed the methodology described in [15,32,33].
The raw data from the Illumina Microarray System contained expression signals from 45,281 probes across the 12 hybridizations.
Initial normalization
We log2-transformed the expression signals to stabilize the variance, retaining only those probes with a "detectable" signal (P < 0.01) in at least one of the 12 hybridizations. This produced a "cleaned" dataset of 18,129 Illumina probes.
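A minimal sketch of this filtering step, assuming probe-by-chip arrays of signals and detection P-values (the data are synthetic, not the study's):

```python
# Initial normalisation: log2-transform signals and keep probes detected
# (detection P < 0.01) in at least one hybridisation.
import numpy as np

def clean(signals, detection_p, alpha=0.01):
    """signals, detection_p: arrays of shape (probes, chips)."""
    keep = (detection_p < alpha).any(axis=1)
    return np.log2(signals[keep]), keep

rng = np.random.default_rng(1)
sig = rng.uniform(50, 5000, size=(45281, 12))
pvals = rng.uniform(0, 1, size=(45281, 12))
logged, kept = clean(sig, pvals)
print(logged.shape)  # cf. the 18,129 probes retained in the paper
```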
Chip normalization
The "cleaned" dataset was subject to a within chip normalization in which each signal was normalized by subtracting the within-chip mean and dividing by the within-chip standard deviation. The chips were therefore standardized to have a mean (μ) of 0 and a standard deviation (σ) of 1, enabling data from different chips to be combined without the risk of being influenced by differences in mean or SD. This allows subsequent linear functions to be more reasonably derived, in our case computation of differential expression. The resulting normalized dataset was subjected to hierarchical cluster analyses using the PermutMatrix software [34].
A CATT Negative Result after Treatment for Human African Trypanosomiasis Is No Indication for Cure
Background Cure after treatment for human African trypanosomiasis (HAT) is assessed by examination of the cerebrospinal fluid every 6 months, for a total period of 2 years. So far, no markers for cure or treatment failure have been identified in blood. Trypanosome-specific antibodies are detectable in blood by the Card Agglutination Test for Trypanosomiasis (CATT). We studied the value of a normalising, negative post-treatment CATT result in treated Trypanosoma brucei (T.b.) gambiense sleeping sickness patients as a marker of cure. Methodology/Principal Findings The CATT/T.b. gambiense was performed on serum of a cohort of 360 T.b. gambiense patients, consisting of 242 primary and 118 retreatment cases. The CATT results during 2 years of post-treatment follow-up were studied in relation to cure or treatment failure. At inclusion, the sensitivity of CATT was 98% (234/238) in primary cases and only 78% (91/117) in retreatment cases. After treatment, the CATT titre decreased both in cured patients and in patients experiencing treatment failure. Conclusions/Significance Though CATT is a good test to detect HAT in primary cases, a normalising or negative CATT result after treatment for HAT does not indicate cure; therefore, CATT cannot be used to monitor treatment outcome.
Introduction
Since none of the drugs for human African trypanosomiasis (HAT) is 100% efficacious, it is recommended to follow up sleeping sickness patients every 6 months after treatment, for a period of 2 years. Parasites may be difficult to detect in the blood of HAT patients experiencing treatment failure; therefore, assessment at follow-up visits relies mainly on lumbar puncture and examination of the cerebrospinal fluid (CSF) for the presence of trypanosomes and the white blood cell count. A patient is declared cured when, within 2 years, no trypanosomes have been detected and the CSF white blood cell count has returned to normal [1]. Complete follow-up is seldom achieved because, when patients feel well, they are reluctant to comply with the follow-up examinations [2][3][4][5]. So far, no markers for cure or treatment failure after HAT treatment have been identified in blood.
The card agglutination test for trypanosomiasis (CATT) is a fast and simple agglutination test for the detection of trypanosome-specific antibodies in the blood of Trypanosoma brucei (T.b.) gambiense infected patients [6]. With sensitivities between 87 and 98% and specificities of around 95%, the CATT test is extensively used in almost all HAT endemic areas for population screening, and has contributed to the current success of HAT control programs [7,8]. Given that drugs for HAT are toxic and the specificity of CATT is limited, a confirmation step by parasitological techniques is needed [7]. Trypanosome-specific antibodies detectable by CATT have been demonstrated even 24 months after successful treatment in no less than 47% of gambiense HAT patients [3,9,10]. A positive post-treatment CATT result is therefore not indicative of treatment failure, but the predictive value of a negative CATT after treatment has hitherto not been evaluated. We explored the hypothesis that a normalising, negative post-treatment CATT result indicates cure in gambiense HAT and rules out treatment failure. If such CATT-normalising patients could be released from further follow-up, this would lead to major clinical and public health benefits, as fewer lumbar punctures would be required and fewer patients would need to be followed for up to 24 months.
We report here on the pre- and post-treatment CATT serum results in a cohort of primary and retreatment HAT cases infected with T.b. gambiense.
Ethics statement
Sleeping sickness patients originate from a prospective observational study (THARSAT) [11]. The Commission for Medical Ethics of the Prince Leopold Institute of Tropical Medicine, Antwerp, Belgium and the Ethical Commission of the Ministry of Public Health, Democratic Republic of the Congo approved the study. Written informed consent was given by all study participants prior to enrolment.
Patients
The cohort consisted of 242 primary HAT cases that had never been treated for HAT and 118 retreatment cases previously treated for HAT but with trypanosomes detected in CSF at inclusion. All cases were parasitologically confirmed before enrolment and were (re)treated according to the national guidelines: primary cases in first stage (n = 41) were treated with pentamidine; primary cases in second stage were treated with melarsoprol (n = 192) or eflornithine (n = 9). Retreatment cases were treated with melarsoprol (n = 7), eflornithine (n = 52), melarsoprol-nifurtimox combination therapy (n = 57), melarsoprol-eflornithine combination therapy (n = 1) or eflornithine-nifurtimox combination therapy (n = 1). Patients were monitored for treatment outcome during 2 years. A detailed description of the clinical outcomes in the cohort is given elsewhere [11]. In brief, of the 242 primary cases, the final outcome was cure in 90 (cure or probable cure) and treatment failure in 118 (relapse, probable relapse, or HAT-related death during follow-up). Thirty-four primary cases were excluded from the analyses of post-treatment results since they could not be classified as cured or treatment failure because they were lost to follow-up, died during treatment or died over the following 2 years from non-HAT-related causes. Of the 118 retreatment cases, 85 were cured and 16 experienced a new treatment failure. Seventeen retreatment cases were lost to follow-up, died during treatment or died over the following 2 years from non-HAT-related causes and were also excluded from the analyses of post-treatment results.
CATT test
CATT/T.b. gambiense was performed following the titration method as described by the manufacturers [6] on serum taken before treatment and at 3, 6, 12, 18 and 24 months post-treatment. The end titre (the highest dilution giving agglutination) was determined. Patients with end titres ≥1:4 were considered CATT positive; end titres <1:4 were considered CATT negative.
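A trivial sketch of this positivity rule; the cutoff follows the text, while the helper name is ours:

```python
# CATT titration readout: the end titre is the highest dilution still
# agglutinating; end titres >= 1:4 are scored positive.
def catt_positive(end_titre_denominator):
    """end_titre_denominator: e.g. 4 for an end titre of 1:4."""
    return end_titre_denominator >= 4

for d in (2, 4, 16):
    print(f"1:{d} ->", "positive" if catt_positive(d) else "negative")
```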
Data analysis
The Chi-square test or Fisher's exact test (when the number of observations in a cell was <5) was performed for comparison of proportions using a 95% confidence limit. Odds ratios (OR) with binomial 95% confidence intervals (CI) were computed. STATA version 10 was used for data analysis.
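For illustration, an odds ratio with a logit (Woolf-type) 95% CI can be computed as below; the counts are invented, and the interval method is an assumption, since the exact binomial procedure used in STATA is not specified here:

```python
# Odds ratio with a 95% CI from a 2x2 table (logit/Woolf interval).
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """2x2 table: a,b = exposed with/without event; c,d = unexposed."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

print(odds_ratio_ci(20, 10, 30, 45))   # toy counts, not the study data
```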
CATT before treatment
The distribution of CATT end titres in primary and retreatment cases at inclusion is presented in Figure 1.
CATT after treatment
The CATT results after treatment, in relation to cure or treatment failure, are shown in Figure 2.
No significant relationship between CATT positivity and the occurrence of treatment failure (p > 0.05) could be observed at 3, 6 and 18 months post-treatment. A significantly higher proportion of treatment failure cases tested positive with the CATT compared to the cases that were cured at 12 months (Chi-square test, p = 0.040) and 24 months (Fisher's exact test, p = 0.027) after treatment. The odds of a treatment failure case being CATT positive are 2.76 (95% CI 1.03-7.4) and 13.6 (95% CI 1.32-140) times greater than the odds of a cured case being CATT positive 12 and 24 months after treatment, respectively. In 7/113 primary cases trypanosomes were detected in blood at the time of relapse. Two of them relapsed at 3 months with CATT titres 1:8 and 1:16; three
Author Summary
The 2 year follow-up period required after treatment of human African trypanosomiasis (HAT) patients is a major challenge for patients and control programmes alike. The patient should return every 6 months for lumbar puncture and cerebrospinal fluid examination since, so far, no markers for cure have been identified in blood. The Card Agglutination Test for Trypanosomiasis (CATT) is a simple, rapid test for trypanosome-specific antibody detection in blood that is extensively used in endemic areas to screen for HAT. We examined the value of a normalising CATT as a marker for treatment outcome. We observed that CATT titres decreased after treatment both in patients who experienced treatment failure as well as in cured patients. We conclude that CATT, though a good screening test, is unreliable for monitoring treatment outcome. We also showed that the sensitivity of CATT in relapse cases was as low as 78%, and as a consequence some relapse cases might be missed in screening programs if they have no clinical signs yet.
Discussion
We demonstrate for the first time that CATT sensitivity is low in retreatment cases, and that CATT titres decrease after treatment both in patients who experience treatment failure as well as in cured patients.
Before treatment, the CATT sensitivity in primary cases falls within the sensitivities previously reported for CATT in the Democratic Republic of the Congo, and for HAT in general [7,10]. The low sensitivity of 78% observed in retreatment cases is explained by the decrease in CATT titre after a previous treatment, and largely corresponds to the proportion of CATT positives observed 6 and 12 months post-treatment within the groups of treatment failures. The observed end titres are relatively low, being below 1:16 in 39% and 83% of the primary and retreatment cases, respectively. Treating serological cases based on a CATT end titre ≥1:16, without parasitological evidence, might therefore miss some HAT cases.
Although it has been shown that trypanosome-specific antibody concentrations in the blood of cured patients may persist up to 2 years or longer after treatment [3,9,10,12], reports about the concentrations of specific antibodies in the serum of sleeping sickness patients who experience treatment failure are rare. In 22 relapsing patients, Frézil et al. [12] describe that the immunofluorescence test remained positive in the majority of cases and was doubtful/negative in only 1 case. In a small cohort of 32 relapse cases, Miézan et al. [3] describe a decreasing antibody concentration and a CATT positivity rate of 94% at the moment relapse was diagnosed.
A negative CATT after unsuccessful treatment might be explained by trypanosomes that are cleared from peripheral tissues, such as lymph and blood, but that survive in the brain and thus do not trigger specific antibody production in the blood.
Our study has a number of limitations. As a consequence of the diagnostic procedure used by the HAT control program to detect HAT, the observed sensitivity of CATT of 98% in our group of primary cases might be higher than in other patient cohorts. Indeed, the patients in our cohort were identified as follows: CATT on whole blood, alongside cervical lymph node palpation, was used as a screening test, and only those persons with a CATT positive result on whole blood or with enlarged cervical lymph nodes underwent parasitological examinations for case confirmation. Although part of the false negatives in CATT will be found by cervical lymph node palpation, the true sensitivity of CATT in the primary cases might be lower than 98%. The number of treatment failures detected after ≥12 months is low, which prevents us from making reliable estimates of a further decrease or increase in CATT titres after that time point, or of the proportion of CATT positives. Moreover, follow-up examinations in this cohort, as in routine clinical care, were focused on cerebrospinal fluid examination, and the need for blood examinations may have been given less importance by the nursing staff. As a consequence, the cohort does not allow us to check whether relapsing patients with trypanosomes in the blood had higher CATT titres than those without, since in the majority relapse was confirmed by the finding of trypanosomes in the CSF and no further blood examinations were performed. Finally, the majority of primary cases in this study were treated with the trypanolytic drug melarsoprol in an area of high treatment failure rates. Although a similar trend was observed in retreatment cases, who were treated differently, we cannot exclude that results could differ in primary HAT patients treated with other drugs.
Our findings have two practical implications. First, the considerable proportion of CATT negative results in cases experiencing treatment failure, which increases over time, implies that a post-treatment CATT negative result does not necessarily indicate cure. Knowing, moreover, that a post-treatment CATT positive result does not indicate treatment failure, we conclude that CATT is unreliable for monitoring treatment outcome. Secondly, screening programs for HAT should take into consideration that a careful history about past HAT episodes is paramount, as the sensitivity of CATT in relapse cases is not optimal. Cases experiencing treatment failure are more likely to be false negative in CATT than new cases and, as a consequence, might be missed (i.e. not offered parasitological investigations) if they show no clinical signs. These data might cast some doubt on the performance of CATT as a screening test in the detection process, given that some relapse cases appear to be negative in the CATT. Molecular (or other) diagnostics might eventually be taken up in an improved algorithm for diagnosis or follow-up, but further investigation of these tests is necessary.
Circulating nucleosomes as predictive markers of severe acute pancreatitis
Background The components of nucleosomes, which contain DNA and histones, are released into the circulation from damaged cells and can promote inflammation. We studied whether the on-admission levels of circulating nucleosomes predict the development of severe acute pancreatitis (AP), in particular among patients who present without clinical signs of organ dysfunction. Methods This is a prospective study of 74 AP patients admitted to Helsinki University Hospital from 2003 to 2007. Twenty-three patients had mild, 27 moderately severe, and 24 severe AP as defined by the revised Atlanta criteria. 14/24 severe AP patients had no sign of organ dysfunction on admission (modified Marshall score <2). Blood samples were obtained on admission and the plasma levels of nucleosomes were measured using an enzyme-linked immunosorbent assay. Results The on-admission levels of nucleosomes were significantly higher in severe AP than in mild or moderately severe AP (p < 0.001 for all), higher in non-survivors (n = 8) than in survivors (p = 0.019), and correlated with the on-admission levels of C-reactive protein (p < 0.001) and creatinine (p < 0.001). Among the AP patients who presented without organ dysfunction, the on-admission nucleosome level was an independent predictor of severe AP (p = 0.038, gender-adjusted forward-stepping logistic regression). Conclusions Circulating nucleosome levels may be helpful in identifying, on admission to hospital, the AP patients who present without clinical signs of organ dysfunction and yet are bound to develop organ dysfunction during hospitalization.
Background
Acute pancreatitis (AP) is usually a mild disease with a favorable outcome. However, about 20 % of patients develop moderately severe or severe disease, as defined by the revised Atlanta classification [1]. Moderately severe AP is characterized by the presence of local complications and/or transient (<48 h) organ dysfunction (OD) and very low mortality [2]. In severe AP, OD is persistent and mortality is high, up to 70 % [2][3][4][5]. Evidence has accumulated to show that early aggressive intravenous hydration decreases morbidity and mortality [6,7]. In addition, the patients at risk of developing severe AP, particularly those who present without OD, might benefit from immunomodulatory treatment [8][9][10]. About half of the AP patients with OD do not have clinical signs of OD at presentation [8,11,12]. At present, there are no means to identify these patients on admission to the hospital.
The inflammatory reaction in AP is considered to have its origin in premature activation of pancreatic proteases promoting acinar cell apoptosis and necrosis. Damaged or dying pancreatic acinar cells release intracellular contents including nuclear damage-associated molecular patterns (nDAMPs), such as DNA and histones, which promote the accumulation of innate immune cells into the pancreas and generation of cytokines, among other soluble mediators of inflammation. The release of phlogistic mediators into the circulation elicits systemic inflammation, which is considered to contribute to the development of remote organ injury (for reviews, see refs [13,14]).
The nucleosome, a subunit of nuclear chromatin, consists of a central core formed by a histone octamer (each of the four core histones represented twice) wrapped by 147 base pairs of double-stranded DNA [15]. Cellular damage, such as apoptosis and necrosis, promotes the release of nucleosomes, among other nDAMPs, into the extracellular space, where DNA and histones exhibit pro-inflammatory activity [14,16]. Nucleosomes can also be exported within neutrophil extracellular traps (NETs) during NETosis, a unique form of neutrophil cell death at sites of infection and inflammation [17,18]. Although elevated levels of circulating nucleosomes are detected in patients with sepsis [19,20], in other disorders characterized by systemic inflammation [21][22][23], and in experimental AP [24], to our knowledge, nucleosome levels have not been systematically studied in patients with AP. This prompted us to investigate whether on-admission plasma nucleosome levels associate with the severity of AP and predict the development of severe AP, in other words persistent OD.
Patients
A cohort of 74 prospectively collected, non-consecutive patients with AP admitted to Helsinki University Hospital between June 2003 and December 2007 was included in the study. Exclusion criteria were a previous history of chronic pancreatitis and the onset of symptoms more than 72 h before admittance to the hospital.
The diagnosis of AP was made if two of the following three features were present: acute onset of upper epigastric pain, serum or plasma amylase level at least three times greater than the upper limit of normal, and characteristic findings of AP in imaging studies (computed tomography or magnetic resonance imaging). The patients were treated according to the international guidelines [25] with, e.g., early aggressive intravenous hydration, no routine use of prophylactic antibiotics, nasojejunal tube for enteral feeding in severe AP, and endoscopic retrograde cholangiopancreatography if concurrent cholangitis was present.
After inclusion, demographic and clinical characteristics of patients were collected from medical charts. The severity of AP was graded retrospectively according to the revised Atlanta classification [1] into mild (no systemic or local complications), moderately severe (local and/or systemic complications without persistent OD), and severe (persistent OD). The Acute Physiology and Chronic Health Evaluation (APACHE) II score, the Sepsis-related Organ Failure Assessment (SOFA) score, and the modified Marshall score (MMS) were determined to evaluate the severity of OD on admission. The MMS [26] was used for assessing the presence of OD on admission, as recommended in the revised Atlanta classification [1]. In the MMS, three organ systems (respiratory, renal, and cardiac) are assessed, and if the score is ≥2 for one of those organ systems, OD is present. The flow chart of the patients is presented in Fig. 1.
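A minimal sketch of this decision rule (the subscore names are illustrative; the ≥2 threshold follows the text):

```python
# Modified Marshall criterion as described above: organ dysfunction (OD)
# is present when any of the three assessed systems scores >= 2.
def has_organ_dysfunction(mms):
    """mms: dict of subscores, e.g. {'respiratory': 1, 'renal': 2, 'cardiac': 0}"""
    return any(score >= 2 for score in mms.values())

print(has_organ_dysfunction({"respiratory": 1, "renal": 2, "cardiac": 0}))  # True
print(has_organ_dysfunction({"respiratory": 1, "renal": 1, "cardiac": 1}))  # False
```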
Each patient, or next of kin, gave informed consent. The Ethics Committee of Helsinki University Hospital (Department of Surgery) approved the study.
Samples and sample analyses
The plasma samples were taken 0-12 h after admission, collected into EDTA-treated tubes and stored at −80°C until assayed. Nucleosomes were quantified with the Cell Death Detection ELISA PLUS kit (Roche, Basel, Switzerland) according to the manufacturer's instructions. The results are presented as absorbance units (AU). Negative values were set to zero.
Plasma levels of C-reactive protein (CRP) (normal reference range less than 10 mg/L) and creatinine (normal reference range 50-90 μmol/L) were determined in accordance with the hospital laboratory routine. CRP and creatinine levels were used as reference markers because they belong to routine follow-up blood chemistry of AP patients and have prognostic value in AP [7,34].
The median storage time of the plasma samples was long: 9.3 years (range 6.2-9.8 years) in the mild AP group, 8.3 years (range 5.8-9.8 years) in the moderately severe AP group, and 7.3 years (range 5.3-9.8 years) in the severe AP group (p < 0.001, Jonckheere-Terpstra test for trend). However, sample age did not correlate with nucleosome level in mild, moderately severe, or severe AP (p = 0.153, p = 0.928, and p = 0.631, respectively), and in multivariate logistic regression analysis the nucleosome level remained an independent predictor of OD regardless of the storage time.

Fig. 1 Flow chart of the patients. Patients' classification according to the admission modified Marshall score (MMS) and patients' outcome according to the revised Atlanta criteria [1]. OD, organ dysfunction; AP, acute pancreatitis
Statistics
Statistical analysis was performed using IBM SPSS Statistics version 19 (SPSS, Chicago, Illinois, USA). Nonparametric tests were used because of the skewness of the data. The results are given as medians and interquartile ranges (IQRs) or numbers of patients and percentages. Comparisons between two groups were made using the Mann-Whitney U test for continuous variables or Fisher's exact test for binary variables. Comparisons between three ordered groups were tested with the Jonckheere-Terpstra test for trend. Correlations between two continuous variables were assessed using Spearman rank correlation. Two-sided tests were used, and p values of less than 0.05 were considered significant. Receiver operating characteristic (ROC) curve analysis was used to find a clinically optimal cutoff value for each biomarker: we required a specificity of >90 % and chose the point on the curve at which the gain in sensitivity began to level off. Areas under the ROC curves (AUC) were calculated, as well as the corresponding sensitivities, specificities, positive likelihood ratios (+LR), negative likelihood ratios (−LR), and diagnostic odds ratios (DOR) for the cutoff values, with 95 % confidence intervals [27]. The DOR is the ratio of the odds of a positive test result among patients with OD to the odds of a positive test result among patients without OD; the higher the value, the better the discriminatory performance of the test [28]. Finally, logistic regression analysis was performed to identify independent markers predicting severe AP. Forward conditional stepping was used to select variables into the post hoc model with a p < 0.05 inclusion criterion. Interactions were considered, but no significant interactions were found.
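To make the cutoff-selection rule concrete, the following minimal sketch (not the authors' SPSS analysis; the function and variable names are hypothetical) scans all candidate thresholds, keeps those with specificity above 90 %, and reports the sensitivity and diagnostic odds ratio of the best one:

# Sketch of the cutoff selection described above: among all thresholds with
# specificity > 90 %, take the one giving the highest sensitivity, then
# report sensitivity, specificity and the diagnostic odds ratio (DOR).
import numpy as np

def optimal_cutoff(y_true, marker, min_specificity=0.90):
    y_true = np.asarray(y_true, dtype=bool)   # True = severe AP (persistent OD)
    marker = np.asarray(marker, dtype=float)  # e.g. nucleosome level in AU
    best = None
    for cut in np.unique(marker):
        pred = marker >= cut                  # positive test: level at/above cutoff
        tp = np.sum(pred & y_true); fn = np.sum(~pred & y_true)
        fp = np.sum(pred & ~y_true); tn = np.sum(~pred & ~y_true)
        sens = tp / (tp + fn); spec = tn / (tn + fp)
        if spec > min_specificity and (best is None or sens > best[1]):
            dor = (tp * tn) / (fp * fn) if fp and fn else float("inf")
            best = (cut, sens, spec, dor)
    return best  # (cutoff, sensitivity, specificity, DOR), or None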
Patients
Characteristics of the patients are shown in Table 1. All patients with severe AP developed either respiratory or renal failure requiring invasive mechanical ventilation and/or haemodialysis. Seven of them (29 %) died, four during the first hospital week (range 1-6 days) and the other three 11-90 days after admission. One patient, already recovering from moderately severe AP, experienced sudden death of unknown immediate cause.
Nucleosome, CRP, and creatinine levels as predictors of severe AP

All patients

Classification criteria, the Atlanta criteria and their revised form [1], have commonly been used also in predicting the outcome of AP. We therefore first determined whether the on-admission level of circulating nucleosomes predicts the development of severe AP among all patients categorized according to the revised Atlanta criteria [1].
The predictive value was measured by determining the AUCs from the ROC curve (Fig. 2a). AUCs were 0.718 for nucleosomes, 0.770 for creatinine, and 0.673 for CRP (Table 4), indicating that the predictive values of the variables were comparable. We then chose clinically optimal cutoff values (specificity >90 %) from the ROC curves to analyze the corresponding statistical parameters for the biomarkers to predict severe AP. Of the three variables studied, creatinine (cutoff ≥110 μmol/L) had the highest predictive power for OD, with a sensitivity of 46 % and a specificity of 91 % (Table 4).
In univariate logistic regression analysis, nucleosomes, CRP, and creatinine, both as continuous and as binary variables, predicted the development of severe AP. The stepwise forward logistic regression analysis of CRP, creatinine, and nucleosomes (gender adjusted) revealed that creatinine, as a binary variable (cutoff ≥110 μmol/L), was an independent, significant predictor of severe AP (Table 5).

Moderately severe AP and severe AP

Because most patients with mild AP recover uneventfully within a few days, we excluded these patients to test whether moderately severe AP (n = 27) can be distinguished from severe AP (n = 24) using the on-admission level of circulating nucleosomes. AUCs were 0.661 for nucleosomes, 0.717 for creatinine, and 0.550 for CRP (Table 4). We then chose new clinically optimal cutoff values, optimized for the moderately severe and severe AP patients, to predict severe AP with high specificity (>90 %) from the ROC curves (Fig. 2b). The cutoff points were ≥386 mg/L for CRP, ≥287 μmol/L for creatinine, and ≥0.57 AU for nucleosomes. The specificity and sensitivity for nucleosomes and those for creatinine were comparable (Table 4).
In univariate logistic regression analysis, however, only male gender was a significant predictor of severe AP. Using the gender-adjusted stepwise forward logistic regression analysis of nucleosomes and creatinine, only male gender was an independent predictor of severe AP (Table 5).
Predicting severe AP in patients with OD on admission (n = 16)
A question of clinical interest is whether nucleosome levels distinguish, on admission, transient OD patients from persistent OD patients. Six of the 16 patients who presented with OD had transient OD, in other words, OD resolved within 48 h, and were ultimately allocated into the moderately severe AP group. The nucleosome, CRP, or creatinine levels of the six transient OD patients did not differ significantly from those of the ten persistent OD patients (Table 3).
Nucleosome levels predict severe AP among patients without OD on admission (n = 58)
A total of 14/24 patients with severe AP and another 44 patients with mild or moderately severe AP had an MMS <2 on admission (Fig. 1). Thus, we analyzed whether the variables studied predicted the development of OD in the 14 patients who presented without OD. The AUCs were 0.648 for nucleosomes, 0.670 for creatinine, and 0.539 for CRP (Table 4). We then determined the clinically optimal cutoff values (specificity >90 %) using the ROC curves of the 58 patients without OD on admission (Fig. 2c). The new cutoff values were similar to those obtained for all patients (Table 4).
In univariate logistic regression analysis, only the nucleosome level, as a continuous or a binary variable, was a significant predictor of severe AP. In the gender-adjusted stepwise forward logistic regression model of nucleosomes and creatinine, the nucleosome level as a continuous variable served as an independent predictor of severe AP (Table 5).
Discussion
The results show that circulating nucleosome levels in patients with AP are elevated, associate with the severity of AP, and predict, on admission to hospital, the development of severe AP among patients who present without clinical signs of OD (MMS <2). Our results are in accordance with the findings that circulating DNA levels are elevated in patients with severe AP [31,32] and that nucleosome levels are elevated in experimental pancreatitis [24]. To our knowledge, this study demonstrates for the first time the predictive value of circulating nucleosomes in AP.
Several biomarkers have been evaluated as predictors of the course of AP [33][34][35][36][37]. In these studies, however, the OD group consistently comprised all patients with OD, that is, both the patients who had OD already at presentation and the patients who presented without OD but went on to develop it. Including the former may distort the results. Accordingly, in the present study, the analysis of all OD patients revealed that nucleosome levels predict OD; the analysis confined to patients with moderately severe and severe AP, excluding mild AP, showed that nucleosome levels did not predict OD; and only the nucleosome levels predicted OD among the patients presenting without OD. To our knowledge, nucleosomes, as demonstrated in the present study, the adenosine-generating ecto-5′-nucleotidase/CD73 [11], and the cytokines interleukin 8, hepatocyte growth factor, and granulocyte colony-stimulating factor [12] are so far the only markers that may help to identify the patients who present without signs of OD but are bound to develop it during the course of AP.
In the present study, we used highly specific cutoff values (specificity >90 %) instead of maximizing the sum of sensitivity and specificity. The former approach, which results in low sensitivity of the markers, is, we think, more realistic in clinical work with limited ICU capacity. With the maximized sum of sensitivity and specificity (Figs. 2a and 2c), the sensitivity of circulating nucleosomes in predicting OD would have reached 83 % in the whole patient population (specificity 72 %) and 71 % among the patients without OD on admission (specificity 79.5 %). The analysis of moderately severe AP and severe AP patients was performed because mild AP patients may distort the results in the whole patient population, since they form the majority of AP patients and most of them recover uneventfully [11]. When the patients with mild AP were excluded from the analysis, none of the markers analyzed distinguished the patients with severe AP from those with moderately severe AP. In the analysis of patients who presented with OD (admission MMS ≥2), we tried to determine whether the persistence of OD could be predicted already on admission using circulating nucleosome, CRP, or creatinine levels. However, no difference between the transient and persistent OD groups was found (Table 3).
The finding may be explained, at least in part, by the origins of circulating nucleosomes in AP, which are not known in detail but are likely to be diverse. Neutrophils are an intriguing possibility, as they are the most abundant leukocytes, are activated in patients with severe AP [38], and, upon stimulation with cytokines, make extracellular traps [17] comprising DNA and core histones. Other sources of nucleosomes, at least in experimental AP, include apoptosis and necroptosis [39,40] and tissue injury associated with circulatory shock/hypoperfusion [41]. From the clinical point of view, it is impossible to say whether OD that resolves quickly, within 48 h, is due to intensive treatment or represents the natural course of AP. Therefore, if the patient presents with OD, optimal treatment of severe AP needs to be started immediately, preferably in the ICU [29,30].
The possibility that impaired renal function contributes significantly to the increased nucleosome levels is not evident, because nucleosome clearance appears to be mediated mostly by the liver [42][43][44]. In the present study, the major finding was that nucleosome levels predicted the development of OD among the 14 patients who presented without OD but went on to develop it. The creatinine levels of these patients were ≤170 μmol/L, as defined by the MMS criteria [1]. Consequently, the predictive value of nucleosomes cannot be explained by impaired renal function.
Identifying the patients who present without OD (MMS <2) but are bound to develop severe AP is a great clinical challenge. Indeed, such patients form about half of the AP patients with OD [8,11,12]. The findings in the present study suggest that the on-admission levels of circulating nucleosomes help to identify, on admission to hospital, the patients who present without OD but are bound to develop it. Among such patients, the levels of creatinine or CRP did not predict the development of severe AP in the present study or in our previous studies [11,12]. The present study, however, has limitations. The number of OD patients studied was limited, and the cutoff values were optimized. In addition, the storage time of the plasma samples was up to 10 years. Long-term stability investigations have revealed a 7 % decrease per year in serum levels of nucleosomes during sample storage at −70 °C [45]. However, the differences in sample storage time may not explain our findings, because storage time did not correlate with nucleosome levels and, furthermore, the nucleosome level was an independent predictor of OD regardless of sample age. The release of DAMPs is considered to play a central role in the pathogenesis of AP, linking local tissue damage and death to the systemic inflammatory response. Therefore, DAMPs might offer several novel therapeutic strategies in AP, such as preventing DAMP release [14], neutralizing or blocking DAMPs [46], or blocking the DAMP receptors or their signaling [47,48]. These novel therapeutic modalities may be beneficial for the AP patients who present with OD and, in particular, for the patients who present without OD but are bound to develop it.
Conclusions
Our results show that on-admission levels of circulating nucleosomes are elevated in AP and associated with the severity of the disease. In addition, our data show, for the first time, that nucleosome levels may serve as an independent predictor of severe AP among the patients who present without signs of OD (MMS <2), a patient group that may be an optimal target for immunomodulatory treatment modalities.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions

AP collected clinical data, participated in data analysis, and drafted the manuscript. AR participated in designing the study, quantified the circulating nucleosome levels, and participated in the drafting of the manuscript. HM performed statistical analysis and participated in the drafting of the manuscript. LK, PP, and HRa participated in designing and coordinating the study and provided supervision. HRe participated in designing and coordinating the study, helped draft the manuscript, and provided supervision. All authors critically revised the manuscript and read and approved the final version.
THE ECONOMIC COST OF CRIMINALITY: AN ANALYSIS OF ITS IMPACT ON DEVELOPMENT
Maintaining peace and harmony is a crucial challenge for developing nations, and the crime rate is a significant factor in achieving this goal. This study examines the relationship between a developing country's crime rate and the economic factors that influence it. This quantitative study tests objective hypotheses through multiple regression analysis. The study utilized secondary data from the World Bank from 1990 to 2018 to assess the Philippines' crime rate and economic factors. The analysis reveals that economic factors such as urbanization, GDP per capita, financial development, and the labor force substantially impact the national crime rate. This study provides policymakers with vital insights for implementing evidence-based strategies for peace and development initiatives.
Introduction
The intricate nature of a developing country's economy in addressing criminal activities has been a crucial consideration for policymakers seeking to promote peace and stability. Strengthening a country's laws and regulations is essential for creating a secure and healthy environment, which is attractive to foreign and domestic investors, visitors, tourists (Pilapil-Añasco & C. Lizada, 2014), and other stakeholders. This, in turn, can lead to economic growth.
In previous years, crime has been a challenge to peace and security; for example, the estimated number of recorded crimes jumped from around 4 million cases in 2010 to over 5 million in 2011 (PSA, 2013). The safety and security of doing business, and the ease of doing business, have been hampered in succeeding years by crime and criminality (Plaza, 2020).
The problem of crime and criminality affects the economy of a country, as it is one of the considerations in engaging in economic activities both locally and internationally (Jonathan et al., 2021). Furthermore, crime in any form affects everyone living in society and will continue to destabilize the economy and the development of the country (Paredes, 2002).
Crime may affect economic development and the social factors that underpin growth and prosperity. Measuring these economic factors to understand how crime disrupts and destabilizes the country is the primary target of this study.
Objective of the Study
The primary aim of the study is to examine the relationship between economic factors and the crime rate in the country.
Review of Related Literature
This section explores information related to the research that helps to analyze and frame the essence of the study, theoretically defining knowledge and concepts, strengths, and limitations as guiding principles expressing the study's intent. Thotakura (2011) defines crime as a socially unacceptable act that goes against the norms and values of society. Criminal behavior is not inherent in individuals; rather, it is influenced by a range of factors, including social, economic, biological, and psychological factors. During the 19th century, crime was a topic of significant interest and concern for both the government and the general public in Europe. According to Malik (2016), the rise in urban-industrial crime can be attributed to social changes and complex processes associated with urban development.
Population
The United Nations Office for West Africa (UNOWA) argued in 2016 that a rapid and disorganized shift of people from rural areas to urban centers is taking place in many developing countries. This phenomenon hinders the ability of national governments and local authorities to provide security and essential social infrastructure in urban areas. Consequently, it results in the growth of slums or shantytowns that engulf and overwhelm the already compromised urban infrastructure, further exacerbating security and crime challenges (Owusu et al., 2015).
In the Philippines, many crimes directly related to urbanization raise serious concern for the government and civil society; foremost among these are street crimes, illegal drug trafficking, robbery and theft, violent crimes against women and children, and terrorism (Leones, 2004).
Foreign Direct Investments
Foreign Direct Investment (FDI) has played a fundamental role in reshaping the business environment in CEE countries over the last 20 years and has helped in the development of a well-functioning market economy (Cazacu et al., 2021).

In the assessment conducted by Brown & Hibbert (2019), crime was observed to affect the financial-services sub-sector of the service industry. Policymakers are therefore interested in reducing criminal activities in order to boost FDI in the affected sectors.
Gross Domestic Product per Capita
The Economist (2011) looked at the relationship between GDP per capita and crime at the state level during the recession from 2007 until 2010. At that time, crime rates had been dropping for two decades nationwide, and the Economist wanted to investigate whether this trend would continue through harsh economic times.
Roman (2013) conducted research examining the relationship between GDP and violent and property crime rates from 1960 until 2013. He begins by outlining the difficulty of testing the hypothesis that large macroeconomic factors explain crime trends, since crime obviously affects macroeconomic factors in turn. The thesis by Thapa (2021) concluded that economic growth (states' GDPs) positively affects economic offenses and crime against women.
Financial Development
The growing digital economy, together with periods of crisis such as COVID-19, creates opportunities for criminals to engage in new crimes: cyber scams, fraud, disinformation, and other cyber-enabled crimes (Violeta Achim et al., 2021).
In the study conducted by Barua & Mahesh (2018), it was observed that in the presence of high income inequality, states witnessed an increase in crime rates. Thus, financial development needs to be accompanied by other policies that reduce inequality and prioritize inclusivity, so that as income inequality falls, the benefits of financial development may be realized.
Total Unemployment
In Sweden, in a study conducted by Lundqvist (2018), the results suggested at best a weak effect of unemployment on violent crime and no effect of unemployment on property crime, which goes against established crime theory.
Draca & Machin (2015) mentioned that if the loot value from crime increases, the crime rate will necessarily increase. Becker (1968) and Ehrlich (1973), as cited, recognize that the risk appetite of a criminal is another element determining the type of individuals who are willing to commit a crime.
Labor Force Participation Rate
Those who were working were less likely to have reported engaging in criminal behavior in the year prior to their interview. Young adults employed in secondary-sector jobs, which are more marginal to the labor market, are more likely to have committed criminal violations. These effects were found in urban areas but not among the rural sub-samples (Crutchfield et al., 2006; Crutchfield & Pitchford, 1997). A paper by Gustavsson & Österholm (2012) presents strong evidence against mean reversion in disaggregated participation rates of subpopulations of the US labor force. Thus, the major implication is that resorting to unemployment rates for subpopulations does not overcome the informational problems of a non-stationary aggregate participation rate.
Methodology
The approach employed in the study is presented, including the research design, data source, and statistical treatment of the data.
Research Design
This study employed a quantitative method (Creswell & Creswell, 2018; Greene, 2013; Perreault, 2011), which examines the link between factors that may be quantified to evaluate objective hypotheses. Multiple regression analysis is used to measure the relationship between the dependent and independent variables. Independent variables are variables whose values are known and that can explain the dependent variable (Dhakal, 2018; Frieman et al., 2022). In other words, multiple regression is a statistical method for examining the connection between numerous independent variables and a single dependent variable. The goal of multiple regression analysis is to predict the value of a single dependent variable from the known independent variables (Moore et al., 2006).
Likewise, several studies have used multiple regression to measure the relationship between the crime rate and economic factors, which supports the validity of its use in this study (Abdullah, 2015; Hosseini et al., 2019; Wijaya, 2021).
Data Sources
The study makes use of available secondary data from the World Bank. These secondary data cover the years 1990 to 2018 (a 29-year observation period) and measure the crime rate of the Philippines and the economic factors thereof (Abdouli & Hammami, 2020). The variables used in the study and their descriptions are given in Table 1.
Statistical Treatment of Data
The gathered data were analyzed through multiple linear regression, an efficient statistical tool for regressing and measuring the relationship between the variables. Each predictor value is weighted, the weights denoting their relative contribution to the overall prediction:

Y = a + b1X1 + b2X2 + … + bnXn

Here, Y is the dependent variable, and X1, …, Xn are the n independent variables. In calculating the weights a, b1, …, bn, regression analysis ensures maximal prediction of the dependent variable from the set of independent variables. This is usually done by least squares estimation (Moore et al., 2006).
Model Specification
The regression equation of the econometric model of the study is as follows:

CRi = β0 + β1X1i + β2X2i + β3X3i + β4X4i + β5X5i + β6X6i + μi

where CRi is the crime rate in year i; β0 is the intercept term; β1, …, β6 are the efficiency parameters to be estimated; X1i is the urban population; X2i is foreign direct investment; X3i is gross domestic product per capita; X4i is financial development; X5i is total unemployment; X6i is the labor force participation rate; and μi represents the error term.
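As an illustration only (this is not the authors' code; the data file philippines_1990_2018.csv and its column names are hypothetical placeholders for the World Bank series), the specified model could be estimated by ordinary least squares as follows:

# Sketch: fitting CRi = b0 + b1*X1i + ... + b6*X6i + e by ordinary least
# squares, as in the paper's multiple regression analysis.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("philippines_1990_2018.csv")        # hypothetical data file
X = df[["urban_pop", "fdi", "gdp_per_capita",
        "financial_dev", "unemployment", "labor_force"]]
X = sm.add_constant(X)                               # adds the intercept b0
model = sm.OLS(df["crime_rate"], X).fit()
print(model.summary())                               # R^2, F-statistic, coefficients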
Results and Discussions
A multiple regression was run to predict the crime rate from the urban population, foreign direct investment, GDP per capita, financial development, total unemployment, and labor force participation. The r² value indicates the proportion of variance in the dependent variable that can be explained by the independent variables; with a value of 0.8205, the independent variables explain 82% of the variability of the dependent variable. Furthermore, the output shows that the independent variables statistically significantly predict the dependent variable, F(6, 22) = 16.76, p < 0.001, i.e., the model is statistically significant (p < 0.05). Moreover, the results revealed a significant relationship between the urban population (coefficient 2.18, p < 0.001) and the crime rate, where an increase in the population translates into an increase in the crime rate in the country. This is in consonance with the findings of Battin & Crowl (2017), Chang et al. (2021), and Hu et al. (2018), stating that crime mostly happens in urbanized areas with many available gathering places. Moreover, different crimes happen in urban settings, such as property crime, collective violence, robbery, and aggravated assault; highly urbanized cities and migration (from rural to urban areas) have a strong relationship with the crime rate (Lodhi & Tilly, 1973; Qi, 2020). Conversely, not all crimes happen in one specific location; rather, crime happens in "crime hotspots," where offenders venture into unknown territory and frequently select targets in or near places they are most familiar with as part of their activity space (Tayebi et al., 2016).
Likewise, GDP per capita (coefficient 0.0033882, p < 0.001) is highly significant for the crime rate, indicating that a 1% increase in GDP per capita would slightly increase the crime rate, by 0.33%. The findings resonate with the studies of Andresen (2015) and Cui & Hazra (2017), stating that macroeconomic variables matter and significantly affect the crime level. The latest statistics show a year-on-year reduction in the crime rate in the Philippines (2019: 5.7, 2020: 4.8, and 2021: 4.4) (PSA, 2021), while GDP per capita accelerated (2018: 4.87, 2019: 4.68, and 2020: −10.78), higher than the target (World Bank, 2021). Although there was a setback due to the pandemic, the country bounced back with an 8.3% growth rate (PSA, 2022). However, this finding has been disputed, as other work finds no sufficient evidence that crime is related to real GDP per capita and unemployment (Cui & Hazra, 2017). Also, an increase in the crime rate causes a ripple reduction in GDP (Plotnikov, 2021).
In addition, financial development shows a highly significant relationship with the crime rate, which may be interpreted as the crime rate decreasing by 21.94% (p < 0.001) when financial development increases. "Money is the root of all evil": money is a motivation for crime and violence (Coleman, 1992). One might expect that as people have more money, crime rises; however, the findings differ, in that the more money a person has, the lower the chance that crime occurs. The findings are supported by Fajnzylber et al. (2002), stating that the less inequality there is, the less crime may happen. This is refuted by Moran (2020): abundance did not translate into decreasing crime in Latin America, a direct result of a weak criminal justice system allowing profit from illegal enterprise. Furthermore, another study refuted the claim that financial development decreases crime, finding instead a positive relationship with crime; the main factor was income inequality, whereby an increase in financial development also increases crime by inciting criminal behavior (Barua & Mahesh, 2018). Indeed, inequality and poverty lead to problems such as crime and violence (Fajnzylber et al., 2002).
Similarly, the results show a positive relationship between the crime rate and the labor force (p < 0.005), indicating that an increase in the labor force (people who have work) results in an increase in the crime rate. These findings agree with Harun et al. (2021), in which more crime happened among working women than among their male counterparts; the stigma of women being weak and vulnerable has caused an increase in crime within the labor force while decreasing labor force participation (Chakraborty et al., 2018). Furthermore, hiring workers may reduce unemployment, but this increases the crime rate among employed workers (Engelhardt et al., 2008). The inverse relationship between employment and the crime rate gives a clear picture of why many prefer not to work, as the employed tend to show a higher crime rate than those who are not employed (Wang & Minor, 2002). Conversely, government intervention, whether through small or sufficient subsidies, may increase employment, which would translate into an increase in social welfare and a decrease in crime rates, raising society's welfare. Likewise, an auxiliary study refuted the findings, reporting that people who were employed were less likely to report committing a crime (Crutchfield et al., 2006).
Nevertheless, foreign direct investment and total unemployment show no significant relationship with the crime rate, which means that they are not factors that can increase or decrease the crime rate of the country. Although some studies found that FDI has a negative relationship with the crime rate (Cabral et al., 2019), others found that FDI does not significantly affect the crime rate (Afriyanto, 2017). Moreover, unemployment has an inverse effect on the crime rate, which tells us that an increase in unemployment decreases the crime rate (Lee, 2018).
Conclusions and Recommendations
The following presents the conclusions and recommendations of the study, obtained from the analysis of the results.
Conclusions
Based on the findings of the study, the following conclusions were drawn: 1) The crime rate is positively and significantly related to the urban population, where an increase in the number of people in urban cities/places implies an increase in crime.
2) The Philippines, with its accelerating GDP per capita, would see an associated increase in the crime rate in the country. Peace and security are the key factors in ensuring consistent growth while preventing crime-related violence. 3) With the financial development of the country, an increase in financial status and stability translates into a negative effect on crime-related incidence. The more financially capable an individual, the lower the likelihood of a crime incident. 4) People who are employed have a higher exposure to crime, where environmental factors must be considered; in particular, people going to and from work might be involved in crime-related incidents.
Recommendations
From the conclusions rendered above, the researcher would like to recommend the following. To the government and attached agencies/departments: to impose policies on the implementation as well as on the structure of the Philippine National Police and attached agencies in the country, specifically on technology budget allocation and the intelligence information of the national police, on par with other developed countries.
To the policymakers: to strengthen the existing laws on criminality and to implement and create new laws pertaining to criminal offenses that hamper the economic development of the country. In addition, to lift the restrictions and limitations on the national police and attached agencies regarding access to information and the jurisdiction of the law.
To future researchers: many aspects of the study need further investigation to fill in the gaps, including updated data on the crime rate; other statistical tools for addressing social issues related to crime and the economic development of the country, such as forecasting or Granger causality, should also be explored. Hence, the next generation of researchers is encouraged to conduct explorations valuing the economic development of the country and its safety and security.
Figure 1: Conceptual Framework of the Study

Table 1: The variables used in the study

Table: Multiple Regression Output
Decreased Susceptibility of Shigella Isolates to Azithromycin in Children in Tehran, Iran
Azithromycin (AZT) has widely been used for the treatment of shigellosis in children. Recent studies showed a high rate of decreased susceptibility to azithromycin due to different mechanisms of resistance in Shigella isolates. Accordingly, the purpose of this study was to investigate the role of azithromycin resistance mechanisms of Shigella isolates in Iran during a two-year period. In this study, we investigated the mechanisms of resistance among Shigella spp. that were isolated from children with shigellosis. The minimum inhibitory concentration (MIC) of Shigella isolates to azithromycin was determined by the agar dilution method in the presence and absence of Phe-Arg-β-naphthylamide inhibitor. The presence of 12 macrolide resistance genes was investigated for all isolates by PCR for the first time in Tehran province in Iran. Among the 120 Shigella spp., only the mph(A) gene (49.2%) was detected and other macrolide resistance genes were absent. The phenotypic activity of efflux pump was observed in 1.9% of isolates which were associated with over expression of both omp(A) and omp(W) genes. The high prevalence of the mph(A) gene among DSA isolates may indicate that azithromycin resistance has evolved as a result of antimicrobial selection pressures and inappropriate use of azithromycin.
Introduction
Shigella species are Gram-negative, nonmotile rods in the family Enterobacteriaceae that cause shigellosis. Shigellosis was the third leading cause of diarrheal death in children under the age of 5 in 2015 (40,000 deaths per year) [1][2][3], and it mostly affects children living in developing countries [4][5][6]. Shigella can be classified into four serogroups or species based on the O lipopolysaccharide antigen type: S. dysenteriae (subgroup A), S. flexneri (subgroup B), S. boydii (subgroup C), and S. sonnei (subgroup D) [5,7]. Based on epidemiological studies, S. flexneri and S. sonnei are the most common species in developing countries, but S. sonnei is the predominant species in developed countries [5,[8][9][10]. Recent studies have revealed a species shift from S. flexneri to S. sonnei in Iran, and S. sonnei has been the dominant species in most parts of the country [11][12][13][14]. Shigella spp. are highly infectious and are transmitted by the fecal-oral route or by ingestion of contaminated food or water [5,8,15]. While shigellosis is endemic in developing countries with poor water and sanitation conditions, it is usually associated with either returned travelers or men who have sex with men (MSM) in developed countries [3][4][5]. Shigellosis is transmitted in developing countries by the fecal-oral route and through contaminated food and water, whereas in developed countries it is associated with travel to disease-endemic regions and with men who have sex with men [8,10,16]. Symptoms appear abruptly after an incubation period of 12 hours to approximately 2 days and include high fever, crampy abdominal pain, and diarrhea [17]. The disease is self-limiting, but antibiotic treatment is required in children, the elderly, and people with weakened immune systems [18].
Shigella spp. have become resistant to the first-line drugs (trimethoprim-sulfamethoxazole and ampicillin), which are no longer prescribed to treat shigellosis; the emergence of multidrug resistance has challenged the treatment of the disease in children [4,5,19,20]. These drugs have been replaced by ciprofloxacin (CIP) and azithromycin (AZT) for shigellosis treatment in adults and children [4,5,21]. Owing to its oral administration and affordability, AZT is recommended by a number of international guidelines for the treatment of shigellosis in children [4]. In Iran, AZT is the most commonly prescribed antibiotic for the treatment of children suffering from shigellosis [14,22]. Reports of Shigella isolates with decreased susceptibility to azithromycin (DSA) are increasing globally, raising concerns about its usefulness as the second-line treatment for children with shigellosis [4,23]. The most common types of macrolide resistance in Enterobacteriaceae are those encoded on mobile genetic elements, such as target-site modification by methylases encoded by erm genes (erm(A), erm(B), erm(C), erm(F), erm(T), erm(X)), and inactivation of macrolides, mediated by esterases such as those encoded by ere genes (ereA and ereB) and by phosphotransferases encoded by mph genes (mph(A) and mph(B)). Additionally, the macrolide efflux pumps encoded by mef genes (mef(A) or mef(B)) and msr(A), as well as chromosomal efflux pumps (omp(A) and omp(W)), have been reported to confer resistance to macrolides [24]. To a lesser extent, mutations in the L4 (rplD) and L22 (rplV) ribosomal proteins and in 23S rRNA (rrlH) have been shown to be responsible for macrolide resistance [24,25].
Recent studies have reported a relatively high frequency of resistance to azithromycin among Shigella isolates from children with dysentery in Iran [12,14,26]. There has been no detailed investigation of the mechanisms of macrolide resistance among the DSA-Shigella isolates in Iran. In this study, we determined the azithromycin MICs for a collection of Shigella isolates recovered from children with shigellosis in Tehran, Iran. Then, we investigated the presence of macrolide resistance genes associated with mobile genetic elements and the expression levels of the outer membrane protein A and W (ompA and ompW) genes, which are related to efflux pumps, in the isolates.
Bacterial Isolates and Identification.

Shigella isolates were collected between March 2017 and September 2019 from the feces of children under 14 who were suspected to have shigellosis and were referred to the Children's Medical Center in Tehran. Initial identification was performed using microbiological and biochemical analysis, and Shigella serogroups were determined using latex agglutination serotyping (Figure 1). This study was approved by the Local Ethics Committee of Shahid Beheshti University of Medical Sciences (IR.SBMU.MSP.REC.1399.490).
Antibiotic Susceptibility Test and MICs of Azithromycin.
The antibiotic susceptibility pattern of all isolates has been described previously [12]. Briefly, antimicrobial susceptibility testing against nine antibiotics was conducted using the Kirby-Bauer disk diffusion method.

The MICs of DSA isolates were determined (over the range 2 to 512 µg/ml) using the agar dilution method according to the Clinical and Laboratory Standards Institute (CLSI) guidelines (CLSI, Performance standards for antimicrobial susceptibility testing, 29th ed., CLSI supplement M100-S29, Wayne, PA) [27].
MICs of Azithromycin in the Presence of an Efflux Pump Inhibitor.

The MICs of DSA isolates were examined after adding the efflux pump inhibitor Phe-Arg-β-naphthylamide (PAβN) (20 mg/ml) (Sigma, St. Louis, Mo., USA) to determine the impact of efflux pump activity on azithromycin resistance. A ≥4-fold reduction in the azithromycin MIC in the presence of PAβN suggested the existence of an efflux pump [27,28].
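The ≥4-fold criterion can be expressed as a simple fold-change check; a minimal sketch follows (the MIC values in the example calls are illustrative, not measured):

# Sketch of the efflux-pump criterion: a >=4-fold drop in azithromycin MIC
# in the presence of PAbN is taken as evidence of efflux-pump activity.
def efflux_pump_active(mic_azt, mic_azt_with_pabn, fold=4):
    return mic_azt / mic_azt_with_pabn >= fold

print(efflux_pump_active(128, 16))  # True: 8-fold reduction
print(efflux_pump_active(64, 32))   # False: only 2-fold reduction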
The Presence of Macrolide Resistance Genes.

Genomic DNA was extracted using the High Pure Isolation Kit (Roche, Mannheim, Germany) according to the manufacturer's instructions. The 12 macrolide resistance genes listed above (erm(A), erm(B), erm(C), erm(F), erm(T), erm(X), ere(A), ere(B), mph(A), mph(B), mef(A/B), and msr(A)) were screened for by PCR in all isolates.
Quantitative Real-Time PCR (qRT-PCR) for Evaluation of Efflux Pump Gene Expression.

Total RNA was extracted using a BioFACT™ Total RNA Prep Kit (BioFACT, South Korea) following the manufacturer's instructions. All extracted RNAs were treated with DNase I (CinnaGen Co., Iran) in order to remove remaining genomic DNA. Reverse transcription was performed using the AddScript cDNA Synthesis Kit (AddBio, South Korea) with an input of 200 ng/µl of total RNA in a final reaction volume of 20 µl under standard reverse transcription PCR conditions, following the manufacturer's instructions. The expression level of the efflux pump genes in phenotypically active isolates was determined by quantitative real-time PCR (qRT-PCR) using primers targeting the omp(A) and omp(W) genes, as described previously [30] (Table 1). All reactions were conducted in duplicate, and 16S rRNA was used as the endogenous control gene. The 2^−ΔΔCT method was used to determine the relative expression of the target genes, and a value of ≥4-fold relative to that of S. flexneri ATCC12022 was considered overexpression [31].
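For clarity, the 2^−ΔΔCT calculation can be sketched as follows (the Ct values are illustrative, not measured; 16S rRNA is the endogenous control and S. flexneri ATCC12022 is the reference strain, as above):

# Sketch of the 2^(-ddCt) relative-expression calculation used for omp(A)
# and omp(W); a fold change >= 4 versus the reference strain counts as
# overexpression. Ct values below are illustrative only.
def relative_expression(ct_target, ct_16s, ct_target_ref, ct_16s_ref):
    d_ct_sample = ct_target - ct_16s          # normalize sample to 16S rRNA
    d_ct_ref = ct_target_ref - ct_16s_ref     # normalize reference to 16S rRNA
    dd_ct = d_ct_sample - d_ct_ref
    return 2 ** (-dd_ct)

fold = relative_expression(20.1, 14.9, 23.0, 15.2)
print(fold, "overexpressed" if fold >= 4 else "not overexpressed")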
Statistical Analysis.
Pearson's chi-squared test was used to investigate the relationship between antibiotic resistance and the age group of the patients, their gender, and the species of Shigella isolated from the patients. A p value of <0.05 was considered significant. Data were analyzed using JMP, version 16 (SAS Institute Inc., 2021).
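As an illustration of such a test (the contingency table below is hypothetical, not the study's data), a species-by-age-group association could be checked as follows:

# Sketch: Pearson's chi-squared test of association between Shigella species
# and patient age group, using an illustrative 2x3 contingency table.
from scipy.stats import chi2_contingency

table = [[45, 30, 8],   # S. sonnei counts by age group <=5 / 6-10 / 11-14
         [12, 7, 2]]    # S. flexneri counts by age group
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # p > 0.05 would mean no association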
Characteristics of the Patients and Isolates.

A total of 120 Shigella isolates were collected from the fecal samples of children with shigellosis. Sixty percent of patients were male (n = 72), and 40% were female (n = 48) (Table 2). Overall, 55% of patients (n = 66) were aged 5 years or younger, 35% (n = 42) were aged 6 to 10, and 10% (n = 12) were aged 11 to 14 years. Among the 120 Shigella isolates, S. sonnei was the most common species, with 80.8% of the total isolates (n = 97), followed by S. flexneri with 17.5% (n = 21) and S. boydii with 1.7% (n = 2). The type of Shigella spp. detected in a patient did not vary with respect to the age group or gender of the patients (p > 0.05). The azithromycin MICs among the S. sonnei isolates ranged from 32 to 512 µg/ml, and the only DSA S. flexneri isolate had an MIC of 32 µg/ml. Of the 54 DSA-Shigella isolates, only one isolate (1.9%) was S. flexneri; the other 53 isolates (98.1%) were S. sonnei. All DSA isolates were resistant to trimethoprim/sulfamethoxazole. A high frequency of isolates was resistant to ampicillin (96.2%), nalidixic acid (94.4%), cefotaxime (90.7%), cefixime (90.7%), and minocycline (79.6%). The frequencies of resistance to ciprofloxacin and levofloxacin were comparatively low, at 3.7% and 16.6%, respectively. The probability of detecting DSA isolates varied with respect to the age group of the patients (p < 0.05), and children between 11 and 14 years old showed a higher prevalence of DSA isolates. However, the probability of detecting DSA-Shigella isolates did not vary with regard to the gender of the patients (p > 0.05). Over half of the isolates (54.2%) exhibited high levels of resistance to azithromycin (MICs ≥64 µg/ml). Demographic and clinical features of the pediatric patients are available in detail in Table S1.
Discussion
In the collection of 120 Shigella isolates in this study, 54 isolates (45%) were confirmed to be DSA, indicating that more consideration should be given to prescribing this drug for the treatment of children with shigellosis. Azithromycin MICs of the DSA-Shigella isolates ranged from 32 to 512 µg/ml, and 59.2% of the isolates demonstrated high levels of resistance to azithromycin (MICs ≥64 µg/ml). Previous studies have also reported a relatively high frequency of azithromycin resistance in Shigella isolates. For example, the rate of DSA was 42% in Palestine [32], 20.4% in China [31], 20% in the US [33], 13% in Australia [34], and 5% in Southeast Asia [4]. The lower rate of DSA reported from Southeast Asia has been associated with limited azithromycin usage in the region [4]. A previous study by Ezernitchi et al. [1] found that DSA-Shigella isolates were obtained mainly from children under 9 years of age. The results of our study showed that the age of the patients had a significant impact on the prevalence of DSA isolates among children with shigellosis. The 11- to 14-year-old patients were more likely to harbor DSA-Shigella, even though this age group represented only 10% of the cases of shigellosis. Previous studies have established the importance of acquired mobile genetic elements in conferring resistance to macrolides in Shigella and other Enterobacteriaceae [24,25,31]. We investigated the presence of 12 mobile genetic elements associated with azithromycin resistance, and 49.2% (59/120) of the isolates were positive for the mph(A) gene. However, the other mobile genetic elements associated with azithromycin resistance were not detected in our Shigella isolates. Five isolates (8.5%) were found to carry the mph(A) gene but were susceptible to azithromycin (MICs <16).
This heterogeneity could be explained by differential expression levels in individual cells due to variation in mph(A) copy numbers, leading to differences in azithromycin resistance levels [24]. Overall, this finding is consistent with previous studies, which reported the role of the mph(A) gene as the principal mechanism of azithromycin resistance in Shigella isolates [4,29]. For example, Zhang et al. [31] found that 55% of DSA-Shigella isolates were mph(A) positive, and no other resistance gene was detected. Liu et al. [29] reported that 57.8% and 40.7% of S. flexneri and S. sonnei isolates, respectively, carried the mph(A) gene, but other azithromycin resistance genes were not detected. A very low frequency (0.6%) of DSA-Shigella from Southeast Asia were positive for the erm(B) gene [4]. Likewise, erm(B)-associated azithromycin resistance was detected in 3.4% of the E. coli isolates with DSA in Peru [24].
PAβN is an efflux pump inhibitor, which competes with macrolides for their specific binding site. The role of PAβN-inhibitable efflux pumps in azithromycin resistance has been demonstrated in Shigella spp. and E. coli [24,30]. In this study, one S. sonnei isolate (1.9%) demonstrated azithromycin resistance associated with efflux pump activity. This isolate contained the omp(A), omp(W), and mph(A) genes. Several studies have reported that mutations in the ribosomal proteins L4 (rplD) and L22 (rplV) and in 23S rRNA (rrlH) can confer macrolide resistance [24,35]. Unfortunately, we did not determine the nucleotide sequence changes of the specific regions of these three genes, and we could not determine the azithromycin resistance mechanism in one S. sonnei isolate. Further studies are required to understand the possible additional mechanisms responsible for DSA in Shigella spp. The present study demonstrated that the plasmid-mediated mph(A) gene is the most common macrolide resistance gene in Shigella isolates collected from children with shigellosis in Tehran, Iran. The high prevalence of the mph(A) gene among DSA isolates may indicate that azithromycin resistance has evolved as a result of antimicrobial selection pressures and inappropriate use of azithromycin. The plasmid-mediated mph(A) gene can spread quickly among different members of the Enterobacteriaceae and yield either the same or different strains with DSA [36].
Conclusion
Contrary to most studies, which have shown that efflux pumps play no role in azithromycin resistance, our study showed that one of our DSA isolates had increased omp(A) and omp(W) expression levels; consequently, efflux pumps can play a role in resistance.
Data Availability
All the data generated or analyzed during this study were included in this article.
Conflicts of Interest
The authors declare that there are no conflicts of interest.
Some new chapters of the long history of SU(3)
The SU(3) symmetry of nuclear structure has its 60th birthday this year. In this contribution we recall some of its historical aspects, including several generalizations; furthermore, we discuss a few new features of this symmetry.
Introduction
In 1958 Elliott published two papers [1] on the application of the SU(3) group for the description of nuclear spectra. His model turned out to be very successful, and in addition it opened the way for many subsequent algebraic structure models.
In this contribution we first review some basic features of this description and then mention a few extensions of the original concept. We also show some aspects of the SU(3) model which have not been discussed much, as follows: i) spontaneous symmetry-breaking in the Elliott model; ii) the first steps towards the development of a local gauge-invariant theory of nuclear collective motion; iii) the multichannel dynamical symmetry (MUSY), which extends the SU(3) connection (from 1958) between the shell, collective and cluster models to the multi-major-shell problem, and in addition seems to have considerable predictive power.
Some interesting moments of the past

2.1 The birth of SU(3)
Elliott considered a Hamiltonian consisting of a harmonic oscillator (HO) term and the sum of the quadrupole-quadrupole two-nucleon interactions:

H = H_HO + α Q·Q.    (1)

This Hamiltonian is an SU(3) symmetry-preserving operator; therefore, its eigenvectors have good SU(3) quantum numbers (λ, µ), in addition to the angular momentum (L). The mathematical reason is that the Hamiltonian can be expressed in terms of the invariant operators of a single group-chain,

U(3) ⊃ SU(3) ⊃ SO(3),    (2)

as follows:

H = ℏω n + α C^(2)_SU(3) + δ C^(2)_SO(3),    (3)

where C refers to the Casimir-invariant of the group indicated as a subscript (and of the order indicated as a superscript). The eigenvalues of this Hamiltonian are

E = nℏω + α(λ² + µ² + λµ + 3λ + 3µ) + δ L(L + 1).
Here ℏω is the energy of the HO excitation quantum, and n is the number of quanta. Within this framework Elliott described the quadrupole deformation and the collective rotation in terms of the spherical shell model. The SU(3) symmetry determines the quadrupole shape: λ and µ are uniquely related to the β and γ shape parameters (for a detailed discussion see Ref. [2]). A rotational band consists of shell-model states of a well-defined SU(3) symmetry, involving different L values. This was the first connection between the two fundamental structure models of nuclei: the shell model and the liquid drop model.
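As a numerical illustration of the eigenvalue formula above (a sketch with arbitrary parameter values, not fitted to any nucleus), the following evaluates the energies of the ground band of a (λ, µ) = (8, 0) irrep, for which the allowed angular momenta are L = 0, 2, ..., 8:

# Sketch: Elliott-model energies E = n*hw + alpha*(lam^2 + mu^2 + lam*mu
# + 3*lam + 3*mu) + delta*L*(L+1); parameter values are illustrative only.
def elliott_energy(n, lam, mu, L, hw=1.0, alpha=-0.01, delta=0.02):
    c2_su3 = lam**2 + mu**2 + lam * mu + 3 * lam + 3 * mu  # SU(3) Casimir eigenvalue
    return n * hw + alpha * c2_su3 + delta * L * (L + 1)

for L in range(0, 10, 2):  # L = 0, 2, 4, 6, 8 for the (8,0) ground band
    print(L, elliott_energy(n=0, lam=8, mu=0, L=L))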
Elliott's SU(3) model works very well for the description of the light nuclei.
In the same year Wildermuth and Kanellopoulos published a convenient formulation of the cluster model [3], which also presented a transparent relation between the shell and cluster models. In particular, they showed that in the harmonic oscillator approximation the Hamiltonians of the two models can be rewritten into each other. Soon afterwards Bayman and Bohr [4] reformulated this relation in terms of the SU(3) symmetry; therefore, by the end of 1958 the specific cluster states (just like the specific quadrupole bands) could be selected from the sea of shell-model states by their SU(3) symmetry. The cluster-shell connection, established in Ref. [3] for the harmonic oscillator interaction, is valid also for more general Hamiltonians. In particular, energy operators with the dynamical symmetry of Eq. (2) keep these relations intact [5]. Therefore, the basic connection among the three fundamental (shell, collective and cluster) models can be formulated (for a single-shell problem) by saying that their common intersection is given by the SU(3) dynamical symmetry of the group-chain, Eq. (2).
Extensions
Soon afterwards it turned out that the applicability of the Elliott model is rather limited: the SU(3) symmetry breaks down beyond the sd shell (as well as with increasing energy). But it did such a beautiful job where it was working that much effort has been concentrated on exporting the nice features of its algebraic methods.
New models were invented, which are direct or indirect extensions of the Elliott model.
One important direction of extension was the incorporation of many major shells instead of the single shell of the original model [1]. The symplectic shell model [6] contains (any number of) 2ℏω major-shell excitations (having the same parity), and in this way it is able to describe the electromagnetic transitions without introducing effective charges. Its simplified version is the contracted symplectic model [7], which has a simpler mathematical structure (e.g. compact groups instead of the noncompact symplectic one) and can be considered as a multi-shell microscopic background of the collective model. The cluster model also involves many major-shell excitations, and when the internal structure of the cluster is described with the Elliott model, it can also be considered as an extension of it. It comes in two different versions. In the fully microscopic and semialgebraic formulation [8], only the basis states carry group symmetries, and effective nucleon-nucleon interactions are applied. On the other hand, in the semimicroscopic algebraic cluster model [9] both the basis states and the physical operators carry group symmetries (therefore, group-theoretical methods are applied in the calculations), but phenomenological interactions are used, which are expressed in terms of group generators. (The model spaces of the two approaches are identical.) It is very remarkable that these three extensions of the Elliott model, i.e. the symplectic shell model, the contracted symplectic (collective) model and the cluster model, have basis states characterized by the group-chain [10]

U_s(3) ⊗ U_x(3) ⊃ U(3) ⊃ SU(3) ⊃ SO(3).

Here U_s(3) stands for the shell symmetry in the case of the symplectic and contracted symplectic models, and for the shell structure of the cluster in the cluster model. U_x(3) describes the major-shell excitations in the symplectic and contracted symplectic models in 2ℏω steps, and the relative motion in the cluster model in 1ℏω steps. Another important direction of the extension of the SU(3) symmetry is along the axis of the mass number. Different approximate (or partial) symmetries have been invented, as illustrated by Fig. 1. Some of them are based on the truncation and/or rearrangement of the harmonic oscillator shell-model scheme, like the pseudo-SU(3) [11], the quasi-SU(3) [12], or the proxy-SU(3) schemes [13]. Others are based on some general symmetry-breaking mechanism, like e.g. the quasidynamical symmetry [14], which is applicable also in the case of other models and other symmetries.
A third route is defined by models which apply the algebraic method based on model assumptions different from those of the shell model. A very successful example is the interacting boson model [15], in which the basic building blocks are nucleon pairs. A further example is the quartet approach (of two protons and two neutrons), to be mentioned below.
Based on the symmetry-adapted no-core shell model approach [16] the SU(3) has recently been exported also to the territory of the very light nuclei, i.e. to the province of the ab initio methods and QCD-inspired interactions.
A further remarkable observation is that the exact SU(3) symmetry of the harmonic oscillator spherical shell model recovers for the deformed harmonic oscillator of commensurable axes [17], e.g. for the 2:1:1 superdeformed and the 3:1:1 hyperdeformed shapes. For the realistic Nilsson Hamiltonian the same kind of stability can be seen from systematic studies based on the quasidynamical symmetry [18].
An interesting omission
An interesting moment in the history of the Elliott model is related to spontaneous symmetry-breaking. It is interesting partly because, to the best of the author's knowledge, such a discussion never appeared.
It is a frequently cited statement that nuclear deformation is a result of spontaneous symmetry-breaking. It has been discussed in detail within different models (see, e.g., Refs. [15,19,20]), but not within the Elliott model, though the latter provides a simple and transparent framework for such an analysis.
A symmetry is spontaneously broken, if the Hamiltonian is symmetric, but its eigenstate is not invariant. Mathematically it means that the eigenvector does not transform according to the identity representation of the symmetry group. Usually the symmetry-breaking state does not even transform according to any single irrep of the symmetry group.
The deformed ground state of an atomic nucleus with spherically symmetric Hamiltonian is a well-known example for the spontaneous breaking of a symmetry.
In the Hamiltonian, Eq. (3), of the Elliott model the last term, proportional to $L^2$, is the rotational, i.e. collective part. The first part, built from the invariant (Casimir) operators of the U(3) and SU(3) groups, determines the bandheads, while the rotational term splits (and shifts) the bands. In this model the Hamiltonian is completely separated into an intrinsic and a collective part: $H = H_{\mathrm{intr}} + H_{\mathrm{coll}}$. The intrinsic Hamiltonian $H_{\mathrm{intr}}$ is invariant under rotation; the Casimir operators of the U(3) and SU(3) groups commute with the operators of the angular momentum (the latter are generators of these groups, too, and the invariant operators commute with all the generators). Nevertheless, most of their eigenstates, e.g. the ground states of many nuclei, have deformed shapes. The only exceptions are the SU(3)-scalar states with quantum numbers $\lambda=0$ and $\mu=0$, which are spherically symmetric.
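To make the separation concrete, a schematic dynamical-symmetry Hamiltonian of this type and its analytic spectrum can be written as follows; the normalization and the fit coefficients $a$, $b$ are illustrative assumptions, not quoted from Eq. (3):

```latex
% Schematic U(3) > SU(3) > SO(3) dynamical-symmetry Hamiltonian (illustration)
\begin{align}
  H &= H_{\mathrm{intr}} + H_{\mathrm{coll}},\qquad
  H_{\mathrm{intr}} = \hbar\omega\,\hat n + a\,\hat C_2^{SU(3)},\qquad
  H_{\mathrm{coll}} = b\,\hat L^2,\\
  E(n,\lambda,\mu,L) &= \hbar\omega\, n
   + a\left[\lambda^2+\mu^2+\lambda\mu+3(\lambda+\mu)\right]
   + b\,L(L+1).
\end{align}
```

The first two terms fix the bandhead (they depend only on the U(3) and SU(3) labels), while the $L(L+1)$ term generates the rotational splitting; for $\lambda=\mu=0$ only $L=0$ occurs, which is the spherically symmetric exception mentioned above.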
Therefore, the SO(3) symmetry of the Schrödinger equation of the intrinsic system is spontaneously broken in many cases: a spherically symmetric Hamiltonian has a deformed eigenstate.
An interesting promise of the future: gauge theory of nuclear collectivity
In spite of its great success, the collective model cannot describe all the collective features of nuclei. In particular, the moment of inertia obtained from this model is too small, approximately a factor of five smaller than the experimental value. The reason is that the model describes the motion of an irrotational flow; it contains no vorticity. But real nuclei are not a liquid of irrotational flow: they correspond to an intermediate situation between the limiting cases of irrotational flow and the rigid body. From the microscopic viewpoint the situation is well understood. In the contracted symplectic model (which is the multi-major-shell microscopic picture behind the collective model), the vorticity appears due to the coupling to the valence-shell structure: $U_s(3)\otimes U_x(3)\supset U(3)$. When there is no coupling to the valence-shell structure (the $U_s(3)$-scalar case), the irrotational flow is obtained.
One can ask the question: what about the vorticity degree of freedom in the liquid drop model? Recently Rosensteel and Sparks made an interesting proposal [21]. They suggest that the vorticity can be included in the collective model when it is transformed into a local gauge-invariant theory.
In gauge theories the invariance of the eigenvalue equation appears due to applying two transformations simultaneously. Let us illustrate the situation with the simplest and best-known example: electromagnetism as a gauge theory. The wave function undergoes a space-dependent (local) gauge transformation, $\psi \to e^{i\alpha(x)}\psi$, and the derivative (the operator) is substituted as $\partial \to D$, where $D = \partial + \frac{iq}{\hbar c}A$. The operator $D$ is called the covariant derivative, and $q$ is the electric charge. The term arising from $A$ is cancelled through the action of $\partial$ on the phase factor. (Under the global phase transformation of the wave function, $\psi \to e^{i\alpha}\psi$ with constant $\alpha$, the equation is invariant without any changes in the operators, due to the fact that the phase factor slips through the differentiation: $\partial(e^{i\alpha}\psi) = e^{i\alpha}(\partial\psi)$. This is called global gauge invariance.) We can say that by changing the global gauge invariance into a local one, the vector potential (gauge field) $A$ is introduced. Electromagnetism appears as a consequence of the requirement of local gauge invariance. The symmetry group is the Abelian U(1), a rank-1 group with a single generator: the electric charge [22]. In the Yang-Mills theories higher-rank symmetry groups are applied [23].
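The cancellation mentioned above can be made explicit; a one-line check (standard textbook material, with the conventions assumed as written here) is:

```latex
% Worked check of local U(1) gauge invariance, with the conventions above
\begin{align}
  \psi' &= e^{i\alpha(x)}\psi, \qquad
  A' = A - \frac{\hbar c}{q}\,\partial\alpha, \qquad
  D' = \partial + \frac{iq}{\hbar c}A',\\
  D'\psi' &= e^{i\alpha}\Big(\partial\psi + i(\partial\alpha)\psi
            + \frac{iq}{\hbar c}A\psi - i(\partial\alpha)\psi\Big)
          = e^{i\alpha}\,D\psi .
\end{align}
```

That is, when the gauge field is shifted together with the local phase of the wave function, $D\psi$ transforms with the same phase factor as $\psi$ itself, and the eigenvalue equation keeps its form.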
In order to make the nuclear collective model gauge-invariant, two essential ingredients are needed: the space on which the local transformation of the wave function depends, and the transformation of the operator, i.e. the covariant derivative. (In the gauge theories of the fundamental interactions the space is the 4-dimensional spacetime.) In Ref. [21] the authors show that in the case of the liquid drop model of nuclei the proper space is the 6-dimensional manifold defined by the nuclear orientation and the quadrupole and monopole deformations, while in the covariant derivative the angular momentum $I_\alpha$ is substituted by the sum of the angular momentum and a second term. The latter contains the circulation $C_\alpha$, which has an SO(3) algebraic structure, i.e. from the mathematical viewpoint it is isomorphic to the rotational group. The coupling to the circulation introduces vorticity into the liquid drop model. (The trivial connection $E_\alpha = 0$ gives the rigid-body moment of inertia.) As a result, the collective model with local gauge invariance gives a moment of inertia in agreement with experiment.
As for the future of the gauge-invariant collective model, the authors make an interesting promise: by applying an SU(3) gauge instead of the SO(3) one, additional degrees of freedom are included, resulting in the mixing of different circulation values within the yrast band.
An interesting moment of the present: multichannel dynamical symmetry (MUSY)
As mentioned above, the connection between the shell, collective and cluster models for a single-shell problem is provided by the U(3)⊃SU(3)⊃SO(3) dynamical symmetry. However, a more realistic description of structure problems requires a multi-major-shell approach. The symplectic (shell), the contracted symplectic (collective) and the (semimicroscopic algebraic) cluster models not only offer this possibility, but they do so based on a symmetry-adapted formalism, with sets of basis states characterized by the group-chain, Eq. (6). Therefore, the common intersection of the three fundamental structure models of the multi-major-shell problem is again a dynamical symmetry [10]. In particular, the basis states are defined by the representation labels of the groups in the chain, Eq. (6), and the interactions are provided by the last part of the chain: U(3)⊃SU(3)⊃SO(3). This symmetry was first discovered between different cluster configurations and is called multichannel dynamical symmetry (where "channel" refers to the reaction channel that defines the cluster configuration) [24]. It was invented by requiring plausible relations between the eigenvalues of the Hamiltonians. More specifically: when the (SU(3) basis) wave functions of two different configurations have 100% overlap, i.e. they are identical (due to the effect of the antisymmetrization), then it is natural to require that their energies should be the same.
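Written out schematically (the label set $\alpha$ is an assumption of notation, not taken verbatim from Refs. [10,24]), the requirement reads:

```latex
% MUSY consistency requirement, schematic notation
\begin{equation}
  \big|\langle \Psi_A(\alpha)\,|\,\Psi_B(\alpha)\rangle\big| = 1
  \quad\Longrightarrow\quad
  E_A(\alpha) = E_B(\alpha),
  \qquad \alpha = \{\,n,(\lambda,\mu),K,L,\dots\},
\end{equation}
```

i.e. whenever antisymmetrization makes the basis states of channels A and B identical, the Hamiltonians of the two channels, built from the common U(3)⊃SU(3)⊃SO(3) tail of the chain, must assign them the same energy.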
The MUSY can incorporate the shell model configuration as well, as a special 1-cluster state. The relation has been discussed in detail in Ref. [25] in terms of the quartet model, which is a symmetry-truncated version of the no-core shell model. In particular, a quartet consists of two protons and two neutrons with {4} permutational, and {1, 1, 1, 1} Wigner spin-isospin symmetry [26].
The MUSY is a composite symmetry of a composite system. The system is composite because it contains two or more different configurations. The symmetry is composite in the following sense. Each configuration has its own U(3) dynamical symmetry, and there is a further symmetry which connects them. This latter one acts in the pseudo-space of particle indices, and transforms one configuration to another one.
The formulation of the MUSY on the basis of relations between the energy eigenvalues is heuristic and general; it can be applied to any configurations. However, it does not provide the exact mathematical details of the MUSY, e.g. the symmetry transformations between different configurations. For the simplest case, the two-channel symmetry of binary cluster configurations, the formalism has been worked out in detail in Ref. [27]. It turns out that in this case the composite two-channel symmetry appears as a projection of a simple U(3) dynamical symmetry of an underlying three-cluster configuration.
The multichannel dynamical symmetry has great predictive power. We illustrate the situation with the example of the $^{28}$Si nucleus. This nucleus has a well-established band structure in the low-energy region and, in addition, good-resolution detailed spectra are known in some high-lying energy windows, determined by different reactions, e.g. $^{16}$O+$^{12}$C and $^{24}$Mg+$^{4}$He. The U(3) dynamical symmetry of the semimicroscopic algebraic quartet model has been applied for the description of the low-energy part [25]. Thus the (three) parameters of the Hamiltonian and the (single) parameter of the E2 transitions have been determined from a fit to the low-energy spectrum. Then a parameter- and ambiguity-free prediction can be made for the high-lying cluster spectra of different configurations. Figure 2 shows the situation for the $^{16}$O+$^{12}$C case, in comparison with the spectrum of molecular resonances found in $^{16}$O+$^{12}$C experiments.
Another clusterization of considerable experimental interest is $^{24}$Mg+$^{4}$He. In a recent scattering study (see, e.g., Ref. [28]) the high-lying $0^+$ states were investigated (in order to identify the bandhead of the newly found superdeformed band [29]). This spectrum can also be predicted from the MUSY (Fig. 3), and its agreement with the experimental findings is fairly good (Fig. 4).
Summary
During its 60-year history the SU(3) symmetry has done a nice job in nuclear structure studies. It described the spectra of light nuclei, established a connection between the shell, collective and cluster models, showed the way for many later algebraic structure models, and served as a starting point for extensions in different directions.
In addition to its well-known features, here we discussed very briefly some aspects which are not so frequently cited. For example, the Elliott model nicely illustrates spontaneous symmetry-breaking. The local SU(3) may serve as a gauge symmetry of a Yang-Mills theory of nuclear collectivity. The multichannel SU(3) symmetry connects the fundamental structure models of the many-major-shell problem and seems to have considerable predictive power. In particular, it may describe in a unified way the detailed spectra of different energy windows and configurations, defined by the nuclear reactions.
Improving Glyphosate Oxidation Activity of Glycine Oxidase from Bacillus cereus by Directed Evolution
Glyphosate, a broad-spectrum herbicide widely used in agriculture all over the world, inhibits 5-enolpyruvylshikimate-3-phosphate synthase in the shikimate pathway, and glycine oxidase (GO) has been reported to catalyze the oxidative deamination of various amines and cleave the C-N bond in glyphosate. Here, in an effort to improve the catalytic activity of the glycine oxidase cloned from a glyphosate-degrading marine strain of Bacillus cereus (BceGO), we used a bacteriophage T7 lysis-based method for high-throughput screening of oxidase activity and engineered the gene encoding BceGO by directed evolution. Six mutants exhibiting enhanced activity toward glyphosate were screened from two rounds of error-prone PCR combined with site-directed mutagenesis, and the beneficial mutations of the six evolved variants were recombined by DNA shuffling. Four recombinants were generated and, when compared with the wild-type BceGO, the most active mutant B3S1 showed the highest activity, exhibiting a 160-fold increase in substrate affinity and a 326-fold enhancement in catalytic efficiency against glyphosate, with little difference between their pH and temperature stabilities. The role of these mutations was explored through structure modeling and molecular docking, revealing that the Arg51 mutation is near the active site and could be an important residue contributing to the stabilization of glyphosate binding, while the role of the remaining mutations is unclear. These results provide insight into the application of directed evolution in optimizing glycine oxidase function and have laid a foundation for the development of glyphosate-tolerant crops.
Glyphosate (N-phosphonomethylglycine) has been extensively applied worldwide as a broad-spectrum herbicide since 1974 [1]. Glyphosate inhibits 5-enolpyruvylshikimate-3-phosphate synthase (EPSPS) of the shikimate pathway [2], which catalyzes the transfer of the enolpyruvyl moiety of phosphoenolpyruvate (PEP) to shikimate-3-phosphate (S3P). The mechanism by which glyphosate inhibits EPSPS in a reversible reaction indicates that glyphosate acts as a competitive inhibitor of PEP, occupying the PEP binding site of EPSPS [3]. Currently, three glyphosate-resistance strategies have been used in transgenic crops: (i) the overproduction of EPSPS in plants, as in a series of tolerant cell lines of Nicotiana tabacum [4] and Petunia hybrida [5,6], or of a foreign EPSPS from bacteria with high glyphosate tolerance, for instance through the expression of Agrobacterium sp. strain CP4 EPSPS [7-9] and of a mutant EPSPS from Ochrobactrum anthropi in transgenic plants [10]; (ii) N-acetylation of glyphosate by an evolved glyphosate N-acetyltransferase (GAT) from Bacillus licheniformis, conferring glyphosate resistance to transgenic plants of Arabidopsis, tobacco, and maize by introducing the gat genes into them [11,12]; and (iii) expression of a glyphosate-detoxifying enzyme that metabolizes glyphosate in transgenic plants, such as Monsanto's patented glyphosate oxidoreductase (GOX) [13] and the evolved glycine oxidase from Bacillus subtilis (GO, EC 1.4.3.19) [14,15]. GOX and GO can both cleave the carbon-nitrogen bond in glyphosate and yield aminomethylphosphonic acid (AMPA), which is considered to be much less phytotoxic than glyphosate for most plant species [16]. Additionally, the mode of action of GOX and GO can endow transgenic crops with glyphosate resistance and is predicted to reduce herbicide (glyphosate) residues [17].
Glycine oxidase (GO) is a FAD-dependent flavoprotein that catalyzes the oxidative deamination of glycine, short-chain D-amino acids (e.g. D-alanine, D-proline, D-valine, etc.) and primary or secondary amines to yield the corresponding α-keto acid and hydrogen peroxide. GO is also the first enzyme playing a role in the biosynthesis of the thiazole ring of thiamine pyrophosphate [18]. The three-dimensional structure of GO from Bacillus subtilis (BsuGO) is known and has provided insights into its active site as well as the mode of interaction with its substrates [18,19]. Despite showing only a modest sequence similarity with sarcosine oxidase (MSOX, EC 1.5.3.1) [20], D-amino acid oxidase (DAAO, EC 1.4.3.3) and D-aspartate oxidase (DASPO, EC 1.4.3.1) [21], GO shares substrate specificity with these flavooxidases and seems to have a substrate preference for amines of a small size, such as sarcosine and glycine. Based on high-resolution three-dimensional structures of BsuGO (PDB: 1RYI), Pollegioni et al. used rational design and site-saturation mutagenesis to improve the ability of BsuGO to oxidize glyphosate by modulating the substrate preference exerted at the entrance of the active site, and obtained the evolved variant G51S/A54R/H244A with a 175-fold decrease in K_m,app and a 210-fold increase in catalytic efficiency (k_cat/K_m) against glyphosate over the wild-type GO [14].
In the present study, we used glyphosate as the sole nitrogen source and isolated a glyphosate-degrading strain of Bacillus cereus, HYC-7, and, following the report by Pollegioni et al. [14], cloned and characterized the wild-type BceGO, which, however, showed only a low oxidase activity on glyphosate. Then, in the absence of accurate structural information on BceGO, we utilized directed evolution to engineer its substrate preference and activity on glyphosate. Molecular diversity was generated by two rounds of error-prone PCR random mutagenesis, and the beneficial mutations were combined and recombined by site-directed mutagenesis and DNA shuffling, together with a bacteriophage T7 lysis-based method for high-throughput screening of oxidase activity. Finally, thirteen mutants with higher oxidase activity on glyphosate than the wild-type BceGO were obtained, and mutant B3S1 was found to possess the maximum activity, with a 160-fold increase in substrate affinity and a 326-fold enhancement in catalytic efficiency.
Chemicals, strains, plasmids and culture conditions
Glyphosate, glycine, sarcosine, D-alanine, o-dianisidine dihydrochloride, horseradish peroxidase and FAD were obtained from Sigma (U.S.A.). The strains, bacteriophage and plasmids used in this study are listed in Table 1. Bacillus cereus HYC-7 was isolated from marine sediments and supplied by the Marine Culture Collection of China; it has been deposited in the China Center for Type Culture Collection (CCTCC AB 2013009). E. coli was cultured in Luria-Bertani medium at 37°C, and the bacteriophage T7 was grown as described previously [22].
Construction of the BceGO mutant libraries
(i): Random mutagenesis. Error-prone PCR was performed as previously described, with some modifications [23]. The sequences of the two primers used in random mutagenesis were BceGO-F (5'-CGCGGATCCATGTGTAAGAAGTATGATGTAGCGAT-3') and BceGO-R (5'-CCGCTCGAGCTAAACTCTCCTAGAAAGCAATGAAT-3'); the BamHI and XhoI sites are in italic and underlined. The amplification mixture (100 µl) was composed of 20 nM primers, 0.2 mM dGTP and dCTP, 0.1 mM dATP and dTTP, 2 U Taq DNA polymerase and Taq buffer containing 5 mM MgCl2 and 0.5 mM MnCl2. The PCR was run in a thermal cycler (Bio-Rad Laboratories Inc.) for 30 cycles of (94°C for 30 sec, 57°C for 30 sec, and 72°C for 70 sec). PCR products were purified, digested with BamHI and XhoI, cloned into pGEX-6P-1, and transformed into E. coli DH5α to create the random mutant library. In the first round of PCR-based random mutagenesis, pGEX-GO was used as the template; a new mutant with improved catalytic efficiency against glyphosate, obtained by combining the beneficial mutation sites of the variants, was then used as the starting point for the second round of random mutagenesis.
(ii): Site-directed mutagenesis. The two single-point BceGO variants (G51R and D60G) were combined by a rapid PCR-based site-directed mutagenesis [24] using mutant D60G as the template. The mutagenesis primers were G51R-F (5'-GCTGCTGGTTTACTTCGTGTTCAGGC-3') and G51R-R (5'-ACGAAGTAAACCAGCAGCTGCTTTTG-3'), designed according to mutant G51R; the mutated positions are underlined. The two-point mutant was designated B1R and validated by DNA sequencing.
(iii): DNA shuffling. DNA shuffling was performed following the procedure described by Stemmer, with some modifications [25,26]. The variants with improved oxidase activity on glyphosate selected from the second-round random mutant library were used as DNA shuffling templates and amplified with vector primers (6P-1R: 5'-GGCAGATCGTCAGTCAGTCACG-3'). The purified PCR products were mixed in equal amounts and then fragmented by ultrasonic treatment at 0°C for 40 min [27,28]. Fragments between 100~200 bp were purified using a gel purification column (Axygen) and reassembled by primerless PCR, performed in a thermal cycler (Bio-Rad Laboratories Inc.) as follows: 94°C for 3 min; 60 cycles of (30 sec 94°C, 30 sec 40°C, 20 sec + 1 sec per cycle 72°C); and 72°C for 10 min. After that, 5 µL of the unpurified reassembly reaction mixture was used as the template to amplify the full-length sequence with primers BceGO-F/BceGO-R. The PCR amplification products, verified as a single band of 1.1 kb, were purified with a DNA gel purification kit, digested with BamHI and XhoI, and purified again. The purified products were ligated into the expression vector pGEX-6P-1 digested at the BamHI and XhoI sites, and the resulting constructs were transformed into E. coli DH5α for screening.
Screening of the evolved BceGO variants
A rapid and sensitive enzyme-coupled colorimetric assay was used for high-throughput screening of evolved BceGO mutants active toward glyphosate from the mutant library. The library of BceGO mutants was expressed in 96 deep-well plates (containing 0.6 ml Luria-Bertani medium) and transferred onto Luria-Bertani agar plates as corresponding copies, followed by overnight growth at 37°C. When the cultures grew to saturation, both IPTG (at a final concentration of 0.1 mM) and the bacteriophage T7 (above 100 particles per cell) [22] were added to the 96 deep-well plates to synchronize the induction of the recombinant mutants with lysis of the host E. coli DH5α, at 37°C with shaking for 6 h.
To screen for improved mutants on glyphosate, the oxidase activity of BceGO mutants was assessed as follows: an aliquot of 159 µL of lysed cell extract was transferred to the corresponding well of a microtiter plate, followed by the addition of 20 µL of 50 mM glyphosate (at a decreasing substrate concentration gradient in sequential rounds of screening), 20 µL of 0.32 mg/mL o-dianisidine dihydrochloride, and 1 µL of 5 unit/mL horseradish peroxidase in 50 mM disodium pyrophosphate buffer at pH 8.5, and an overnight incubation at 25°C. The absorbance change at 450 nm for each well in the microtiter plates was measured and compared with the controls (harboring wild-type BceGO or containing the empty vector pGEX-6P-1) [14]. Mutants that outperformed the wild-type were selected for further activity analysis.
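A minimal sketch of how such plate readings could be scored programmatically is given below; the plate layout, control-well positions and the 1.5-fold cutoff are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def pick_hits(a450, wt_wells, blank_wells, fold=1.5):
    """Flag wells whose background-corrected A450 change exceeds the
    wild-type signal by a chosen factor.

    a450        : 2-D array of endpoint A450 readings for one 96-well plate
    wt_wells    : flat indices of wells carrying wild-type BceGO
    blank_wells : flat indices of wells with the empty pGEX-6P-1 vector
    fold        : assumed cutoff relative to the wild-type signal
    """
    flat = a450.ravel()
    background = flat[blank_wells].mean()           # empty-vector baseline
    wt_signal = flat[wt_wells].mean() - background  # wild-type oxidase signal
    corrected = flat - background
    return np.flatnonzero(corrected > fold * wt_signal)

# Illustrative plate: noise around 0.15, one wild-type well, one blank,
# and one clearly improved mutant (synthetic numbers only).
rng = np.random.default_rng(0)
plate = rng.normal(0.15, 0.01, size=(8, 12))
plate[0, 11] = 0.30   # wild-type control  -> flat index 11
plate[1, 11] = 0.15   # empty-vector blank -> flat index 23
plate[0, 0] = 0.60    # putative improved mutant -> flat index 0
print("candidate wells:", pick_hits(plate, wt_wells=[11], blank_wells=[23]))
```

In practice the same comparison would simply be repeated for each plate and each round of screening, with the glyphosate concentration lowered round by round as described above.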
Enzyme expression and purification
The pGEX-6P-1 expression plasmids carrying wild-type BceGO and the variants were transformed into E. coli BL21 (DE3). The recombinant strains were grown at 37 °C in Luria-Bertani medium containing 100 µg/mL ampicillin until the exponential phase, followed by the addition of IPTG to a final concentration of 0.1 mM and induction at 22 °C for 8 h, and the cells were then collected by centrifugation. The cell pellets were suspended in 50 mM disodium pyrophosphate buffer at pH 8.5 and lysed with a high-pressure homogenizer (NiroSoavi, Italy), followed by the addition of 5 µM FAD, 2 mM 2-mercaptoethanol, and 50 U deoxyribonuclease I to the supernatant of the lysate. Subsequently, the cellular debris was removed by centrifugation and the supernatant was incubated for 1 h at 4 °C with 1 mL GST•Bind Resin. After the column material was washed with 50 mM disodium pyrophosphate buffer at pH 7.5, the recombinant GO proteins were released by treatment with 200 U PreScission protease at 4 °C overnight. Finally, the purified GO and variants were eluted from the beads with 50 mM disodium pyrophosphate buffer at pH 8.5 containing 10% glycerol. The purity of the GOs was tested by SDS-PAGE with Coomassie brilliant blue staining, and their concentration was determined with the Bradford assay [29].
Enzyme characterization and kinetic parameters
The activities of wild-type BceGO and the mutants were measured spectrophotometrically via determination of the H2O2 produced in the BceGO reaction, using an enzyme-coupled assay with horseradish peroxidase and o-dianisidine dihydrochloride [30]. One unit of GO corresponds to the amount of enzyme that converts 1 µmol of substrate (glycine or oxygen), or produces 1 µmol of hydrogen peroxide, per minute at 25°C. The specific activities of wild-type BceGO and the mutants were assayed with four different substrates in 200 µL reaction mixtures in a 96-well microtiter plate: each well contained 20 µL of 100 mM substrate solution, 20 µL of 0.32 mg/ml o-dianisidine dihydrochloride, 1 µL of 5 unit/ml horseradish peroxidase in disodium pyrophosphate buffer (50 mM, pH 8.5), and a fixed amount of enzyme, topped up with 50 mM disodium pyrophosphate buffer at pH 8.5 to a final volume of 200 µL. After the mixture was incubated at 25°C for 60 min, the change in absorbance at 450 nm was recorded using a Thermo Multiskan Spectrum plate reader.
The kinetic parameters of wild-type BceGO and the variants were measured using a fixed amount of enzyme and the four substrates at different concentrations (glycine, 0~300 mM; glyphosate, 0~600 mM; sarcosine, 0~300 mM; D-alanine, 0~600 mM). Activity was assayed using the H2O2 produced in the GO reaction, as reported [30]. The values of V_max and K_m were calculated using GraphPad Prism version 5.00 for Windows (GraphPad Software, San Diego, CA), and the V_max values were converted to k_cat values for normalization of the kinetic parameters.
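As a sketch of the same fit outside GraphPad, nonlinear least squares on the Michaelis-Menten equation can be run as below; the data points and the molar mass are synthetic placeholders, not values from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """v = Vmax * [S] / (Km + [S])"""
    return vmax * s / (km + s)

# Synthetic demonstration data (NOT measurements from this study):
# rates generated from Vmax = 2.0 U/mg, Km = 0.5 mM plus 3% noise.
s = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.5, 5.0, 10.0])  # substrate, mM
rng = np.random.default_rng(1)
v = michaelis_menten(s, 2.0, 0.5) * rng.normal(1.0, 0.03, s.size)

(vmax, km), pcov = curve_fit(michaelis_menten, s, v,
                             p0=[v.max(), np.median(s)])
print(f"Vmax = {vmax:.2f} U/mg, Km = {km:.2f} mM")

# Converting Vmax (U/mg = umol min^-1 mg^-1) to kcat (min^-1) requires the
# molar mass; 40 kDa is a hypothetical value, not taken from the paper.
mw = 40_000  # g/mol
print(f"kcat ~ {vmax * mw / 1000:.0f} min^-1")
```

The k_cat/K_m ratio of the fitted parameters then gives the catalytic efficiency used throughout the text to compare the evolved variants.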
The effects of temperature and pH on the activities of wild-type BceGO and the variants were evaluated at different temperatures (0~70°C) and at different pH values, using 0.2 mM Na2HPO4-0.1 mM citrate buffer (pH 4.0~8.0) and 50 mM disodium pyrophosphate-NaOH buffer (pH 8.0~11.0). The thermal and pH stabilities of the GOs were assayed by incubating the enzyme over a temperature range of 0~70°C for 1 h, or in the different pH buffers (pH 4.0~11.0) at 0 °C for 6 h, respectively, and the residual and relative activities were then determined.
Molecular modeling and docking analysis
The homology module in MOE 2010.10 (Chemical Computing Group Inc., Montreal, Canada) was applied to build the 3D structure of mutant B3S1. The glycine oxidase structures from Bacillus subtilis (PDB codes: 1RYI and 3IF9) [14,19] exhibited the highest identity (31%) and were thus considered the most appropriate templates. The docking, refinement of docked poses and binding-mode analysis of the B3S1-glyphosate complex were performed with the docking and LigX modules in MOE. The hydrogen-bond pattern and solvent accessibility were assessed by analyzing the structural features of the B3S1 model, using the ligand interaction analysis in MOE [31] and ASAView [32], respectively. The structural states were classified into three types based on the calculated solvent accessibility value (cutoff value, CV) of each residue, and the three-state model was created as previously described [33]: (i) the buried state (B), where the solvent accessibility value of a residue is 0≤CV≤9%; (ii) the intermediate state (I), where the value is 9%≤CV≤36%; and (iii) the exposed state (E), where the value is 36%≤CV≤100%.
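The three-state assignment is simple enough to state as code; a small sketch follows (the tie-breaking at exactly 9% and 36%, which the ranges above leave ambiguous, is resolved here by an assumed convention):

```python
def accessibility_state(cv: float) -> str:
    """Classify a residue by its calculated solvent accessibility (percent)."""
    if not 0 <= cv <= 100:
        raise ValueError("CV must be a percentage in [0, 100]")
    if cv < 9:
        return "B"   # buried
    if cv < 36:
        return "I"   # intermediate
    return "E"       # exposed

# Per-residue values reported for variant B3S1 in the Results section:
for residue, cv in [("Arg51", 12.7), ("Ser60", 29.0),
                    ("Arg133", 75.9), ("Val198", 64.4), ("Gly357", 47.0)]:
    print(residue, accessibility_state(cv))
```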
Directed evolution and screening of evolved BceGO
A key process in directed evolution is the generation of genetic diversity using 'irrational' design approaches, such as random mutagenesis and DNA recombination, combined with a rapid and sensitive screening method so that the desired properties produced by residue substitutions can be detected [34]. In the present work, building on an insightful report on the application of rational design and site-saturation mutagenesis to evolve BsuGO into a variant with high catalytic efficiency on glyphosate [14], we altered the substrate specificity of BceGO toward glyphosate by 'irrational' design methods centered on sequential rounds of random mutagenesis and recombination. Firstly, we used a bacteriophage T7 lysis-based method for a high-throughput spectrophotometric assay of oxidase activity, which facilitates the screening process and can handle massive numbers of clones.
Secondly, we employed a directed evolution approach of sequential random mutagenesis and DNA shuffling to modulate the substrate specificity of BceGO towards the herbicide glyphosate.
Error-prone PCR was used to create the first-generation random mutant library of BceGO, with an average of 1 to 2 amino acid substitutions per mutant for the library as a whole. The resulting 14,000 clones were screened for oxidase activity toward glyphosate at a screening concentration of 50 mM, with the absorbance of oxidized o-dianisidine dihydrochloride at 450 nm as an indicator of enzymatic oxidizing potential. Two active clones, 22D11 (G51R) and 23B1 (D60G), showed a redder colour than the wild-type BceGO; the two mutated sites were then combined by site-directed mutagenesis into a mutant designated B1R, which was used as the starting template for the second round of error-prone PCR. To evaluate the mutation frequency of the second round of random mutagenesis, inserts from 10 randomly picked clones were sequenced, revealing 1~3 amino acid substitutions. Of the approximately 16,000 clones screened with the substrate glyphosate at 10 mM, six mutants with a potential increase in oxidase activity were selected and identified by DNA sequencing before DNA shuffling (Table 2).
To recombine the beneficial mutations generated by error-prone PCR and further enhance the enzyme's activity against glyphosate, the coding genes of the six improved mutants were subjected to DNA shuffling, generating a recombinant DNA library of about 20,000 clones, which was preliminarily screened for activity toward glyphosate at a 2 mM concentration by the procedure described above. After screening about 10,000 clones in microtiter plates, ten recombinants with improved activity were obtained, and after a second screening with 0.5 mM glyphosate, four shuffled mutants (B3S1, B3S4, B3S6 and B3S7) were selected for purification (Table 2). The four recombinants contained 3.5 ± 0.5 crossovers on average, within a range of 3~4 [35]; only three point mutations arose during shuffling, at a mutation frequency of 0.27%.
Expression and characterization of wild-type BceGO and evolved enzymes
After the first round of error-prone PCR, two mutants selected from the improved variants, together with wild-type BceGO, were purified and characterized as described above; both mutants exhibited increased catalytic efficiency on glyphosate (Table 3). Compared with wild-type BceGO, the two single-point mutants 22D11 (G51R) and 23B1 (D60G) screened from the first-round random mutant library showed 5.27- and 5.61-fold increases in activity against glyphosate, respectively, while mutant B1R exhibited a 17-fold increase in activity on glyphosate, a 321-fold decrease in catalytic efficiency toward glycine, and a 151-fold enhancement in the specificity constant (the k_cat/K_m ratio between glyphosate and glycine; see the last column in Table 3).
Starting from mutant B1R, a second round of error-prone PCR was performed, and six mutants (B2R3, B2R6, B2R11, B2R14, B2R23 and B2R81) showed improved activity on glyphosate as compared with wild-type BceGO. For all six variants, the changes in the k_cat,app parameter were prominent for the G60S substitution introduced in the second round of random mutagenesis, and the k_cat,app value of B2R23 was 11.08-fold that of the starting template (mutant B1R). These results demonstrated that BceGO had a considerable evolutionary landscape for improving kinetic efficiency on glyphosate. Further promotion of glyphosate oxidase activity by DNA shuffling was apparent in the third generation, yielding four recombinants (B3S1, B3S4, B3S6 and B3S7) with higher activity than those of the second generation; the variation in their kinetic parameters is shown in Table 3 and Figure 1.

To assay the substrate specificity of wild-type BceGO and the most improved variant B3S1, their activities were determined on four substrates at a fixed concentration (100 mM) using the spectrophotometric method coupled with horseradish peroxidase and o-dianisidine. BceGO presented a specific activity of 0.28 U mg⁻¹ on glycine, which was less than that of BsuGO (0.8 U mg⁻¹) [20] and of the glycine oxidase from Geobacillus kaustophilus (GOXK) (11.85 U mg⁻¹) [36]. However, it showed a relatively higher activity on sarcosine (0.83 U mg⁻¹) than on the other substrates, in line with a previous finding for BsuGO [20]. Among the tested substrates, variant B3S1 exhibited a 1-fold increase in specific activity on glyphosate, but a lower specific activity than the wild-type BceGO on glycine, sarcosine and D-alanine (Table 4).
Effects of temperature and pH on enzyme activity and stability
The effects of pH and temperature on enzyme activity and stability were examined, and the results are shown in Figure 2. Wild-type BceGO and B3S1 both showed optimal activity at pH 8.5 and exhibited similar pH stability profiles after 6 h of incubation at 0 °C (Figure 2A). The optimal pH of BceGO (pH 8.5) was identical to that of GOXK [36] and close to that of BsuGO (pH 8.0) [20]. Similar to BsuGO (pH 6.5~9.5) [19] and GOXK (pH 6.0~9.0) [36], BceGO retained more than 80% of its original activity after 6 h of incubation at 0 °C over a pH range from 6.5 to 11.0, but showed a significant decrease in activity below pH 5.0 (Figure 2B). Both wild-type BceGO and B3S1 exhibited maximum activity at 60 °C (Figure 2C) and retained activity up to 50 °C after 1 h of incubation. They also showed similar thermal stability curves, with activity decreasing sharply above 60 °C (Figure 2D), indicating a lower thermal stability at high temperature than GOXK, which comes from the extremophilic microorganism Geobacillus kaustophilus [36].
Structure modeling analysis of evolved variant B3S1
To identify the possible molecular basis for the enhancement of oxidase activity against glyphosate, we constructed a docking model of the B3S1-glyphosate complex based on the homology model (Figure 3A). Combined with the secondary structure predicted by the PSIPRED server [37], we found that two of the valuable mutations introduced, G51R and D60S, were located on the loop connecting the α2-α3 helices, and that Arg51 was close to the active site, where it established an electrostatic interaction and hydrogen bonds with the phosphonate group of glyphosate (Figure 3C). On the one hand, the guanidinium group of Arg51 contributed to the stabilization of glyphosate binding, which might enhance the affinity for glyphosate but decrease the affinity for glycine. On the other hand, this polar residue tends to provide partial positive charge that neutralizes negative charge in the active site, thus increasing the cofactor's redox potential [38]. The D60G substitution, generated in the evolved mutant 23B1 by replacing an acidic residue with a neutral residue lacking a side chain, contributed to the improved catalytic activity of GO, mainly because the loop connecting the α2-α3 helices could possess a high mobility and bring a corresponding slight conformational change in the proximity of the active site [14]. The mutations (G51R/D60G) close to the active site improved the catalytic efficiency of BceGO on glyphosate mainly by decreasing the K_m value (up to 35-fold in mutant B1R).

Figure 2. A. The optimal pH. Enzyme activity was determined with 100 mM glyphosate at 25 °C within a pH gradient of 4.0~11.0 with the following buffers: 0.2 mM Na2HPO4-0.1 mM citric acid buffer for pH 4.0~8.0, and 50 mM sodium pyrophosphate buffer for pH 8.0~11.0. The maximum activity observed was taken as 100%. B. The pH stability. Enzymes were incubated at 0 °C for 6 h over a pH buffer range of 4.0~11.0, and enzyme activity was then determined with 100 mM glyphosate at 25 °C and the optimal pH. The maximum activity observed was taken as 100%. C. The optimal temperature. The enzymes were added to the reaction mixture and the reaction was carried out at the indicated temperature, from 0 to 70 °C; enzyme activity was then determined with 100 mM glyphosate at 25 °C and the optimal pH. The maximum activity observed was taken as 100%. D. The temperature stability. Enzymes were incubated for 1 h at the indicated temperature, from 0 to 70 °C, and enzyme activity was then determined with 100 mM glyphosate at 25 °C and the optimal pH. The activity without treatment was taken as 100%. Error bars represent the SD of the mean calculated for three replicates. Solid dots represent wild-type BceGO; solid blocks represent variant B3S1. doi: 10.1371/journal.pone.0079175.g002
In the second random mutagenesis, the G60S replacement, introducing a short polar side-chain residue, resulted in a corresponding increase in k_cat (mutants B2R23 and B2R81 in Table 3); it can be assumed that the new G60S substitution optimizes the conformation at the active-site entrance, thus improving the catalysis of glyphosate by BceGO. Besides, the mutations T118A, K133R, I198V, V262I, I284L and E357G, introduced in the second error-prone PCR, lie far away from the active site; they may cause slight conformational changes that contribute to the improved catalytic activity on glyphosate. The L307S mutation, in contrast, adjoins the catalytic residue Arg308 (the corresponding residue is Arg302 in BsuGO); both are located in the conserved loop WAGLRP, and according to prior studies they might be involved in the interaction with the substrate and form a lid covering the substrate binding site [18,19].
To analyze the location and accessibility of the identified mutations and clarify the topological distribution of the mutated residues in variant B3S1, the relative solvent accessibility scores were predicted by ASAView [32]. The results showed that the three residues Arg133, Val198 and Gly357 were located on the surface, with clearly higher calculated solvent accessibility values (75.9%, 64.4% and 47%, respectively), whereas the four residues Ala118, Ile262, Leu284 and Ser307 were in the buried state, with lower solvent accessibility values (from 0 to 4.8%). Only Arg51 and Ser60 were in the intermediate state in the B3S1 structure, with predicted solvent accessibility values of 12.7% and 29%, respectively. Thus, from the structural point of view, only Arg51 and Ser60 lie in the vicinity of the entrance to the active site; the other mutations are far away from the active centre, yet they also play a role in improving catalytic activity. Comparison of wild-type BceGO with B3S1 and the other recombinants, such as B3S4, B3S6 and B3S7, revealed some different kinetic properties toward glyphosate, i.e., mutations both close to and far away from the active site can effectively improve catalytic activity (Table 3). Our results confirm the observation that the substrate specificity of an enzyme can be modulated by a few residue mutations [39,40]. Although random mutagenesis targets the entire coding sequence of the enzyme, only a few mutated residues form the substrate binding site; most mutated residues lie far away from the active site [40]. Can these distant mutations improve catalytic efficiency? The answer is that they might not only cause a subtle disruption in the spatial configuration of the active site, but also fine alterations in the protein backbone and side chains, which can affect the protein secondary structure, cause subtle changes in the arrangement of the tertiary structure or the shape of the binding pocket, and finally lead to dramatic changes in the catalytic power of the enzyme [41]. Therefore, based on the structural model, it can be deduced that while mutations close to the active site appear to be more useful in altering an enzyme's substrate selectivity and catalytic activity, distant mutations can also play an auxiliary role in improving or modifying the catalytic properties of the enzyme.
The schematic 2D representation of the B3S1-glyphosate complex is shown in Figure 3C. As can be seen from the 2D depiction, the side chain of Arg308 forms two H-bonds with the carboxylic group of glyphosate, and this residue plays a primary role in substrate binding for enzymatic activity [14]. In addition to Arg308, the carboxylic group of glyphosate might also form H-bonds with the Tyr252 and Arg335 side chains. The arginine introduced at position 51 (instead of glycine) is at a suitable location to interact with the phosphonate group of glyphosate, with the corresponding G51R substitution in BsuGO lying close to the active-site entrance of GO [14]. The guanidinium group of Arg335 also contributes to the stabilization of the phosphonate and carboxylate groups of glyphosate (Figure 3B), and possesses the largest solvent-accessible surface area (Figure 3C).
Conclusions
Here, in the absence of detailed structural information on BceGO, we conducted a rapid and sensitive screen for variants with improved activity against glyphosate using a directed evolution approach of sequential random mutagenesis, site-directed mutagenesis and DNA shuffling, together with a bacteriophage T7 lysis-based method for a high-throughput spectrophotometric assay coupled with horseradish peroxidase/o-dianisidine. A total of thirteen evolved variants were isolated from the mutant libraries and, compared with wild-type BceGO, the most active mutant B3S1 possessed a 160-fold higher substrate affinity for glyphosate, a 326-fold higher catalytic efficiency against glyphosate and a 6,017-fold increase in the specificity constant (the k_cat/K_m ratio between glyphosate and glycine), indicating that the substrate specificity and catalytic activity of BceGO have been successfully engineered by sequential evolution and selection exploring the sequence space. The k_cat/K_m value of the G51S/A54R/H244A BsuGO variant for glyphosate is ≈120 mM⁻¹·min⁻¹ [14]; this variant was expressed in Medicago sativa, which acquired resistance to glyphosate [15]. Therefore, we essentially achieved the goal of engineering the substrate specificity of BceGO toward the degradation of the herbicide glyphosate. Although BceGO and glyphosate oxidoreductase (GOX) differ in the catalytic mechanism of glyphosate oxidation [13,42], the two enzymes share some similarities, including (i) breakage of the C-N bond in glyphosate to generate the same products (AMPA and glyoxylate), (ii) the property of being FAD-containing flavoenzymes, and (iii) a low sequence identity (20%) between BceGO and GOX. The evolved B3S1 reported here shows a 5-fold lower K_m value for glyphosate than GOX (0.53 versus 2.6 mM, respectively), but compared with the diffusion-limited maximal value (10⁹ M⁻¹ s⁻¹), the k_cat/K_m of variant B3S1 for glyphosate, ≈22 mM⁻¹·min⁻¹, still leaves great potential for further optimization by directed evolution.
Parathyroidectomy and survival in a cohort of Italian dialysis patients: results of a multicenter, observational, prospective study
Background Severe secondary hyperparathyroidism (SHPT) is associated with mortality in end-stage kidney disease (ESKD). Parathyroidectomy (PTX) becomes necessary when medical therapy fails, highlighting the interest of comparing the biochemical and clinical outcomes of patients receiving either medical treatment or surgery. Methods We aimed to compare the overall survival and biochemical control of hemodialysis patients with severe hyperparathyroidism, treated by surgery or medical therapy and followed up for 36 months. Inclusion criteria were age older than 18 years, renal failure requiring dialysis treatment (hemodialysis or peritoneal dialysis) and ability to sign the consent form. A control group of 418 patients treated in the same centers who did not undergo parathyroidectomy was selected after matching for age, sex, and dialysis vintage. Results From 82 dialysis units in Italy, we prospectively collected data on 257 prevalent patients who underwent parathyroidectomy (age 58.2 ± 12.8 years; M/F: 44%/56%; dialysis vintage: 15.5 ± 8.4 years) and on 418 control patients who did not undergo parathyroidectomy (age 60.3 ± 14.4 years; M/F: 44%/56%; dialysis vintage: 11.2 ± 7.6 years). The survival rate was higher in the group that underwent parathyroidectomy (Kaplan–Meier log-rank test, p = 0.002). Univariable analysis (HR 0.556, CI: 0.387–0.800, p = 0.002) and multivariable analysis (HR 0.671, CI: 0.465–0.970, p = 0.034) identified parathyroidectomy as a protective factor for overall survival. The prevalence of patients at the KDOQI targets for PTH was lower in patients who underwent parathyroidectomy than in controls (PTX vs non-PTX: PTH < 150 pg/ml: 59% vs 21%, p = 0.001; PTH at target: 18% vs 37%, p = 0.001; PTH > 300 pg/ml: 23% vs 42%, p = 0.001). The control group received more intensive medical treatment, with a higher prevalence of vitamin D (65% vs 41%, p = 0.0001), calcimimetics (34% vs 14%, p = 0.0001) and phosphate binders (77% vs 66%, p = 0.002). Conclusions Our data suggest that parathyroidectomy is associated with the survival rate at 36 months, independently of biochemical control. Lower exposure to high PTH levels could represent an advantage in the long term. Graphical abstract Supplementary Information The online version contains supplementary material available at 10.1007/s40620-023-01658-0.
Introduction
Secondary hyperparathyroidism (SHPT) in end-stage renal disease (ESRD) is associated with disturbances in mineral metabolism, metabolic bone disease and renal osteodystrophy, bone fractures, vascular calcifications [1-4], and an eventual increase in cardiovascular disease and mortality. Conventional treatment of SHPT with phosphate binders, vitamin D receptor activators (VDRAs) and calcimimetics [5-7] may not achieve adequate biochemical control, and parathyroidectomy (PTX) is still recommended in severe cases failing to respond to medical therapy [8].
Parathyroidectomy rapidly lowers parathyroid hormone (PTH) serum levels, improves serum calcium and phosphate control, and has potentially favorable effects on cardiovascular survival. Indeed, a lower risk of mortality is reported when all three standard biochemical indicators of metabolic control (namely Ca, P and PTH) reach the target levels recommended by K-DOQI at least once [9]. However, targeting all three biomarkers is not easily accomplished after PTX [10-12]. In fact, in the long term after surgery, hypoparathyroidism is frequent, and both low and high levels of PTH are associated with increased cardiovascular morbidity and mortality, in a typically U-shaped fashion [13]. Notwithstanding, available observational studies in hemodialysis (HD) patients describe reduced all-cause and cardiovascular mortality rates after PTX in the long term [14-17], apparently regardless of sub-optimal biochemical control. The pathophysiological link between PTX and improved survival is not clear, but may include the reported effects of PTH on left ventricular hypertrophy, blood pressure control, erythropoietin-resistant anemia, nutritional status, and humoral and/or cellular immunity, independently of calcium and phosphate control and of the specific therapies prescribed [18-21]. Regrettably, prospective randomized controlled trials comparing the mortality rates of HD patients receiving either medical or surgical therapy for severe SHPT are not available, and will never be carried out due to ethical issues [22]. Therefore, observational studies, despite suffering from selection bias, are still the main source of data on the relationship between PTX, biochemical control and mortality rates in HD patients. This paper reports the results of a multicenter, observational, prospective cohort study aimed at evaluating the impact of PTX on survival in an Italian cohort of HD patients.
Study population and data collection
In this paper, we report the prospective, observational part of a multicenter cohort study on PTX that involved 149 Italian dialysis units, whose protocol was approved by the Ethics Committee of the Policlinico Umberto I in Rome (prot. N° 888/09) and whose baseline data have already been published [12]. Briefly, the inclusion criteria were age older than 18 years, renal failure requiring dialysis treatment (hemodialysis or peritoneal dialysis) and ability to sign the consent form. For each unit, data on the 528 enrolled patients with a PTX history were provided by a referent physician, who recorded medical history, timing of PTX, type of surgery, laboratory data, and prescribed SHPT medications in a dedicated data sheet. In addition, information on the sex, age and dialysis vintage of all 12,515 patients receiving treatment in the involved units provided a population from which a control group could be selected.
Follow-up data
Further to the baseline descriptive phase, the protocol also included a prospective observational follow-up lasting three years, which, however, did not include 67 units. Thus, as schematically reported in Fig. 1, for the follow-up phase of the study (the results of which are reported in this paper) we had 257 PTX patients and 4,897 controls, among whom we selected, in 2011, 418 non-PTX cases similar to the study group in terms of age, sex, and dialysis vintage. Clinical and therapeutic updates were then collected prospectively for the selected patients over three consecutive years (from 01.01.2012 to 31.12.2014). We recorded fatal events from any cause, prescribed medications for SHPT control (vitamin D and calcium-based therapies, calcimimetics, phosphate binders) and laboratory data pertinent to mineral metabolism (PTH, calcium and phosphate) during the three years of follow-up.
The study complied with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.
Endpoints
The primary endpoint was the overall survival rate of PTX and non-PTX patients during the 36 months of follow-up. The secondary endpoint was the prevalence of patients reaching the biochemical targets for mineral metabolism, as defined by the K-DOQI ranges for Ca, P and PTH [9], in the two groups of PTX and non-PTX patients.
Statistical analysis
Data are expressed as mean ± SD for variables with a Gaussian distribution or median [25th-75th percentiles] when the distribution was non-Gaussian. We used the Kolmogorov-Smirnov test to evaluate the normality of continuous measurements. Parametric tests, the chi-squared test for qualitative and the t-test for quantitative variables, were used to compare measurements between the groups. When the normality assumption was not tenable, the Mann-Whitney test was used to test for significant differences. All tests were two-tailed, and (adjusted) p-values < 0.05 were considered statistically significant. When general r-by-c contingency tables yielded statistical significance, we proceeded to the evaluation of two-by-two sub-tables of interest. In that case, significance levels were Bonferroni-adjusted by multiplication by the number of two-by-two tables evaluated.
The family-wise significance level was fixed at 5%, so that a Bonferroni-adjusted p-value below 0.05 was considered statistically significant after taking multiplicity into account. The association of PTX and non-PTX status with time-to-event outcomes was evaluated, starting from the date of hemodialysis inception, through stratified Kaplan-Meier curves and associated log-rank tests and/or univariable Cox regression models. As multivariable analyses, we used Cox regression models, in which the final set of predictors was selected by forward selection based on the Akaike Information Criterion. We further evaluated the effect of PTX through a propensity-score matched analysis. First, we estimated the probability of receiving the treatment based on gender, age, diabetes, albumin and hemoglobin levels. One-to-one matching was then performed based on the estimated propensity score, and the matched subset was used in a Cox regression model to estimate the Average Treatment effect for the Treated (under selection-on-observables assumptions). Balance was evaluated through Standardized Mean Differences (SMD), where an SMD < 10% indicated good balance.
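A minimal sketch of this survival workflow in Python with the lifelines library is shown below; the file name and column names are hypothetical, and the AIC-based forward selection and the propensity matching would be layered on top of these calls.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# One row per patient; 'time' in months from hemodialysis inception,
# 'death' = 1 for a fatal event, 'ptx' = 1 for parathyroidectomy.
# File and column names are hypothetical.
df = pd.read_csv("cohort.csv")
ptx, ctrl = df[df.ptx == 1], df[df.ptx == 0]

# Stratified Kaplan-Meier curves with an associated log-rank test
kmf = KaplanMeierFitter()
kmf.fit(ptx.time, event_observed=ptx.death, label="PTX")
ax = kmf.plot_survival_function()
kmf.fit(ctrl.time, event_observed=ctrl.death, label="non-PTX")
kmf.plot_survival_function(ax=ax)
result = logrank_test(ptx.time, ctrl.time,
                      event_observed_A=ptx.death, event_observed_B=ctrl.death)
print("log-rank p =", result.p_value)

# Multivariable Cox model (forward selection by AIC would wrap this call)
cph = CoxPHFitter()
cph.fit(df[["time", "death", "ptx", "age", "vintage"]],
        duration_col="time", event_col="death")
cph.print_summary()  # hazard ratios with 95% confidence intervals
```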
Patient characteristics
Table 1 describes the main clinical and biochemical characteristics of the PTX and non-PTX groups, which were similar with regard to age and sex distribution but differed in dialysis vintage (PTX = 15.5 ± 8.4 vs non-PTX = 11.2 ± 7.6 years, p < 0.0001) (Table 1). Patients in the PTX group, who underwent surgery on average 8 years after dialysis inception, showed a higher prevalence of glomerular diseases and tubulointerstitial nephropathies compared to the non-PTX group (Table 1). The history of comorbidities did not differ between the two groups, in particular regarding the incidence of cardiovascular diseases (peripheral vascular disease, ischemic heart disease, and/or heart failure). In addition, no difference was observed in the prevalence of arterial hypertension (identified as current antihypertensive drug prescription), while diabetes was less frequent in PTX patients (6% vs 14%, p = 0.002) (Table 1).
Survival analysis
The survival curves in the two groups of patients, evaluated from the date of hemodialysis inception, clearly show the lower long-term mortality rate of the PTX group (Kaplan-Meier log-rank test, p = 0.002; Fig. 2). This result was also confirmed when the survival curves were considered from the beginning of the three years of follow-up (Kaplan-Meier log-rank test, p = 0.023; Fig. 1, supplemental material). PTH levels (… 250, IQR: 163-400 pg/ml, p < 0.0001) were significantly lower in the PTX than in the non-PTX group, while phosphate did not differ (Table 1). We then compared biochemical values, therapies and any-cause mortality in the two groups during the three consecutive years of follow-up. During the observation period, fifty-four patients (21%) in the PTX group and 181 (43%) in the non-PTX group were lost to follow-up, leaving 203 and 237 cases, respectively, for comparison. As illustrated in Fig. 3, at baseline the prevalence of cases below, at, or above the PTH KDOQI target differed between the two groups. In particular, the percentage of patients below 150 pg/ml was higher in the PTX group (59 vs 21%, p = 0.001), while the percentages at target or above were significantly higher in the non-PTX group (18 vs 37%, p < 0.0001, and 23 vs 42%, p = 0.0001, respectively). Notably, these differences were systematically confirmed during follow-up (Fig. 4). No difference was evident for serum calcium and phosphate (Fig. 4). As for SHPT therapies, the non-PTX group received more aggressive treatment, characterized by significantly higher prescription of vitamin D, phosphate binders and calcimimetics (Fig. 5). Moreover, PTX patients received more calcitriol and calcium-based phosphate binders than the control group, which, by comparison, received more vitamin D receptor activators and non-calcium-based phosphate binders (Table 2). Univariable analysis carried out in the patient population as a whole and adjusted for dialysis vintage identified PTX as a protective factor for overall survival (Table 3; p = 0.002). Similarly, higher serum albumin (p = 0.0001), higher hemoglobin levels (p = 0.0001) and younger age (p = 0.0001) were associated with better survival. As reported in Table 3, multivariable analysis confirmed PTX as an independent factor of better survival, while age and dialysis vintage were associated with worse outcome. In a subset of propensity-score matched patients, well balanced for sex, age, dialysis vintage, diabetes, albumin, hemoglobin, calcium, phosphate, PTH and therapies (Table 4), PTX was confirmed as a protective factor for overall survival (HR: 0.404 [0.254-0.643]; p = 0.00132).
Discussion
The main result of our study is that PTX was associated with a better survival rate in our HD population of 257 PTX patients compared with 418 matched non-PTX patients, prospectively followed up for three years. This result was confirmed in both univariable and multivariable-adjusted survival analyses (Table 3). Interestingly, the percentage of patients with adequate calcium and phosphate control did not differ between the PTX and non-PTX groups, while PTH levels were less frequently at target during the three years of follow-up in the PTX group (Fig. 4). In particular, during the three years of follow-up, the PTX group was mostly and invariably exposed to very low PTH levels. In our study, given the time frame, we used the KDOQI target ranges. As a comparison, we also used the more recent KDIGO PTH targets, which confirmed the significant differences between PTX and non-PTX patients (below target: 65 vs 24%, p < 0.0001; at target: 30 vs 66%, p < 0.0001; above target: 5 vs 10%, p < 0.01; supplemental material, Fig. 2). Therefore, the association between PTX and a better overall survival rate appears to be independent of the attained biochemical profile. Notably, the similar biochemical control of calcium, phosphate and PTH resulted from a lower prescription of active vitamin D, phosphate binders and calcimimetics in the PTX group (Fig. 5), thus pointing to the role of still poorly known but commonly recognized limitations of the widely employed therapeutic strategies for SHPT. The Kaplan-Meier survival analysis showed better survival rates in the PTX group, particularly in the long term (Fig. 2). Indeed, the PTX group had undergone surgery on average 8 years after hemodialysis inception, and the survival curves progressively diverged over time, remaining significantly separated even after 30 years of follow-up. It is also interesting to note that although multivariable analysis identified dialysis vintage as a risk factor for mortality, the PTX group had better survival despite a longer dialysis vintage. Overall, our data suggest that PTX has a long-term protective effect on survival in HD patients. Our results are in agreement with the available evidence in the literature showing reduced all-cause and cardiovascular mortality in the long term after PTX [13-15, 21], again independently of the biochemical control of mineral metabolism. We can therefore look back at the concept of PTH as a uremic toxin [23]. In fact, we are aware that PTH may have several extra-mineral negative effects in dialysis patients, spanning from increased left ventricular hypertrophy and higher blood pressure to erythropoietin-resistant anemia and poor nutrition and quality of life [24-26]. In our opinion, it is possible that PTX, by shortening exposure to high PTH levels, reduces the extent of this extra-mineral damage.
The main strength of our paper is the multicenter, observational, prospective study design, which allows the evaluation of real-life therapeutic strategies. On the other hand, we acknowledge that the enrollment of prevalent PTX hemodialysis patients who underwent surgery during their dialytic history represents a limitation. In fact, enrolling prevalent instead of incident PTX patients carries a number of potential selection biases, as clearly reported in the literature [22]. However, randomized controlled trials comparing patients receiving surgical or medical therapy at the time of surgical indication for severe SHPT do not exist and, most likely, will never be carried out [22]. In conclusion, PTX can be regarded as an effective and safe therapy for refractory SHPT in dialysis patients, even though the metabolic control reached after surgery may not be optimal.
Table 1
Baseline characteristics in PTX and non-PTX patients. Data are shown as mean ± standard deviation, median (IQR) or percentage. The chi-squared test was used to test for significant differences between qualitative variables; the t-test or Mann-Whitney test was used for quantitative variables. Abbreviations: D, dialysis; PTX, parathyroidectomy; ESRD, end-stage renal disease; Ca, calcium; P, phosphate; PTH, parathyroid hormone; ADPKD, autosomal dominant polycystic kidney disease.
Table 2
Prevalence of drug prescriptions during follow-up
Table 3
Univariate and multivariate survival analyses. Abbreviations: PTX, parathyroidectomy; Ca, calcium; P, phosphate; PTH, parathyroid hormone; Hb, hemoglobin; CI, confidence interval.
Table 4
Balance measures pre- and post-matching
Recovering the Soybean Hulls after Peroxidase Extraction and Their Application as Adsorbent for Metal Ions and Dyes
This study is aimed at extending the soybean hulls’ lifetime by their utilization as an adsorbent for metal ions (Cd2+ and Cu2+) and dyes (Reactive Yellow 39 (RY 39) and Acid Blue 225 (AB 225)). ATR-FTIR spectroscopy, FE-SEM microscopy, and zeta potential measurements were used for adsorbent characterization. The effect of the solution’s pH, peroxidase extraction, adsorbent particle size, contact time, the pollutant’s initial concentration, and temperature on the soybean hulls’ adsorption potential was studied. Before peroxidase extraction, soybean hulls were capable of removing 72% Cd2+, 71% Cu2+ (at a pH of 5.00) or 81% RY 39, and 73% AB 225 (at a pH of 3.00). For further experiments, soybean hulls without peroxidase were used for several reasons: (1) due to their observed higher metal ion removal, (2) in order to reduce the waste disposal cost after the peroxidase (usually used for wastewater decolorization) extraction, and (3) since the soybean hulls without peroxidase possessed significantly lower secondary pollution than those with peroxidase. Cd2+ and Cu2+ removal was slightly increased when the smaller adsorbent fraction (710-1000 μm) was used, while the adsorbent particle size did not have an impact on dye removal. After 30 min of contact time, 92% and 88% of RY 39 and AB 225 were removed, respectively, while after the same contact time, 80% and 69% of Cd2+ and Cu2+ were removed, respectively. Adsorption of all tested pollutants follows a pseudo-second-order reaction through the fast adsorption, intraparticle diffusion, and final equilibrium stage. The maximal adsorption capacities determined by the Langmuir model were 21.10, 20.54, 16.54, and 17.23 mg/g for Cd2+, Cu2+, RY 39, and AB 225, respectively. Calculated thermodynamic parameters suggested that the adsorption of all pollutants is spontaneous and of endothermic character. Moreover, different binary mixtures were prepared, and the competitive adsorptions revealed that the soybean hulls are the most efficient adsorbent for the mixture of AB 225 and Cu2+. The findings of this study contribute to the soybean hulls’ recovery after the peroxidase extraction and bring them into the circular economy concept.
Introduction
Water quality represents one of the major concerns of the twenty-first century, and therefore, surface and deep-water pollution is a topic of high social and scientific interest in both developing and developed countries. It is well known that water pollution is closely related to various anthropogenic activities (such as mining, dyeing, municipal and industrial solid waste disposal, incineration and/or open burning of waste, and agricultural soil fertilization [1]), unplanned urbanization, and rapid industrialization. Among many inorganic and organic pollutants, heavy metals and dyes receive widespread attention; their presence in wastewater is a major problem since most of them are potentially hazardous to the environment and human health. Namely, heavy metals are toxic, nondegradable, carcinogenic, and persistent; they enter the body through the ingestion of food and water and through the air [2]. The relationship between environmental exposure to heavy metals and various human diseases was recently investigated by Đukić-Ćosić et al. [3], Baralić et al. [4], Buha et al. [5], and many other researchers all over the world. On the other hand, dyes are complex organic pollutants mainly present in the wastewaters of the textile, cosmetic, paper, leather, rubber, and printing industries. Their presence in wastewater, even at small concentrations, aggravates photosynthesis; dyes hinder the absorption of sunlight in surface waters and negatively affect the surrounding flora and fauna [6]. Not only do these pollutants suppress the growth and reproduction of aquatic biota [6], but excessive exposure to dye degradation products also causes skin irritation as well as respiratory problems [7]. In response to the rising demand for clean and safe water, many different technologies are utilized for the purification of metal- and/or dye-contaminated wastewater [8]. Some of them include membrane separation, chemical and electrochemical technologies, reverse osmosis, ion exchange, electrodialysis, electrolysis, and adsorption procedures. Excluding adsorption, all of them require substantial financial input and high energy consumption, which altogether restricts their utilization for wastewater treatment. On the other hand, adsorption using conventional and nonconventional adsorbents is an easily handled and environmentally acceptable method for the removal of various pollutants. It enables adsorbent regeneration, operates under a broad range of process settings, and has better selectivity [2]. Since magnetic nanocomposites have a large specific surface area, they were recently used as adsorbents for metals and dyes [9-12]. Compared to conventional adsorbents (i.e., activated carbons, ion-exchange resins, and inorganic materials such as alumina, silica gel, and zeolites), green waste-derived adsorbents are economically viable (their cost potential makes them competitive) and have shown satisfactory adsorption capacities toward heavy metals and dyes. The most studied green waste-derived adsorbents are sugar beet shreds [13], fibers [14-17], rice husk [18], wood-based adsorbents [2, 19-21], potato peels [22, 23], and other cellulose-based adsorbents [7]. Although green waste-derived materials are intensively studied as adsorbents for heavy metal ions and dyes, the exploration of new eco-friendly, biodegradable, low-cost, and abundant adsorbents that have little or no negative impact on the environment remains a primary focus of investigation among researchers.
The rapid growth of world soybean (Glycine max L.) production (about 385 million metric tons in 2021-2022, SOPA) and utilization worldwide results in the intensive generation of soybean processing waste: hulls that are rich in soybean peroxidase. We decided to extend the soybean hulls' lifetime and utilize them as adsorbents for metal ions and dyes since this processing waste is easily available and can be used without additional treatment (or after peroxidase extraction in water). Moreover, soybean hulls have a myriad of functional groups (COOH, OH, etc.) capable of binding metals and dyes, while their porous structure contributes to a high ability to swell and store a large amount of solution. First, the adsorbent was characterized in terms of its surface morphology (assessed by FE-SEM), chemistry (using ATR-FTIR), and electrokinetic properties (i.e., zeta potential). The influence of various parameters such as peroxidase extraction, solution pH, adsorbent particle size, contact time, the pollutant's initial concentration, and temperature on the soybean hulls' adsorption potential for cadmium or copper ions as well as for the textile dyes Reactive Yellow 39 (RY 39) or Acid Blue 225 (AB 225) (their structures are given in the Supplementary material, see Figures S1a and S2a) was investigated. There are many reasons behind the selection of these four pollutants. Namely, the World Health Organization (WHO) has identified cadmium as one of ten chemicals of major public health concern [24]. The connection between long-term exposure to this toxic metal and various renal syndromes, osteoporosis and osteomalacia, endocrine-disrupting properties, and different types of cancer has already been established [25-27]. Copper is an essential nutrient for humans, animals, and plants; however, its toxicity is a much overlooked contributor to many health problems including anorexia, migraine headaches, allergies, childhood hyperactivity, and learning disorders [28]. Furthermore, RY 39 and AB 225 were not selected randomly; contrary to many other dyes, their structures are not significantly influenced by pH changes (see Figures S1c and S2c). This allows us to eliminate the influence of dye structural changes caused by the solution's pH, so that their adsorption can be ascribed solely to the soybean hulls' surface chemistry and morphology. Among the different classes of synthetic dyes, azo dyes, like AB 225, are one of the most important classes produced worldwide [29]. Their degradation byproducts have a toxic and mutagenic impact on aquatic organisms [16]. The second major synthetic dye class is anthraquinone dyes, such as RY 39, which are used for dyeing wool, cotton, silk, and polyamide. Nonetheless, their toxicological profiles indicate that most anthraquinone dyes are mutagenic, carcinogenic, and allergenic [6].
After the adsorption from single-pollutant solutions was investigated, different binary mixtures were prepared and the competitive metal ion and dye adsorptions were examined. In order to make a more detailed study, one part of the manuscript is focused on water secondary pollution (i.e., the leaching of organic and inorganic matter into the water) during adsorption onto soybean hulls (with and without peroxidase). The results of this investigation offer a novel valorization route, i.e., the soybean hulls' recovery after peroxidase extraction, which brings them into the circular economy concept through their utilization as adsorbents for inorganic and organic pollutants. It has to be underlined that the utilization of this kind of adsorbent minimizes secondary pollution, i.e., the leaching of organic and inorganic matter from the adsorbent.
Materials and Methods
2.1. Materials. Soybean hulls were obtained from Sojaprotein d.o.o., Bečej, Serbia. The chemicals used were of the highest commercial grade and used as received.
To study the effect of peroxidase extraction on the soybean hulls' adsorption potential towards metal ions and dyes, two types of soybean adsorbents were used separately. Namely, one set of experiments was performed using soybean hulls as received, i.e., with peroxidase, while the other set of experiments was carried out after peroxidase extraction. The extraction of the enzyme from the soybean hulls was achieved according to the procedure described by Svetozarević et al. [30]. The procedure was repeated until peroxidase was no longer detected. Before the adsorption experiments, the dry soybean hulls were ground in a mill to particle sizes in the ranges of 710-1000 and 1000-1500 μm, see Figure 1.
2.2. Soybean Hulls' Characterization.
To prove the existence of peroxidase in the sample SH+PO as well as its successful extraction from the SH-PO sample (see Figure 1), the enzyme activity was assessed according to a previously published method [30]. ATR-FTIR spectroscopy (Nicolet™ iS™ 10 FT-IR spectrometer (Thermo Fisher Scientific) with a Smart iTR™ attenuated total reflectance (ATR) sampling accessory) was used for the evaluation of the SH+PO and SH-PO surface chemistry. The spectra were recorded in the range of 4000-600 cm-1 with 32 scans per spectrum. Based on the ATR-FTIR absorbance spectra, the so-called hydrogen bond intensity (HBI), lateral order index (LOI), and cross-linked lignin ratio (CLL) were calculated as the ratios of the intensities of the bands at 3338 and 1334 cm-1, 1429 and 897 cm-1, and 1600 and 1508 cm-1, respectively [31].
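To make the band-ratio definitions above concrete, the following Python sketch computes HBI, LOI, and CLL from a baseline-corrected absorbance spectrum. The spectrum arrays are assumed inputs, and picking the nearest measured wavenumber stands in for proper peak-height evaluation.

```python
import numpy as np

def band_intensity(wavenumbers, absorbance, target_cm1):
    """Absorbance at the measured point closest to the target wavenumber."""
    idx = np.argmin(np.abs(np.asarray(wavenumbers) - target_cm1))
    return absorbance[idx]

def ftir_indices(wavenumbers, absorbance):
    """HBI, LOI and CLL from a baseline-corrected ATR-FTIR spectrum."""
    a = lambda w: band_intensity(wavenumbers, absorbance, w)
    hbi = a(3338) / a(1334)   # hydrogen bond intensity
    loi = a(1429) / a(897)    # lateral order index
    cll = a(1600) / a(1508)   # cross-linked lignin ratio
    return hbi, loi, cll
```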
The adsorbents' zeta potential as a function of pH was determined by the streaming potential method using a SurPASS electrokinetic analyzer (Anton Paar GmbH, Austria), following the procedure given by Ivanovska and Kostić [32].
SH+PO and SH-PO surface morphology was assessed by FE-SEM (Tescan MIRA 3 XMU). Before the analysis, the samples were sputter-coated with Au/Pd alloy.
2.3. Adsorption Experiments from Single-Pollutant Solution.
The adsorption of Cd2+ or Cu2+ was carried out from a monometallic solution of CdCl2·2.5H2O or CuSO4·5H2O, while the adsorption of RY 39 or AB 225 was performed from the appropriate single-pollutant aqueous solution. A 0.25 g portion of adsorbent was immersed in 100 ml of single-pollutant aqueous solution and constantly shaken. The adsorption optimization took place in two steps:
(1) Optimization of the initial solution pH: c0 = 25 mg/l, t = 24 h, T = 25 °C, pH = 3-6 or 2-5 (for metal ions and dyes, respectively), equal portions of both particle sizes; samples SH+PO and SH-PO.
(2) Optimization of the adsorbent particle size: c0 = 25 mg/l, t = 24 h, T = 25 °C, pH = 5.00 or 3.00 (for metal ions and dyes, respectively), adsorbent particle size of 710-1000 or 1000-1500 μm; samples SH-PO1 and SH-PO2.
The kinetic, isotherm, and thermodynamic experiments were performed on sample SH-PO1 under the optimized conditions shown in Table 1. The kinetic and equilibrium adsorption data were interpreted according to a set of widely used kinetic and isotherm models, respectively.
Pollutant removal by soybean hulls was calculated from its residual concentration in the aqueous solution using Equation (1), while the mass of adsorbed pollutant per gram of adsorbent (q, mg/g) was calculated using Equation (2):
Removal (%) = ((c0 - ct)/c0) × 100 (1)
q = (c0 - ct)·V/m (2)
where c0 is the initial pollutant concentration in the solution (mg/l), ct is the pollutant concentration in the solution after a certain time (mg/l), m is the adsorbent mass (g), and V is the solution volume (l).
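A minimal code rendering of Equations (1) and (2), with a hypothetical residual concentration chosen only to illustrate the arithmetic under the batch conditions used here (0.25 g of adsorbent in 100 ml of a 25 mg/l solution):

```python
def removal_percent(c0, ct):
    """Equation (1): pollutant removal in %, with c0 and ct in mg/l."""
    return 100.0 * (c0 - ct) / c0

def adsorption_capacity(c0, ct, volume_l, mass_g):
    """Equation (2): q in mg/g for solution volume V (l) and adsorbent mass m (g)."""
    return (c0 - ct) * volume_l / mass_g

# 0.25 g of adsorbent in 100 ml (0.1 l) of a 25 mg/l solution; the residual
# concentration of 5 mg/l is hypothetical, for illustration only.
print(removal_percent(25.0, 5.0))                  # 80.0 (%)
print(adsorption_capacity(25.0, 5.0, 0.1, 0.25))   # 8.0 (mg/g)
```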
The thermodynamic parameters standard Gibbs free energy (ΔG0), standard enthalpy (ΔH0), and standard entropy (ΔS0) were calculated using Equations (3) and (4):
ΔG0 = -RT·ln(Keq) (3)
ΔG0 = ΔH0 - T·ΔS0 (4)
where R is the universal gas constant (8.314 J/mol K), T is the process temperature (K), and Keq is the process equilibrium constant, calculated as the ratio between the amount of adsorbed pollutant qe (mg/g) and the residual pollutant concentration in the solution Ce (mg/ml) at equilibrium. The expression of the equilibrium constant quantifies the distribution of the pollutant between the solution and the adsorbed phase. By combining Equations (3) and (4), Equation (6) was obtained:
ln(Keq) = ΔS0/R - ΔH0/(RT) (6)
Keq and ΔG0 were calculated for each studied temperature, while the values of ΔH0 and ΔS0 were estimated from the slope and intercept of the ln(Keq) vs. 1/T plot, respectively.
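The thermodynamic workflow of Equations (3)-(6) can be sketched in a few lines of Python; the Keq values below are hypothetical placeholders standing in for the measured qe/Ce ratios:

```python
import numpy as np

R = 8.314                                 # J/(mol K)
T = np.array([298.15, 308.15, 318.15])    # 25, 35 and 45 degrees C
K_eq = np.array([2.1, 2.6, 3.3])          # hypothetical qe/Ce ratios

dG0 = -R * T * np.log(K_eq)               # Equation (3), J/mol

# Equation (6) is linear in 1/T: ln(Keq) = dS0/R - dH0/(R*T)
slope, intercept = np.polyfit(1.0 / T, np.log(K_eq), 1)
dH0 = -slope * R                          # J/mol; positive -> endothermic
dS0 = intercept * R                       # J/(mol K)
```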
All adsorption experiments were performed in triplicate with standard deviations below 2.2%.
Inductively coupled plasma optical emission spectrometry (ICP-OES, iCAP 6500 Duo ICP, Thermo Fisher Scientific, Cambridge, United Kingdom) was used for the determination of metal concentrations at the Cd II 226.502 nm and Cu I 324.754 nm emission lines. The dye concentration in the aqueous solution was determined based on the UV-Vis (Shimadzu 1700 spectrophotometer) absorbance spectra at λmax of 390 and 628 nm for RY 39 and AB 225, respectively.
2.4. Competitive Adsorption Experiments.
To study the competitive dye and metal ion adsorption, four binary mixtures (RY 39+Cd2+, RY 39+Cu2+, AB 225+Cd2+, and AB 225+Cu2+) were prepared at pH of 3.00 and 5.00, while the mixtures containing solely dyes and solely metals were prepared at pH of 3.00 and 5.00, respectively. The other experimental conditions were as follows: 25 mg/l initial concentration of each pollutant, contact time of 120 min, and equal portions of both adsorbent particle sizes (sample SH-PO, see Figure 1). The presented results are the average of three parallel measurements; the standard deviations were below 1.9%.
2.5. Study of the Secondary Pollution.
The secondary pollution, i.e., the leaching of the soybean hulls' organic and inorganic matter into the water, was studied by adding 0.25 g of the adsorbents SH+PO and SH-PO to 100 ml of demineralized water at a pH of 3.00 (since the dye adsorption was carried out at this pH) and constantly shaking for 24 h. Thereafter, the contents of different elements (Al, B, Ba, Ca, Cd, Co, Cu, Cr, Fe, K, Li, Mg, Mn, Na, Ni, Sr, Pb, and Zn) in demineralized water were compared with those determined in the water after the adsorbent's removal. The content of organic matter in the above-mentioned samples was estimated by the dichromate index (chemical oxygen demand (COD)) according to the appropriate standard [33].
Results and Discussion
3.1. Characterization of SH Adsorbents. Bearing in mind that peroxidase removal changes the soybean hull's overall structure, the SH+PO and SH-PO samples were characterized before the evaluation of their adsorption potential for metal ions and organic dyes. In light of that, the enzyme activity assay was used as a key indicator of peroxidase extraction efficiency. The first extract showed a peroxidase activity of 200 U/ml, while after the third cycle of extraction, the peroxidase activity within sample SH-PO was below the level of detection. Furthermore, from the FE-SEM microphotographs of the examined samples (see Figure 2), it is evident that the soybean hull's surface partially collapses after the enzyme's extraction (SH-PO), being more compressed than before the extraction (SH+PO), which is in accordance with the literature [34]. It is worth mentioning that the peroxidase's removal also contributes to a smoother soybean hull surface morphology compared to the rough SH+PO surface with pronounced pores (see Figure 2, sample SH+PO).
Considering the ATR-FTIR spectra of SH+PO and SH-PO (see Figure 3(a)), it can be observed that the peroxidase extraction induces noticeable modifications of the adsorbents' surface chemistry. The spectrum of SH+PO displays characteristic bands inherent to lignocellulosic materials. Namely, the broad band between 3600 and 3000 cm-1 arises from O-H and N-H stretching vibrations (the latter related to the peptide-based enzyme), while the band at 2917 cm-1 originates from C-H stretching vibrations. A low-intensity band at 1726 cm-1 arises from stretching vibrations of characteristic C=O groups. Due to the complexity of the soybean hulls' surface chemistry, a relatively wide band centered at 1602 cm-1 could be ascribed to different vibrations ranging from C=O peptide, C=C, and COO- stretching vibrations [35] to absorbed water [34]. After peroxidase extraction, the band at 3300 cm-1 becomes intensified, while the band at 2907 cm-1 is sharper (see Figure 3(a)). Furthermore, the band at 1602 cm-1 observed in the SH+PO spectrum shifts to a higher wavenumber (1611 cm-1) after peroxidase removal. This shift is accompanied by a change in the band's shape and intensity, indicating significant changes in the soybean hulls' surface functionality. Moreover, the bands at 1540 and 1367 cm-1 related to the lignin backbone become stronger, suggesting enrichment of the lignin content after the extraction of the peptide-based enzyme. Empirical ratios (LOI, HBI, and CLL [31]) calculated from the ATR-FTIR spectra can be used to explain the changes that the cellulose and lignin moieties (within the soybean hulls) underwent upon enzyme extraction. LOI is closely related to the amount of crystalline cellulose moieties, i.e., it represents the ordered regions perpendicular to the chain direction, which is greatly influenced by the chemical processing of cellulose [36], while HBI refers to the degree of regularity of the intramolecular hydrogen bonds. Upon enzyme extraction, both the LOI and HBI values are lowered, indicating a less ordered cellulose structure with lower hydrogen bonding intensity between neighboring cellulose chains and hence lower crystallinity of SH-PO. On the other hand, CLL is related to the amount of lignin with condensed and cross-linked properties. The higher CLL value calculated for SH-PO (0.538) in comparison with that of SH+PO (0.497) could be ascribed to the higher cross-linking and condensation of lignin chains caused by the earlier mentioned collapse of the soybean hulls after the enzyme extraction treatment. The changes in the soybean hulls' surface chemistry after the peroxidase extraction were further proven by the measurement of the SH+PO and SH-PO zeta potential. The results presented in Figure 3(b) show that both adsorbents' surfaces are negatively charged at pH values above 2.36 (SH+PO) and 2.55 (SH-PO), respectively, with the surface of SH-PO being less negative.
3.2. The Influence of Solution pH on the Removal of Metal Ions and Dyes.
Among the different adsorption variables, the effect of the solution's initial pH value on pollutant removal was considered first (see Figure 4), since it affects both the solubility and ionization state of the investigated cadmium and copper salts and dyes, as well as the soybean hulls' surface charge [37]. In highly acidic conditions, i.e., at a pH of 3.00, the affinity of SH+PO and SH-PO for binding Cd2+ and Cu2+ is low (removal ranged between 57 and 65%) due to the excess H+ competing with metal ions for the active sites on the soybean hull surface. As is evident from Figure 4, for both studied adsorbents (SH+PO and SH-PO), maximal heavy metal removal (between 71 and 82%) was achieved at pH 5.00. With a further increase of the solution's pH, the higher concentration of OH- in the solution leads to the precipitation of Cd2+ and Cu2+ in the form of hydroxides [38], which hinders the adsorption and thus lowers Cd2+ and Cu2+ removal. Based on the presented results, and in order to achieve maximum Cd2+ and Cu2+ removal by soybean hulls without incurring precipitation, pH 5.00 was chosen as optimal and used for further experiments. Sanni et al. [39] also found pH 5.00 to be optimal for Cu2+ removal by citric acid-modified soybean hulls, reaching 40% Cu2+ removal, which is much lower than the results obtained in the current study.
Besides the fact that both studied adsorbents behaved similarly with regard to the solution's pH, it has to be emphasized that the removal of Cd2+ and Cu2+ increased after the peroxidase's extraction by 14.6 and 10.9%, respectively. Although zeta potential measurements showed that the SH-PO negative charge is lower than that of SH+PO, its adsorption capacity for the metal ions is surprisingly reinforced. Such differences could be explained by the changes in the soybean hulls' surface physico-chemical properties that occurred after the peroxidase removal. Tummino et al. [34] ascribed the higher adsorption potential of SH-PO to the intrinsic metal ions present in the enzyme competing with metal ions for adsorbent active sites. Furthermore, the surface chemistry of the soybean hulls is modified after peroxidase extraction in terms of the availability of the groups that are capable of, and preferential for, metal binding. As discussed previously, the constituents of the lignocellulosic soybean hulls (primarily cellulose and lignin) are less ordered after peroxidase extraction. For this reason, the functional groups that were responsible for the interactions between these components are no longer constrained and become accessible for interactions with metal ions.
Taking into account that the adsorption of Cu2+ and Cd2+ does not correlate with the negative value of the zeta potential, it can be concluded that it is not solely governed by electrostatic interactions with the negatively charged surface. Considering the involvement of different soybean hull groups (proven by the ATR-FTIR spectra, Supplementary Material, see Figure S3), it could be suggested that metals are adsorbed onto soybean hulls via different binding mechanisms such as surface complexation (involving hydroxyl and carboxyl groups), ion exchange (between adsorbent H+ and metal ions, also established via hydroxyl and carboxyl groups) [40], and cation-π interactions (between the electron-rich phenyl groups of lignin and metal ions) [41] (see Figure 5, displayed for Cu2+; the same applies to Cd2+). Furthermore, the physical adsorption of the metal ions onto the soybean hulls' surface should also be taken into account. Similar observations were made in our previously published paper concerning the adsorption of Cd2+ onto lignocellulosic wood waste [2].
In the case of the adsorption of the RY 39 and AB 225 dyes on SH+PO and SH-PO, the optimal dye removal was obtained in an acidic medium (pH = 3.00) (see Figure 6). At pH < 3, lower removal percentages were observed, while at higher pH, a drastic decline is noticed (except for the removal of AB 225 by SH-PO). Taking into account the microstate distribution charts for RY 39 and AB 225 (Supplementary material, S2c), it can be concluded that in the investigated pH range (2-5) the dyes do not undergo structural changes (the dominant microstates are present at almost 100%), which further implies that adsorption is principally governed by the pH-induced changes of the adsorbent surface. This is evidenced by the fact that the dye removal trends comply with the soybean hulls' zeta potential (see Figure 3(b)). Furthermore, considering dye removal (see Figure 6), it is clear that similar trends are obtained, indicating that analogous adsorbent functionalities are responsible for the dye-adsorbent interactions for both dyes. As presented in Figure 3(b), SH+PO and SH-PO are positively charged at pH below 2.36 and 2.55, respectively, while above these values, the adsorbents' surfaces are negative. The increased adsorption of the dyes on going from pH = 2 to 3 suggests that stronger interactions of the dyes are established with a negatively charged adsorbent. According to the predicted pKa values of the dyes (Supplementary material, see Figures S1 and S2), it is clear that at pH 3, both dyes bear negative charges (i.e., their sulfonic groups are deprotonated). Since the surfaces of SH+PO and SH-PO at this pH value are negative (see Figure 3(b)), it is justified to assume that the electrostatic repulsion between dyes and adsorbents dictates the orientation of the dye molecules (see Figure 7). This happens in such a way that the negative dye charge is turned away from the negative adsorbent surface, enabling groups on the opposite side to interact with the adsorbent surface through multiple hydrogen interactions. A further increase of the solution's pH value causes a decline in dye removal, probably due to the competition of the negatively charged dyes with the solution's OH- groups, leading to a more disordered system that prevents the dyes from orienting in such a way as to achieve effective interactions with the adsorbent. Also, the swelling of the soybean hulls lowers the zeta potential, i.e., its negative value, affecting the degree of effectively established interactions. In general, it can be suggested that dye adsorption is closely related to the extent of the adsorbent's negative charge. Comparing the adsorption capacities of SH+PO and SH-PO at pH = 3, no significant improvement is attained after the peroxidase extraction (see Figure 6). The differences between the removal of these dyes at pH > 4 could be ascribed to their ionization states (Supplementary Material, see Figures S1 and S2), wherein RY 39 bears a double negative charge and, as such, its repulsion with the excess of OH- is more pronounced with respect to AB 225 (having one anionic group), thus suppressing the effective adsorption of RY 39.
Detailed observations regarding the soybean hulls' ATR-FTIR spectra recorded before and after dye adsorption (Supplementary Material, see Figure S4) led us to conclude that dye adsorption is governed by the repulsion between its negative charges and the adsorbent surface and is attained through the cooperation of different interactions such as hydrogen bonds, π-π stacking, and n-π interactions [15] (see Figure 7, displayed for AB 225; the same applies to RY 39). Multiple strong hydrogen bonds can be established between dye carbonyl groups and O-H groups (hydroxyl and carboxyl), as well as between dye N-H groups and adsorbent surface carbonyl groups (aldehyde, carbonyl, and carboxylate) (see Figure 7). As both dyes bear aromatic rings, it is justified to presume that π-π stacking interactions are formed with the aromatic monomer units (guaiacyl, syringyl, and p-hydroxyphenyl) of the lignin moieties [15, 42]. Considering the fact that adsorbent surface groups bear electron-rich oxygen atoms (such as O-H), the possible interaction mechanism may also include n-π interactions with the aromatic rings of the dyes [43].
For all further experiments, we decided to use only the soybean hulls without peroxidase (SH-PO) for the following reasons:
(1) The observed higher metal ion removal by SH-PO than by SH+PO (see Figure 4).
(2) To "close the loop" and reduce the disposal cost after peroxidase extraction. Namely, soybean peroxidase is widely used for the decolorization of textile industry wastewater [6, 44, 45] as well as for the degradation of phenolic compounds in wastewater [46]. However, in the available literature, the authors did not discuss the soybean hulls' disposal after the peroxidase extraction, i.e., the soybean hulls remain as waste. In the current study, the soybean hulls were recovered after the peroxidase extraction.
(3) After the peroxidase extraction, the secondary pollution, i.e., the leaching of organic and inorganic matter into demineralized water from the adsorbent, is significantly lower in comparison with the adsorbent that did not undergo aqueous peroxidase extraction. More precisely, a six times lower dichromate index (265.2 vs. 43.7 mg O2/l) and a five times lower total metal content (15.3 vs. 3.1 μg/l, Supplementary Material, see Table S1) were obtained for the sample which underwent peroxidase extraction compared with the sample from which peroxidase was not extracted.
3.3. Effect of Adsorbent Particle Size on the Removal of Metal Ions and Dyes.
As stated in the experimental part of this manuscript, the second step of the adsorption optimization was to assess the effect of the SH-PO particle size (710-1000 and 1000-1500 μm, i.e., samples SH-PO1 and SH-PO2) on the removal of the studied pollutants. From the results listed in Table 2, it is clear that Cd2+ and Cu2+ removal is slightly higher (by 4.2 and 11.0%, respectively) when the smaller adsorbent fraction was used (i.e., SH-PO1). Such results are logical since, for a given mass of adsorbent, smaller adsorbent particles have a higher effective surface area and therefore a higher number of available sites capable of binding metal ions [47, 48]. Interestingly, the soybean hulls' particle size did not have an impact on the percentage of removed dyes. Although the smaller fraction bears more active sites, no improvement in adsorption capacity is observed. Large dye molecules (Supplementary material, Figures S1a and S2a) occupy a certain number of active sites, while at the same time they block unoccupied active sites due to their size, thus hindering the adsorption of other dye molecules. The obtained results are in line with the results presented by Rizzuti and Lancaster [49] for the removal of Remazol Brilliant Blue R by the same adsorbent. Taking into account the results presented in Table 2, further kinetic, isotherm, thermodynamic and competitive experiments were conducted only on the sample SH-PO1.
3.4. Kinetic Studies.
To gain detailed information about the pollutants' adsorption dynamics, their kinetics were studied, see Figure 8(a). At the beginning of the adsorption process, the pollutant removal sharply increased as time proceeded. This is due to the higher number of free sites able to bind metal ions or dyes. This phenomenon is more prominent for dye removal than for metal ion removal. Namely, after 30 min of contact time, about 92% and 88% of RY 39 and AB 225 were removed, while in the same time, about 80% and 69% of Cd2+ and Cu2+ were removed. By extending the contact time, the pollutant removal increased whereas the number of available sites decreased, reaching a plateau. Additionally, the effect of repulsive forces between pollutants in the solution and those already adsorbed should not be neglected, since, with extended contact time, they could hinder the pollutant's diffusion into the adsorbent structure [14]. Furthermore, the data presented in Figure 8(a) indicate that the adsorption process can be considered rapid, since the equilibrium of dye removal was attained after 90 min of contact time, while in the case of metal ion removal, 120 min was sufficient for reaching equilibrium.
With the aim of obtaining more information about the adsorption processes studied in this work, pseudo-first- and pseudo-second-order kinetic models were tested. The linear fitting of the log(qe − qt) vs. time and t/qt vs. time plots (see Figures 8(b) and 8(c)) provided the kinetic parameters (qe,cal, k1, and k2) shown in Table 3. To select the kinetic model that best describes the adsorption processes, the corresponding coefficients of determination (R2), together with the comparison between the experimental adsorption capacities at equilibrium (qe,exp) and those obtained by applying the kinetic equations (qe,cal), were considered in parallel. Namely, from Figures 8(b) and 8(c) and Table 3, it is evident that the adsorption of metal ions and dyes follows a pseudo-second-order reaction, whereby the qe,cal values for the pseudo-second-order model comply well with the experimentally obtained values. According to Ayele et al. [50], the primary rate-determining step of pseudo-second-order adsorption relies on chemisorption, or chemical adsorption.
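For reference, the two linearized fits can be reproduced with a short Python sketch; the time/uptake arrays are assumed experimental inputs, and points at or beyond equilibrium must be excluded from the pseudo-first-order fit because log(qe − qt) is undefined there.

```python
import numpy as np

def fit_pseudo_first_order(t, qt, qe_exp):
    """Lagergren fit: log(qe - qt) = log(qe) - (k1/2.303)*t.
    Points with qt >= qe_exp must be excluded beforehand."""
    y = np.log10(qe_exp - qt)
    slope, intercept = np.polyfit(t, y, 1)
    return -2.303 * slope, 10.0 ** intercept   # k1 (1/min), qe,cal (mg/g)

def fit_pseudo_second_order(t, qt):
    """Ho-McKay fit: t/qt = 1/(k2*qe**2) + t/qe."""
    slope, intercept = np.polyfit(t, t / qt, 1)
    qe_cal = 1.0 / slope                       # mg/g
    k2 = slope ** 2 / intercept                # g/(mg min)
    return k2, qe_cal
```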
The results presented in this section are comparable with those given in the literature. Namely, the pseudo-second-order model is the most suitable for describing Pb2+ adsorption onto 3-aminopropyltriethoxysilane-modified soybean and peanut hulls [51], Cd2+ and Pb2+ adsorption from single and binary mixtures onto soybean hulls [52], and adsorption of the azo dye safranin onto soybean hulls [53]. It has to be underlined that in the adsorption experiments carried out in the current study, special attention was paid to secondary pollution during adsorption, i.e., the peroxidase was first extracted and only then were the soybean hulls used as an adsorbent, which is not the case in the previously mentioned papers.
Since the pseudo-first- and pseudo-second-order kinetic models cannot identify the diffusion mechanism during the adsorption process, the kinetic data were further examined with the intraparticle diffusion model (see Table 4). Given that several factors participate in the pollutants' adsorption, the SH-PO1 intraparticle diffusion plots are not linear over the whole t1/2 range, making it necessary to separate them into three linear zones, Figure 9.
The first linear zone, observed in the low t1/2 range, is assigned to external surface adsorption, in which the diffusion rate constant (k1) is higher than in the other two zones (k2 and k3), i.e., quite fast adsorption occurred due to the well-shaken system. The second linear zone, noticed in the intermediate t1/2 range, is attributed to intraparticle diffusion in the macropores. The last linear zone (k3 = 0.0001-0.0106 mg/g min1/2, see Table 4) is attributed to micropore diffusion, in which intraparticle diffusion starts to slow down as a result of the low solute concentration in the solution. The greatest effect of the boundary layer on the adsorption was noticed in the third zone, since C3 is higher than C1 and C2 [47]. The observed increased resistance during the second and third phases (i.e., C1 < C2 < C3) can be explained by the fact that adsorbate diffusion and its kinetics are governed by Fick's law, i.e., the transport rate is a function of the adsorbate gradient, which is highest during the first phase of the adsorption experiment and decreases as the experiment proceeds [54]. The negative values of C1 in the case of RY 39 and AB 225 indicate the effect of external film diffusion resistance in the initial stage of adsorption [55]. Moreover, the lines do not pass through the origin, and therefore intraparticle diffusion is not the only rate-limiting step [56]; other processes may control the adsorption rate of Cd2+, Cu2+, RY 39, or AB 225 onto SH-PO1.
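A sketch of this zone-wise fitting is shown below; the zone boundaries are hypothetical index ranges that would in practice be chosen by inspecting the qt vs. t1/2 plot in Figure 9.

```python
import numpy as np

def fit_intraparticle_zones(t, qt, zones):
    """Fit qt = ki*sqrt(t) + C separately over each linear zone.

    zones: list of (start, stop) index pairs, one per segment; the
    boundaries are chosen by inspecting the qt vs. sqrt(t) plot."""
    params = []
    for start, stop in zones:
        x = np.sqrt(t[start:stop])
        k_i, C = np.polyfit(x, qt[start:stop], 1)
        params.append((k_i, C))   # (mg/g min^0.5, mg/g)
    return params
```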
3.5. Isotherm Studies.
To understand more profoundly the interactions between the studied pollutants and SH-PO1, equilibrium adsorption experiments were performed at different initial pollutant concentrations. As given in Figure 10(a), the SH-PO1 uptake capacity increases with increasing initial pollutant concentration, which could be ascribed to the higher driving force for mass transfer at a high initial pollutant concentration. Bulut and Aydın [57] and Ivanovska et al. [2, 58] observed the same behavior during isotherm experiments for methylene blue adsorption onto wheat shells, cadmium ion adsorption onto wood waste, and nickel, copper, and zinc ion adsorption onto jute fabrics, respectively. It has to be noted that at initial pollutant concentrations above 25 mg/l, SH-PO1 possessed around 25% higher uptake capacity for the metal ions than for the dyes (see Figure 10(a)).
The obtained equilibrium data were further analyzed using the linearized forms of the Langmuir and Freundlich isotherm models (see Figures 10(b) and 10(c) and Table 5). The data fitting based on the Langmuir isotherm showed higher R2 values for all the pollutants. Thus, it can be stated that this model provides a better description of the studied system than the Freundlich model. The better fit with the Langmuir model implies monolayer pollutant adsorption onto an adsorbent surface having a finite number of uniformly distributed adsorption sites and homogeneous adsorption [53], which is in accordance with the results presented by Yu et al. [59].
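The two linearized isotherm fits reduce to simple least-squares regressions, sketched below with assumed equilibrium data arrays:

```python
import numpy as np

def fit_langmuir(Ce, qe):
    """Linearized Langmuir: Ce/qe = Ce/qm + 1/(qm*KL)."""
    slope, intercept = np.polyfit(Ce, Ce / qe, 1)
    qm = 1.0 / slope            # mg/g
    KL = slope / intercept      # l/mg, since intercept = 1/(qm*KL)
    return qm, KL

def fit_freundlich(Ce, qe):
    """Linearized Freundlich: ln(qe) = ln(Kf) + (1/n)*ln(Ce)."""
    slope, intercept = np.polyfit(np.log(Ce), np.log(qe), 1)
    return np.exp(intercept), slope   # Kf, 1/n
```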
The theoretical maximal adsorption capacities (qm) of SH-PO1 determined by Langmuir modeling are 21.10, 20.54, 16.54, and 17.23 mg/g for Cd2+, Cu2+, RY 39, and AB 225, respectively (see Table 5). Depending on the Langmuir constant (KL) value, the adsorption process can be evaluated as irreversible (KL = 0), favorable (0 < KL < 1), linear (KL = 1), or unfavorable (KL > 1). The values of KL for all studied pollutants ranged between 0 and 1, pointing to favorable adsorption. The slightly higher values of KL observed for metal ions than for dyes suggest a slightly stronger interaction between SH-PO1 and metal ions than between SH-PO1 and dyes.
3.6. Thermodynamic Studies.
Thermodynamic experiments were carried out at three different temperatures, 25, 35, and 45 °C, i.e., 298.15, 308.15, and 318.15 K, while the other optimized adsorption parameters are listed in Table 1. The results presented in Figure 11 revealed that by increasing the temperature from 25 to 45 °C, the adsorption of all studied pollutants increases, which could be explained by the increased availability of SH-PO1 surface sites and higher pollutant mobility at elevated temperatures [60]. This behavior is most prominent for AB 225; its adsorption capacity increased by about 20%.
From the temperature-dependent study of Cd2+, Cu2+, RY 39, and AB 225 adsorption onto SH-PO1, the thermodynamic parameters standard enthalpy ΔH0, standard entropy ΔS0, and standard Gibbs energy change ΔG0 were calculated (see Table 6). At the system equilibrium, the obtained positive values of ΔS0 imply an increase of randomness at the solid-solution interface, indicating that the adsorption of all considered pollutants is an entropy-driven process [61]. The positive ΔS0 in combination with the negative ΔG0 suggests that their adsorption onto SH-PO1 is feasible and spontaneous [16]. Furthermore, the process is of endothermic character since the parameter ΔH0 is positive.
3.7. Competitive Adsorption of Metal Ions and Dyes.
In real conditions, most wastewater contains a mixture of different pollutants such as metal ions and dyes, and therefore the assessment of the adsorbent's overall performance is of great importance. To examine the simultaneous competitive adsorption of metals and dyes, and of the two tested metals and the two tested dyes, onto SH-PO1, ten binary mixtures were prepared (see Table 7). The Cd2+ removal from the binary mixtures containing RY 39 or AB 225 at a pH of 5.00 is almost the same as in the case of single-pollutant adsorption (see Figure 4(a)). This is also observed for Cu2+ (see Figure 4(b)), where somewhat higher metal removal was observed during the competitive adsorption experiments. However, in the metal binary mixture, Cu2+ is more competitive than Cd2+ (removal of 71.01 vs. 20.96%). This is expected due to the significantly lower molecular mass of Cu2+ as well as its higher effective ionic radius and electronegativity [62].
When the dyes' removal from the single aqueous solutions (see Figure 6) and from the binary mixtures (see Table 7) is compared, it is clear that RY 39 and AB 225 removal from the binary mixtures (at a pH of 3.00) is not affected by the metal cosolute. The dye binary mixture results in lower adsorption of RY 39 with respect to the adsorption from single-dye solutions. This could be ascribed to the fact that the RY 39 molecules are more rigid and bear a double negative charge, whereas the AB 225 molecule is smaller and can thus diffuse more easily through the solution. Surprisingly, the removal of RY 39 is significantly reinforced (more than 8 times) at pH = 5 in the presence of a metal cosolute. One explanation could be that the positive metal ions compensate for the negative charge of the SH-PO1 surface and in that way prevent repulsion between the negative dye and the soybean hull surface. On these grounds, RY 39 can probably establish stronger interactions, reflected in its better adsorption onto SH-PO1. In turn, the presence of a cosolute (Cd2+, Cu2+, or RY 39) contributed to higher AB 225 removal due to the previously described compensating effect of the metal ions.
To summarize, SH-PO1 is the most efficient adsorbent for the mixture of AB 225 and Cu2+, independent of the solution pH. Additionally, the lowest adsorbent affinity was registered for the mixture containing both metal ions. These very interesting results promote the novel valorization of soybean hulls after peroxidase extraction as adsorbents for inorganic and organic pollutants and bring them into the circular economy.
3.8. Advantages and Future Perspective of Soybean Hulls. It is worth discussing here the advantages of the studied SH-PO over well-known conventional adsorbents such as activated carbons, ion-exchange resins, and zeolites. SH-PO and other nonconventional adsorbents are competitive with the mentioned conventional adsorbents since they are low-cost and abundant and have high affinity, capacity, rate of adsorption, and selectivity over a range of pollutant concentrations [63, 64]. Having a wide variety of functional groups, nonconventional adsorbents such as SH-PO provide intrinsic chelating and complexing properties for different pollutants, including heavy metals, dyes, and aromatic compounds, and can reduce their concentrations to ppb levels. The regeneration of nonconventional adsorbents in washing solvents is very easy since, as mentioned, the interaction between the pollutant and the adsorbent is driven mainly by electrostatic attraction, stacking interactions, and ion exchange. On the other hand, the utilization of activated carbons as adsorbents is restricted by the high cost of the precursor (such as petroleum residues and some commercial polymers) and by rapid saturation, which imposes the necessity for their regeneration (which is expensive, complicated, results in loss of the adsorbent, and is energy-consuming) or incineration [65]. Activated carbons are nonselective and ineffective for disperse and vat dyes, requiring the utilization of complexing agents to improve their removal performance [66]. Furthermore, most commercial ion-exchange resins are derived from petroleum-based raw materials using processing chemistry that is not always safe and environmentally friendly. One of the biggest drawbacks of this kind of adsorbent is poor contact with the aqueous solution, which requires further modification and/or pretreatment with activation solvents. Generally speaking, activated carbons and synthetic resins suffer from a lack of selectivity, and their applications are often limited to low contaminant concentrations. Zeolites represent another group of conventional adsorbents used for various pollutants; however, they are characterized by low selectivity, while their microporous structure makes them unsuitable for bulky molecules [67].
The investigation of adsorbent stability, reusability, or regeneration was not within the focus of the current study, since such procedures decrease the adsorbent capacity and generate new wastewater, making them inappropriate for industrial wastewater treatment [68]. Despite SH-PO1's large adsorption capacities for different pollutants, the adsorption process produces solid waste that can cause secondary pollution. In light of that, the lifecycle of SH-PO1 with adsorbed pollutants could be extended by further utilization in the production of bio-based composites to be used as building materials, or by carbonization for use as an alternative to porous carbon and as a cathode matrix for lithium-sulfur batteries [69].
Conclusions
The experiments conducted in this study confirmed that soybean hulls can be successfully recovered after peroxidase extraction and used as adsorbents for metal ions and dyes. The soybean hulls' adsorption potential for Cd2+, Cu2+, RY 39, and AB 225 from a single-pollutant solution changes depending on the pH, peroxidase extraction, adsorbent particle size, contact time, initial pollutant concentration, and temperature. Before peroxidase extraction, the soybean hulls are capable of removing 72% Cd2+ and 71% Cu2+ (at a pH of 5.00) and 81% RY 39 and 73% AB 225 (at a pH of 3.00). After peroxidase extraction, the removal of Cd2+ and Cu2+ increased by 14.6 and 10.9%, respectively. Soybean hulls without peroxidase caused significantly lower secondary pollution (i.e., a six times lower dichromate index (265.2 vs. 43.7 mg O2/l) and a five times lower total metal content (15.3 vs. 3.1 μg/l)) than those with peroxidase.
The adsorbent particle size did not affect the dye removal; however, Cd2+ and Cu2+ removal slightly increased when the smaller adsorbent fraction (710-1000 μm) was used. The adsorption of metal ions and dyes can be considered rapid since equilibrium was attained after 120 min and 90 min of contact time, respectively. Furthermore, after 30 min of contact time, 92% and 88% of RY 39 and AB 225 were removed, while in the same time, 80% and 69% of Cd2+ and Cu2+ were removed. The adsorption of all tested pollutants follows a pseudo-second-order reaction (through the fast adsorption, intraparticle diffusion, and final equilibrium stages), indicating that chemical adsorption is the rate-limiting step. On the other hand, the better fit with the Langmuir model implies a uniform distribution of adsorption sites and the presence of a single layer of pollutant on the soybean hulls' surfaces. The maximal adsorption capacities determined by the Langmuir model are 21.10, 20.54, 16.54, and 17.23 mg/g for Cd2+, Cu2+, RY 39, and AB 225, respectively. The calculated thermodynamic parameters suggest that the adsorption of all pollutants is spontaneous and of endothermic character. Taking into account that real wastewater contains a mixture of different pollutants, the simultaneous competitive adsorption of metals and dyes from binary mixtures was studied. The obtained results revealed that soybean hulls are the most efficient adsorbent for the mixture of AB 225 and Cu2+. The findings of this study contribute to a novel valorization of soybean hulls and bring them into the circular economy concept.
Figure 4: Effect of solution pH on (a) Cd2+ and (b) Cu2+ removal by soybean hulls.
Figure 5: Graphical interpretation of the possible interaction between Cu2+ and the soybean hulls' surface.
Figure 6: Effect of solution pH on (a) RY 39 and (b) AB 225 removal by soybean hulls.
Figure 7: Graphical interpretation of the possible interaction of AB 225 and the soybean hulls' surface.
Figure 8: (a) Adsorption kinetic data and linear fits with the (b) pseudo-first- and (c) pseudo-second-order kinetic models.
Figure 9: Intraparticle diffusion plots of the metal ions and dyes onto SH-PO1.
Figure 10: (a) Equilibrium pollutant adsorption onto SH-PO1 and (b) Langmuir and (c) Freundlich adsorption isotherms with the linear fit of experimental adsorption data for different pollutants.
Figure 11: The effect of temperature on the pollutants' adsorption onto SH-PO1.
Table 2: The influence of soybean hulls' particle sizes on the removal of pollutants.
Table 3: Kinetic models' equations and kinetic parameters obtained by the pseudo-first- and pseudo-second-order kinetic models for metal ion and dye adsorption onto SH-PO1. qe = qe,exp and qt (mg/g) are the amounts of pollutant adsorbed per gram of adsorbent at equilibrium and at time t (min), qe,cal is the calculated amount of pollutant adsorbed per gram of adsorbent (mg/g), k1 (1/min) is the pseudo-first-order rate constant, and k2 (g/mg min) is the pseudo-second-order rate constant.
Table 4: Intraparticle diffusion model's equation and kinetic parameters for metal ion and dye adsorption onto SH-PO1. Intraparticle diffusion model: qt = ki·t1/2 + C, where qt (mg/g) is the amount of pollutant adsorbed per gram of adsorbent at time t (min), ki (mg/g min1/2) is the intraparticle diffusion rate constant, and C is a parameter proportional to the extent of the boundary layer thickness (mg/g).
Table 5: Isotherm models' equations and obtained isotherm parameters for metal ion and dye adsorption onto SH-PO1. Langmuir: Ce/qe = Ce/qm + 1/(qm·KL); Freundlich: ln(qe) = ln(Kf) + (1/n)·ln(Ce), where qe = qe,exp (mg/g) is the equilibrium amount of pollutant per gram of adsorbent, qm (mg/g) is the maximal amount of adsorbed pollutant per gram of adsorbent, KL (l/mg) is the Langmuir constant, Ce (mg/l) is the equilibrium pollutant concentration in the solution, Kf ((mg/g)/(l/mg)1/n) is the Freundlich constant, and 1/n is a constant related to the adsorbent surface heterogeneity.
Table 6: Thermodynamic parameters for adsorption of different pollutants onto SH-PO1.
Data on administration of cyclosporine, nicorandil, metoprolol on reperfusion related outcomes in ST-segment Elevation Myocardial Infarction treated with percutaneous coronary intervention
Mortality and morbidity in patients with ST elevation myocardial infarction (STEMI) treated with primary percutaneous coronary intervention (PCI) are still high [1]. A large amount of the myocardial damage is related to the mitochondrial events happening during reperfusion [2]. Several drugs directly and indirectly targeting mitochondria have been administered at the time of PCI, and their effects on fatal (all-cause mortality, cardiovascular (CV) death) and non-fatal (hospital readmission for heart failure (HF)) outcomes have been tested, showing conflicting results [3-16]. Data from 15 trials have been pooled with the aim of analyzing the effect of drug administration versus placebo on outcome [17]. Subgroup analyses are presented here: considering only randomized clinical trials (RCTs) on cyclosporine or nicorandil [3-5, 9-11], excluding a trial on metoprolol [12], and comparing trials with follow-up length <12 months versus those with longer follow-up [3-16]. This article describes data related to the article titled "Clinical Benefit of Drugs Targeting Mitochondrial Function as an Adjunct to Reperfusion in ST-segment Elevation Myocardial Infarction: a Meta-Analysis of Randomized Clinical Trials" [17].
Subject area: Clinical research; meta-analysis
More specific subject area: Medicine; Cardiology; Reperfusion injury
Type of data: Figure
Value of the data
The use of cyclosporine or nicorandil at the time of primary percutaneous coronary angioplasty (PCI) shows no potential benefit on fatal (all-cause mortality, cardiovascular (CV) death) or non-fatal (hospital readmission for heart failure (HF)) outcomes.
After excluding a trial on metoprolol [12], which has a complex mechanism of action not targeting only mitochondrial function, the pooled analysis of fatal and non-fatal outcomes of the remaining 14 studies did not change.
The analysis of follow-up length shows an effect on hospital readmission for HF in trials with longer follow-up.
These additional analyses should form the basis for planning further randomized clinical trials (RCTs) on reperfusion injury in ST elevation myocardial infarction (STEMI) patients undergoing PCI, focusing attention on other molecular mitochondrial targets.
New RCTs on reperfusion injury should include a longer follow-up analysis.
Data
Considering only trials focused on cyclosporine versus placebo, the HRs for CV mortality, all-cause mortality and hospital readmission for HF were not statistically significant (p = 0.33, p = 0.16 and p = 0.95, respectively) (Fig. 1). The same results were obtained considering only trials on nicorandil (p = 0.06 for CV mortality; p = 0.07 for all-cause death; p = 0.2 for hospital readmission for HF) (Fig. 2). After the exclusion of the study on metoprolol from the pooled analysis of trials with an indirect/unspecific mechanism of action against a mitochondrial component/pathway, the HRs for CV death, all-cause death and hospital readmission for HF were significantly reduced (p = 0.03, p = 0.008 and p = 0.0001, respectively) (Fig. 3). Finally, the analysis of follow-up across all the studies included in the meta-analysis showed a reduction in hospital readmission for HF in studies with follow-up length ≥12 months (HR 0.46; 95% CI 0.45-0.92, p = 0.03) (Figs. 4-6).
Search strategy
A systematic review and meta-analysis was performed following Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) criteria [18][19][20][21]. The protocol of this study was published on PROSPERO (CRD42016033085).
Selection criteria
A detailed description of the selection criteria for the papers is provided elsewhere [17]. In particular, we focused on: i) RCTs; ii) enrolling STEMI patients; iii) with a reperfusion strategy by primary PCI; iv) comparing an agent/drug against reperfusion injury vs. placebo/gold-standard treatment.
Data abstraction, endpoints, contact with authors
We performed a pre-specified stratification of studies according to the mechanism of action targeting a mitochondrial component/pathway (direct/selective vs. indirect/unspecific), following a recent overview [22]. The analyses were performed according to the following criteria: i) administration of cyclosporine; ii) administration of nicorandil; iii) follow-up length <12 vs. ≥12 months; iv) indirect/unspecific drugs after exclusion of the study of Pizarro et al. [12]. The primary endpoint of the analysis was the incidence of cardiovascular death. Secondary endpoints were all-cause death and hospital readmission for heart failure (HF).
Data analysis and synthesis
The endpoints were expressed as odds ratios (OR). Point estimates and standard errors were calculated and combined by the generic inverse variance method [23], computing risk estimates with 95% confidence intervals according to the logarithmic transformation of the OR. A random effects model was used. Statistical heterogeneity was assessed with Cochran's Q test and the I² statistic [24]. To test the difference between subgroup analyses, the Chi² test was used. Prometa (Internovi, Cesena, Italy) and RevMan 5 (The Cochrane Collaboration, The Nordic Cochrane Centre, Copenhagen, Denmark) software were used for statistical analyses.

Fig. 3. Forest plots of cardiovascular mortality, all-cause mortality and hospital readmission for HF in studies randomizing drugs with an indirect/unspecific mechanism of action against a mitochondrial component/pathway vs. placebo, excluding the study on metoprolol [12]. ANP: atrial natriuretic peptide. NIC: nicorandil. CV: cardiovascular. HF: heart failure. hosp: hospitalization.
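To make the pooling step concrete, the following is a minimal sketch of the generic inverse variance method with a DerSimonian-Laird random effects model on log-transformed ORs; the per-study values are hypothetical placeholders, not data from the trials analysed here:

```python
import numpy as np

# Hypothetical per-study log-odds ratios and their standard errors (placeholders)
log_or = np.array([-0.35, 0.10, -0.20, -0.50])
se = np.array([0.30, 0.25, 0.40, 0.35])

# Fixed-effect pooling by generic inverse variance
w = 1.0 / se**2
pooled_fixed = np.sum(w * log_or) / np.sum(w)

# Cochran's Q and I-squared heterogeneity statistics
q = np.sum(w * (log_or - pooled_fixed) ** 2)
df = len(log_or) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# DerSimonian-Laird estimate of the between-study variance tau-squared
tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

# Random-effects pooled estimate with a 95% confidence interval on the OR scale
w_re = 1.0 / (se**2 + tau2)
pooled_re = np.sum(w_re * log_or) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
lo, hi = np.exp(pooled_re - 1.96 * se_re), np.exp(pooled_re + 1.96 * se_re)

print(f"Pooled OR = {np.exp(pooled_re):.2f} (95% CI {lo:.2f}-{hi:.2f}), I2 = {i2:.0f}%")
```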
|
v3-fos-license
|
2018-02-16T17:55:11.025Z
|
2017-02-24T00:00:00.000
|
42278622
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://biodiscovery.pensoft.net/article/11207/download/pdf/",
"pdf_hash": "36619f2b577da60c4dfda23321aded31c7b432a9",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41886",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "36619f2b577da60c4dfda23321aded31c7b432a9",
"year": 2017
}
|
pes2o/s2orc
|
Phosphorylase Kinase Inhibition Therapy in Burns and Scalds
Severe burns and scalds almost always result in unsightly hypertrophic scarring. Among the important processes involved in scarring are fibroblast formation and the transformation of fibroblasts into myofibroblasts. Myofibroblasts contain α-smooth muscle actin, which has contractile properties and can lead to wound contraction and hypertrophic scarring. Phosphorylase kinase (PhK), expressed within 5 mins of injury, is among the earliest enzymes released after tissue damage. It is responsible for activation of NF-kB, which in turn activates over 200 different genes related to inflammation, fibroblastic proliferation, myofibroblast conversion, and eventual scar tissue formation. The sequence and approximate timing of events following injury include the following: activation of PhK (5 mins), followed by the appearance of neutrophils (30 mins), macrophages (hours to days), fibroblasts (1 week) and myofibroblasts (2 weeks). Cytokines and growth factors secreted by macrophages include fibroblast growth factor (FGF) and transforming growth factors α and β (TGFα and TGFβ). Fibroblast growth factor is responsible for fibroblastic proliferation, and TGFβ1 for conversion of fibroblasts into myofibroblasts. After thermal injury, the use of topical curcumin, a non-competitive, selective PhK inhibitor that blocks PhK activity upstream of NF-kB activation, was found to be associated with more rapid and improved skin healing, as well as less severe or absent scarring.
Mechanisms of Scarring after Thermal Injury
Wound healing following significant burns and scalds almost always results in hypertrophic scarring. In general, all tissues after injury are acutely infiltrated by many cell types (Leibovich and Ross 1975, Springer 1994, Desmouliere and Gabbiani 1996, Martin 1997), including platelets, inflammatory cells (neutrophils, macrophages and lymphocytes), endothelial cells, fibroblasts and epidermal cells. Wound healing involves a series of complex biologic processes, including inflammatory cell migration, cell proliferation, matrix synthesis, and wound contraction. Cytokines and growth factors secreted by inflammatory cells amplify NF-kB-dependent signaling pathways, resulting in fibroblastic proliferation and increased collagen synthesis. Single strands of collagen fibers polymerize and cross-link to form thick strands of collagen fibers. A proportion of fibroblasts transform into myofibroblasts, which possess strong contractile forces and are mainly responsible for wound contraction and hypertrophic scarring (Montesano and Orci 1988, Clark et al. 1989, Desmouliere et al. 1993, Grinnell 1994, Lee et al. 1999, Abdou et al. 2011). During wound healing, certain factors (Heng 2011), including wound location, wound depth (Dunkin et al. 2007, Wang et al. 2008), infection and genetic predisposition, enhance the frequency of hypertrophic scarring. However, thermal injuries secondary to burns and scalds are particularly prone to produce hypertrophic scarring in the skin (Tredget et al. 2006). Understanding the sequence of events that leads to fibroblast proliferation and myofibroblast conversion, as well as identifying signaling targets on which to focus therapeutic interventions, may lead to more effective prevention of scarring after thermal injuries.
Sequence of Events in Wound Healing Following Injury
We found that one of the earliest cells activated by tissue injury is the Langerhans cell, the activation of which is detected within 5 mins of epidermal injury (Heng and Heng 2014). This is followed by infiltration of the wound by neutrophils as early as 30 mins after injury. Platelets, activated to secure hemostasis, release platelet-derived growth factor (PDGF), which is chemotactic to neutrophils. Nuclear factor-kB (NF-kB), which is detected in injured tissue as early as 30 to 60 mins after injury, is expressed by both activated Langerhans cells and neutrophils. The neutrophils clear debris and bacteria, and secrete adhesion molecules (p-selectin) which assist in the migration of neutrophils into wounds. Macrophages and T lymphocytes, which are observed in wounds within hours to days, form the next wave of inflammatory cells. They secrete cytokines such as interleukin-1 (IL-1) and tumor necrosis factor-α (TNFα), and growth factors such as FGF and transforming growth factors (α and β), which stimulate and amplify fibroblastic proliferation (Martin 1997, Heng 2011). Activated fibroblasts are observed one week following injury, and myofibroblasts usually make their appearance 2 weeks following heat injury.
Role of Fibroblasts and Myofibroblasts in Hypertrophic Scarring
Activated fibroblasts infiltrate the wound about a week after injury. TGF-β, which is chemotactic to activated fibroblasts, also stimulates fibroblastic proliferation (Montesano and Orci 1988, Desmouliere et al. 1993, Grinnell 1994, Martin 1997, Heng 2011). When activated, single strands of collagen fibers polymerize and cross-link to form thick strands of collagen fibers, embedded in a newly synthesized metalloprotein-rich extracellular matrix. Burned patients with hypertrophic scarring have been observed to have polarized IL-4+ Th2 cytokine production, with significantly increased IL-10 and TGF-β production (Tredget et al. 2006). TGF-β is a pleiotropic growth factor secreted by many activated cells, including inflammatory cells such as macrophages. In the process of hypertrophic scarring, inflammatory cells release cytokines such as transforming growth factor-β1 (TGFβ1), which induces the transformation of fibroblasts into myofibroblasts (Montesano and Orci 1988, Desmouliere et al. 1993, Grinnell 1994, Tredget et al. 2006, Heng 2011).
The transformation of fibroblasts into myofibroblasts, which usually occurs 2 weeks after injury, is probably the key event in the formation of hypertrophic scarring. Myofibroblasts express α-smooth muscle actin and possess contractile properties resembling smooth muscle, resulting in the generation of the forces that lead to wound contraction and hypertrophic scarring (Montesano and Orci 1988, Desmouliere et al. 1993, Grinnell 1994). The transformation of fibroblasts into myofibroblasts is stimulated by TGFβ1, which is found at high levels in burns and scalds (Tredget et al. 2006, Heng 2011). The formation of myofibroblasts is also enhanced by wound tension, infection, deep wounds (Dunkin et al. 2007, Wang et al. 2008) and an inherited keloidal tendency (Lee et al. 1999, Abdou et al. 2011, Heng 2011). It is noteworthy that hypertrophic scarring is usually not observed in embryos and fetuses (Adzick et al. 1985, Martin 1997, Ferguson and O'Kane 2004), in whom TGFβ1 is only expressed transiently and at low levels (Martin et al. 1993, Whitby and Ferguson 1991, Martin 1997), with a resulting absence of myofibroblast conversion (Estes et al. 1994, Armstrong and Ferguson 1995). In contrast, hypertrophic scarring is more commonly observed in adult wounds, where TGFβ1 levels are higher and more prolonged (Montesano and Orci 1988, Desmouliere et al. 1993, Grinnell 1994, Martin 1997, Heng 2011).
Signaling Pathways Induced by Injury
The molecule induced by injury and central to the wound healing process appears to be the transcription activator NF-kB, which is detected as early as 30 mins after injury (Bethea et al. 1998). In the non-activated state, NF-kB exists as a pair of dimers (p50/p65) present in the cytoplasm. When NF-kB is activated by injury, phosphorylation occurs at Ser-276, Ser-529 and Ser-536, and the inhibitory molecule (IkBα) is removed so that the p50/p65 dimers can translocate to the nucleus, where they bind to the kB site on the DNA molecule (Verma et al. 1995, Takada et al. 2004, Yang et al. 2004). This is necessary for NF-kB to activate the transcription of over 200 genes involved in cell cycling, cell proliferation, cell migration and anti-apoptosis. The removal of the inhibitory molecule, IkBα, is dependent on activation of its kinase, IkBα kinase. The activation of IkBα kinase requires phosphorylation of multiple sites on the β subunits (Ser 171, Ser 181, Tyr 188, Tyr 199), as well as phosphorylation of the Zn finger on the γ subunit (Yang et al. 2004), which also contains a ubiquitin ligase site. The resultant degradation by ubiquitin-dependent proteolysis frees the NF-kB subunits (Karin and Ben-Neriah 2000, Yang et al. 2004, Palkowitsch et al. 2008), enabling migration of the subunits to their DNA binding site. The synchronization of the phosphorylation events at the above sites required for the activation of both NF-kB and IkBα kinase is facilitated by the dual-specificity enzyme PhK (Yuan et al. 1993, Reddy and Aggarwal 1994, Singh and Aggarwal 1995, Heng et al. 2000, Heng 2010), and inhibited by its inhibitor, curcumin (Reddy and Aggarwal 1994, Singh and Aggarwal 1995, Heng 2010, Heng 2013) (Fig. 1).
Curcumin (PhK Inhibitor) use after Thermal Injury
Phosphorylase kinase is a dual-specificity kinase capable of transferring high-energy phosphate bonds to both serine/threonine- and tyrosine-specific substrates (28). Most protein kinases can only transfer high-energy phosphate bonds to substrates of a single specificity, i.e., either serine/threonine or tyrosine. Phosphorylase kinase is a unique enzyme in which the spatial arrangements of the specificity determinants can be manipulated so that PhK can transfer high-energy phosphate bonds from ATP to substrates of different specificities, such as serine/threonine and tyrosine residues (Yuan et al. 1993, Heng et al. 2000). It achieves its unique ability to act as a dual-specificity protein kinase (Yuan et al. 1993) by means of a hinge joint between its subunits, which permits changes in the size of the substrate binding site, and also through binding to ions such as Mg or Mn, which allows the shape of the substrate binding site to be altered in different planes.
Curcumin, the active ingredient of the spice turmeric, is a non-competitive, selective PhK inhibitor (Reddy and Aggarwal 1994). It has also been shown to be a potent inhibitor of NF-kB activation (Singh and Aggarwal 1995) and may assist in blocking fibroblastic proliferation in thermal injuries. Phosphorylase kinase is activated within 5 mins of injury (Heng 2010, Heng and Heng 2014), and functions upstream of NF-kB (which is expressed 30 mins following injury), and much earlier than the appearance of activated fibroblasts (one week following injury) and myofibroblasts (2 weeks following injury). Accordingly, we hypothesized that blocking the activity of PhK is likely to have salutary effects in the treatment of burns and scalds by reducing the inflammatory response and the resulting tendency to hypertrophic scarring (Heng 2010, Heng 2013).
The following cases illustrate examples of patients with burns and scalds treated with topical curcumin gel that resulted in rapid healing and minimal or absent scarring.
Patient 1:
The patient is an 11-year-old boy who sustained severe 2nd degree flash burns from pouring lighter fluid on warm barbeque coals. The heat from the ensuing fire singed his hair, eyelashes, forehead, ears, nose, cheeks and neck (Fig. 2, upper panel). He was treated with applications of curcumin gel, applied hourly initially for several hours, and subsequently as frequently as possible during the first few days. When seen 5 days later, he was observed to be much improved, with re-epithelialization of the raw areas over the forehead, ears, cheeks, nose and neck, and decreased edema over the affected areas, including the eyelids (Fig. 2, lower panel). At this time, pain was significantly decreased and reported to be absent most of the time. When seen 6 weeks later, the skin had healed completely, with some residual pigmentation over the most involved areas of the right cheek (Fig. 3). Follow-up six months later revealed resolution of all pigmentary changes to resemble pre-injury skin (Fig. 4).
Patient 2: The patient was a 2-year-old boy who sustained 2nd degree burns over the palms of both hands after falling into a campfire. He was seen at a number of emergency care centers and treated with silvadene cream. When seen four days later, large blisters were seen over both palms (Fig. 5) and he was in a lot of pain. He was started on curcumin gel treatment (initial hourly applications) and was much improved when seen 24 hours later (Fig. 6), associated with a marked decrease in pain. When seen 2 weeks later (Fig. 7), there was significant re-epithelialization with a few residual areas of incomplete re-epithelialization.
The patient was not able to fully extend his fingers at this time (Fig. 7). When seen 2 months later, healing was complete, with no residual scarring or loss of function. The skin looked normal and the patient was able to extend his fingers fully (Figs. 7, 8).
Burns from barbeque fire: when seen several months after the burn injury, the patient was observed to be completely healed, with no erythema, clinical scarring or pigmentary changes. There was also no neurological deficit.
Burns from a campfire seen four days following injury. Note the presence of blisters suggesting at least second degree burns.
Significant healing with curcumin gel (multiple applications daily) was observed after two weeks. Note rapid re-epithelialization of most of the skin of both palms. Also note the inability of the patient to fully extend his fingers.
Two months after curcumin gel treatment, there was complete healing of the skin of both palms, with no clinical scarring detected. The patient was able to fully extend all the fingers of both hands.
Patient 3: The patient, a 35-year-old female, sustained scalds to her left hand when she accidentally poured boiling water over it. When seen one day later, early blister formation, suggestive of second degree injury, was observed both over the left palm and the palmar aspect of all the fingers of her left hand (Fig. 9). Blister formation was also observed over the dorsum and fingers of the left hand (Fig. 9). She was in a lot of pain and was unable to fully extend the fingers of the injured hand (Fig. 9). She was treated with initial hourly applications of curcumin gel. When seen one day later (Fig. 10), she was much improved, with aborted blister formation, decreased edema, and minimal pain. She was able to fully extend all the fingers of the injured hand (Fig. 10).
Scald from boiling water seen one day after injury, before curcumin gel. Note the presence of early blister formation associated with significant pain. Note also the inability of the patient to fully extend her fingers.
Perspective
Wounds in adults, unlike fetal wounds, often heal with scarring (Martin 1997). Wounds resulting from burns and scalds are particularly predisposed to scar tissue formation, often resulting in unsightly, cicatricial scars which are usually hypertrophic, and occasionally leading to deformities (Hayakawa et al. 1979, Dunn et al. 1985, Robson et al. 1992). In this review, we report the clinical outcome of several patients with burns and scalds treated with curcumin gel that resulted in minimal scarring or scar-free healing. We have previously reported similar beneficial outcomes with the use of curcumin gel in post-surgical scars (Heng 2011). We hypothesize that the minimal scarring noted after these injuries may be due to inhibition of PhK activity by curcumin (Reddy and Aggarwal 1994, Heng 2011). Because PhK is released very early after injury (Yuan et al. 1993, Heng et al. 2000, Heng and Heng 2014), blocking PhK activity with curcumin results in significantly reduced NF-kB-mediated inflammatory cell proliferation, and a reduction of the subsequent cytokine- and growth factor-mediated fibroblastic proliferation and myofibroblastic transformation that follow one and two weeks later. Restated another way, the benefits of curcumin treatment after thermal injury are probably due to a reduction or inhibition of downstream NF-kB-dependent events, including macrophage secretion of TGFβ1. Since TGFβ1 induces both fibroblastic proliferation and conversion of fibroblasts into myofibroblasts, the inhibition of the above mechanisms by curcumin may be responsible for the diminished scarring in the thermal wounds of our patients treated with curcumin gel.
Improvement 1 day after multiple applications of curcumin gel. Note aborted blister formation and greatly decreased pain. Note also the ability of the patient to fully extend her fingers without discomfort.
We noted that application of curcumin gel after burns and scalds is usually followed by a rapid decrease in erythema, blistering, swelling and pain. Improvement in these clinical symptoms and findings strongly suggests that cytokine activity, in particular TNFα, is reduced in curcumin-treated wounds. TNFα is an inflammatory cytokine produced by many activated cells, especially inflammatory cells such as T lymphocytes and macrophages. TNFα levels have been shown to be significantly increased in burns and other types of injury. By blocking PhK activity (Reddy and Aggarwal 1994), curcumin decreases NF-kB activity (Singh and Aggarwal 1995) and NF-kB-dependent cytokine (TNFα) secretion by inflammatory cells. Since NF-kB is also stimulated by TNFα (Ozes et al. 1999, Palkowitsch et al. 2008), providing a positive feedback loop, suppression of NF-kB activity by curcumin gel protects against the deleterious effects of this cytokine after thermal injury. The rapidity of healing of burns and scalds with application of curcumin gel is particularly noteworthy from a clinical perspective and may involve several mechanisms. Besides the effects on cytokines mentioned above, another reason for the salutary effects of curcumin may be the removal of damaged cells by the process of curcumin-induced apoptosis (Anto et al. 2002, Wang et al. 2009). The removal of damaged or dead cells may provide the space for replacement by new healthy cells; it could be speculated that it may be faster to grow new cells than to heal damaged ones. Yet another possible mechanism with curcumin use is a reduction in the incidence of bacterial colonization of the wound as a result of more rapid healing, thus preventing secondary bacterial antigen- and lipopolysaccharide-induced worsening of the wound (Gromkowski et al. 1990).
Figure 1. Signaling pathways and sequence of events induced by injury. Approximate time sequences after injury are given within brackets. Abbreviations: NF-kB (nuclear factor-kB), TGFβ1 (transforming growth factor-β1).
Figure 2. Burns from barbeque fire: (upper panel) second degree burns from a barbeque fire before treatment with curcumin gel; (lower panel) rapid healing 5 days after curcumin gel applied hourly.
Figure 3. Burns from barbeque fire: (upper panel) the singed hairs from the burn were removed by a haircut; at 6 weeks after curcumin gel, the skin of the forehead, eyelids and ears was completely healed, with some residual pigmentation over the forehead; (lower panel) residual hyperpigmentation of the right cheek 6 weeks following curcumin gel treatment.
Figure 6. Improvement one day later after hourly application of curcumin gel. Pain and blistering were much improved.
|
v3-fos-license
|
2019-04-03T13:11:56.719Z
|
2019-03-15T00:00:00.000
|
92504199
|
{
"extfieldsofstudy": [
"Geography"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://www.aiep.pl/volumes/2020/0_1/pdf/01_02357_F1.pdf",
"pdf_hash": "54706765b68714cab9920677e2f8769574c0cf53",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41887",
"s2fieldsofstudy": [
"Engineering"
],
"sha1": "8205fbbe0337febcc889b561ff81ce1be6261cd8",
"year": 2019
}
|
pes2o/s2orc
|
PROPOSED BYCATCH-REDUCTION MODIFICATIONS OF SHRIMP FYKE NETS USED IN SOUTH AMERICAN LAGOONS
Background. Shrimp fisheries using fyke nets have been associated with a massive acquisition of teleost fishes as bycatch, potentially resulting in the decimation of their stocks. Based on this assumption, the presently reported study was intended to test an alternative modification of a commonly used fyke net, in order to minimize the impact of its low selectivity. Materials and methods. To evaluate the alternative design proposed in this work, a total of 44 sampling efforts, including 22 with a control gear (CG) and 22 with a modified gear (MG), were conducted at a subtropical coastal lagoon system located in southern Brazil. In all trials, the fyke nets were installed at the fishing area at approximately 18:00 h and removed at approximately 06:00 h. The duration of each trial was nearly 12 h, similar to the catching time preferred by local fishermen. Results. Bycatch (BC) was preponderant in both modalities, but the results showed that MG presented a reduction of 66 percentage points in BC catches, being more selective than CG. Additionally, the non-parametric test showed no significant differences in shrimp catches between the fishing gears used (MG and CG). Thus, the tested bycatch reduction device (BRD) reduced the bony fish acquisition while preserving the volume of the target catch. Conclusion. The vertical opening reduction due to the adoption of the guiding panel + fan upper panel contributed to bycatch reduction, being a consistent BRD to reduce the potential impacts of this fishing gear on bony fish stocks.
INTRODUCTION
Transitional coastal wetlands, such as estuaries, coastal lagoons, mangroves, or salt marshes, are among the richest aquatic ecosystems in the world, supporting a diversity of organisms, many of them of commercial value (Barletta et al. 2017). In this sense, the penaeid shrimp fishery is one of the most relevant economic activities in these coastal zones, representing almost 80% of global shrimp catches, with a wide range of occurrence around the world (Broadhurst 2000, D'Incao et al. 2002). In South America, bottom trawls are the main fishing gear used for shrimp catches in the marine environment (Branco and Verani 2006, Cattani et al. 2012, Domingos et al. 2016, Vieira et al. 2017). On the other hand, shrimp traps or shrimp fyke nets are largely used in estuaries and coastal lagoons (Vianna and D'Incao 2006, Benedet et al. 2010). Regardless of the nature of the catch, both fishing gears, trawls (active) and shrimp fyke nets (passive), yield a massive bycatch (Loebmann and Vieira 2006).
In order to minimize bycatch in shrimp fisheries, many researchers have suggested the development of bycatch reduction devices (BRD) as an alternative for fisheries management (Andrew et al. 1993, Broadhurst and Kennelly 1996, Broadhurst 2000). The overwhelming majority of published records was devoted to BRDs in shrimp trawls, and only few efforts were intended to reduce bycatch in other shrimp fisheries (Larocque et al. 2012, Colotelo et al. 2013, Soeth et al. 2015). It has been widely known that shrimp fyke nets catch an excessive number of different bony fishes that are usually discarded after the fishing activity, but only few authors have suggested low-cost alternatives to reduce this problem (Vianna and D'Incao 2006, Soeth et al. 2015).
Considering the relevance of this subject, the presently reported study suggests an alternative design of shrimp fyke nets, in order to minimize the impacts caused by the low selectivity of this fishing gear and to contribute to the development of a new BRD that reconciles technology and sustainability.

The study area comprises a system of coastal lagoons (Barletta et al. 2017). Each lagoon is linked to the other by small channels, and the water flows to the sea through a single channel (for the map of the area see Marques 2011). The estuary is located in a strip of coastal plains, and it is a typical choked lagoon, where the salinity ecocline is formed by the connection between the sea and the Santo Antonio dos Anjos Lagoon. The estuary receives freshwater from the Tubarão River basin, which discharges directly into the low estuary. The middle estuary, where the presently reported study was carried out, presents limnetic to oligohaline conditions, with a seasonal influence of rainfall patterns (Barletta et al. 2017).
Control (CG) and modified (MG) shrimp fyke nets.
This study was performed during the 2016 shrimp-fishing season in the Laguna Estuarine Complex (LEC). In the LEC, shrimp fyke nets are the main artisanal fishing gear used to catch the juvenile population of pink shrimp (Farfantepenaeus paulensis and Farfantepenaeus brasiliensis). The fyke nets are allocated in shallow zones (<1.5 m), preferably close to widgeon grass (Ruppia maritima) and/or close to sheltered areas. Notably, this fishing ground supports different young estuarine-dependent species, and occasionally adult, estuarine-resident, marine and freshwater species.
Shrimp fyke nets have a geometric shape similar to bottom trawls, that is, a conical appearance, two wings, and a codend. However, they are passive gears, with their wings and codend attached to wooden stakes and submerged in estuarine shallow waters during the night period. On top of the codend wooden stake, a 3 W white light-emitting diode (LED) lamp is installed to attract the shrimp into the gear. Another structural difference can be observed in the sequences of valves (two or three) in the fyke net bodies, spaced 0.5 m apart, which facilitate the shrimp catch. The shrimp fyke nets adopted in this study had the typical dimensions of such nets commonly used in the LEC (Table 1).
In order to combine selectivity and low cost, the proposed BRD consists of two adaptations to a regular shrimp fyke net:
(i) Substitution of the regular upper panel (PE multifilament, 24 mm) with a fan panel consisting of polyethylene ropes connected from the cork line to the top portion of the ring of the first funnel.
(ii) Incorporation of a guiding panel from the inner portion of the wings to the upper portion of the first funnel ring. The guiding panel consists of a polyamide monofilament panel with 15 mm stretched mesh (Fig. 1).
Except for the fan panel and the guiding panel, the other components adopted in the modified shrimp fyke net were the same as in the control gear: 4 main multifilament polyamide (PA multi) panels without knots (210/12), 24 mm mesh opening, and cork line/lead line constituted by braided multifilament polyethylene (PE) 8 mm in diameter (Fig. 2).
Experimental design and data analysis. To evaluate the alternative layout proposed in this work, a total of 44 sampling efforts, including 22 with the control gear (CG) and 22 with the modified gear (MG), were conducted along 11 fishing trials in the LEC between November 2015 and February 2016, during the shrimp-fishing season. The fishing gears were attached to bamboo stakes, with artificial light attraction on top of the codend for all gears. In all trials, the fyke nets were installed in pairs (that is, MG and CG) at the fishing area at approximately 18:00 h and removed at approximately 06:00 h, which is similar to the catching time preferred by local fishermen.
On the fishing boat, the catches were divided into two categories: shrimp (SH) and bycatch (bony fishes and blue crabs) (BC). Although the shrimp fyke nets also caught blue crabs, this fishery resource is very important to the regional economy and is considered a byproduct rather than a discard. The fishes caught were identified to the family level and to species according to specific taxonomic keys. The Kolmogorov-Smirnov test (Siegel and Castellan 1988) was employed to check whether the datasets of the main SH and BC catches in the control (CG) and modified fyke nets (MG) were well modelled by a normal distribution, and the Mann-Whitney test (Lehmann 2006) was adopted to test the statistical differences between experiments. The datasets were pooled (sum of all fishing trials) by species, considering the number of individuals and the biomass for each treatment (CG or MG), to compare their catch performance. Thus, it was possible to evaluate the effectiveness of each fyke net in terms of bycatch reduction. Both tests were applied considering a one-tailed distribution and a 0.05 significance level.
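As a minimal sketch of this testing workflow, assuming hypothetical per-trial bycatch counts rather than the study's actual data (and using scipy's Kolmogorov-Smirnov test against a fitted normal in place of the exact routine of Siegel and Castellan):

```python
import numpy as np
from scipy import stats

# Hypothetical per-trial bycatch counts for the control (CG) and modified (MG) gears
cg = np.array([61, 74, 55, 90, 68, 71, 63, 80, 59, 66, 32])
mg = np.array([20, 25, 14, 31, 22, 18, 26, 19, 17, 21, 15])

# Normality check: Kolmogorov-Smirnov against a normal fitted to each sample
for name, x in (("CG", cg), ("MG", mg)):
    stat, p = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1)))
    print(f"{name}: KS statistic = {stat:.3f}, p = {p:.3f}")

# Non-parametric comparison of the two gears (one-tailed: MG catches fewer than CG)
u_stat, p_value = stats.mannwhitneyu(mg, cg, alternative="less")
print(f"Mann-Whitney U = {u_stat:.1f}, one-tailed p = {p_value:.4f}")
```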
The Kolmogorov-Smirnov test suggested that our dataset could not be analysed through a parametric approach, so the paired Mann-Whitney non-parametric test was employed to evaluate the BRD impacts on bycatch reduction in shrimp fyke nets (Table 3). For the total number and biomass of bycatch, significant differences were observed (P < 0.05), showing that the modified shrimp fyke nets (MG) had a significant potential for bycatch reduction (Table 3). MG presented a better selectivity performance (228 bony fishes weighing 3519.74 g) than the control fyke nets (CG) (719 bony fishes weighing 12 215.90 g). These results suggest that MG can reduce bycatch acquisition by more than 65 percentage points.
For all the main bony fishes in the bycatch (E. gula, G. genidens, C. spilopterus, and M. furnieri), significant differences were observed in the number of individuals and biomass (P < 0.05) between MG and CG (Table 3). Micropogonias furnieri was the main bony fish caught in the control and modified shrimp fyke nets (400 individuals), including 309 individuals (6129.20 g) in CG and 91 (1447.42 g) in MG, followed by E. gula (209 individuals), of which 153 individuals (1564.20 g) were caught in CG and 56 individuals (608.08 g) in MG (Figs. 3 and 4; Table 2). The third most representative bycatch species was G. genidens, with 94 (1639.20 g) and 40 (693.40 g) individuals caught in CG and MG, respectively. Finally, the fourth most frequent bycatch species found in CG and MG was C. spilopterus (60 individuals), with a predominance of catches in CG (52 individuals weighing 536.20 g) against 8 individuals, weighing 53.38 g, caught in MG (Figs. 3 and 4, Table 2).
All bony fishes caught (CG and MG) had total lengths below the L50 (length at which 50% of the fish are mature). Micropogonias furnieri had a mean length of 12.2 cm (standard deviation = 3.4 cm), for E. gula the mean length was 9.9 cm (standard deviation = 1.3 cm), and G. genidens showed a mean length close to 11 cm (standard deviation = 3.78 cm). Finally, C. spilopterus had a mean length of 8.4 cm and a standard deviation of 2.9 cm in CG and MG. Note in Fig. 5 that all bony fishes caught had similar total lengths and standard deviations. The non-parametric test showed that there were no significant differences in shrimp catches between the fishing gears used (MG and CG). Therefore, the BRD reduced the bony fish acquisition while preserving the volume of target catches. In addition, the mean length of the shrimps showed an increment in MG compared to CG (Table 2). The blue crab Callinectes sapidus showed significant differences in number and biomass between CG and MG (P < 0.05) (Table 3). The CG caught 20 individuals (1097.1 g) and the MG 8 individuals (220.57 g), but the mean length of the individuals was higher in MG than in CG. Similar statistical patterns were found for Callinectes danae, for which it was possible to identify differences between CG and MG in number and biomass (P < 0.05). Details about the species caught and the statistical results are presented in Tables 2 and 3.
DISCUSSION
The results of this study demonstrate that simple structural alterations to a conventional fishing gear can contribute to an increase in its selectivity. In this case, the use of a guiding panel associated with a fan upper panel reduced bycatch acquisition by 66 percentage points without significantly affecting the shrimp catches. However, compared to previous studies (Vieira et al. 1996, Loebmann and Vieira 2006, Vianna and D'Incao 2006), the absolute shrimp catches obtained in this study (regardless of the treatment) were distinctly lower than those reported in other years. Possibly, the lower shrimp catch rates observed in the presently reported study were related to the El Niño events between 2015 and 2016. Usually, El Niño is associated with positive precipitation anomalies in southern Brazil (Grimm et al. 2000). This was previously observed by Vianna and D'Incao (2006) and Santana et al. (2015), who found a negative correlation between El Niño events and shrimp catches in South America. Möller et al. (2009) suggested that the increase in estuarine hydrodynamic flows due to the water input derived from precipitation constitutes a physical barrier, limiting the entry of shrimps into transitional environments. According to Möller et al. (2009), the increase in estuarine flows is more evident in environments with narrow estuary-sea interfaces, as can be observed in the LEC.
The bycatch composition found in the presently reported study was similar to that in previous studies (Vieira et al. 1996, Loebmann and Vieira 2006), including the abundance of some bony fish species. Eucinostomus gula, Genidens genidens, Citharichthys spilopterus, and Micropogonias furnieri were the most prominent bycatch items in the presently reported study. In general terms, these species inhabit estuarine shallow waters, especially during the early stages of their life cycle, since these environments can provide food and shelter for these potential fisheries resources (Barletta et al. 2008, Lacerda et al. 2014). The alternative layout proposed in this study presented a good performance in terms of reduction of demersal fish catches. According to Marchesan et al. (2009), when exposed to artificial light, fish species with diurnal habits perform a vertical migration even in shallow environments. Thus, the low capture of these species by MG is possibly directly associated with the vertical opening limitation.
Blue crabs are not usually the target species of the fyke net fisheries, but due to the low catches of shrimps during El Niño events, Callinectes danae and Callinectes sapidus are treated as secondary target species and retained for marketing, in order to increase the income of fishermen. In the presently reported study, the blue crab catch was reduced in the MG treatment, but the mean length of the individuals increased, showing better selectivity by excluding small individuals.
The vertical opening reduction due to the adoption of the guiding panel + fan upper panel contributed to bycatch reduction, resulting in an increase in shrimp fyke net selectivity and a reduction of the potential impacts of the traditionally used, unaltered fyke net on bony fish stocks.
The absence of statistically significant differences between the shrimp catches in CG and MG will be reviewed in future works. Although this information suggests that the alternative layout can reconcile selectivity with the maintenance of shrimp catches at an unchanged level, the low acquisition of target species was possibly associated with the El Niño phenomenon. Therefore, it is pertinent to repeat this experiment under the usual meteorological conditions in the LEC, so that the statistical tests applied may be confirmed.
Considering the relevance of this subject to artisanal fisheries development, new experiments need to be encouraged to improve the fishing gear selectivity in the regional context.
|
v3-fos-license
|
2022-03-09T16:09:12.729Z
|
2022-03-01T00:00:00.000
|
247300381
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2079-6374/12/3/164/pdf",
"pdf_hash": "69a3b51b546c265763b75a15d938a3a6aa92565c",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41888",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "ac58f919e2eac57e29a6e1f5608b60192ba697b7",
"year": 2022
}
|
pes2o/s2orc
|
Self-Powered Wearable Biosensor in a Baby Diaper for Monitoring Neonatal Jaundice through a Hydrovoltaic-Biosensing Coupling Effect of ZnO Nanoarray
Neonatal jaundice refers to an abnormality of bilirubin metabolism in newborns, and wearable transcutaneous bilirubin meters for real-time measurement of the bilirubin concentration are an insistent demand of babies' parents and doctors. In this paper, a self-powered wearable biosensor in a baby diaper for real-time monitoring of neonatal jaundice has been realized through the hydrovoltaic-biosensing coupling effect of a ZnO nanoarray. Without an external power supply, the system can work independently, and the hydrovoltaic output can be treated as both the power source and the biosensing signal. The working mechanism is that the hydrovoltaic output arises from urine flowing over the ZnO nanoarray, and the enzymatic reaction on the surface can influence the output. The sensing information can be transmitted through a wireless transmitter, and thus parents and doctors can treat the neonatal jaundice of babies in time. This work can potentially promote the development of the next generation of biosensors and physiological monitoring systems, and expand the scope of self-powered techniques and the smart healthcare area.
Introduction
Neonatal jaundice refers to an abnormality of bilirubin metabolism (a rising blood bilirubin level) in newborns [1-3]. Pathological jaundice can make the face and trunk of the baby appear yellow, accompanied by anemia, hepatosplenomegaly and yellow urine. In severe cases, it is manifested as poor response, listlessness, anorexia and even breathing difficulties [4-6]. Due to unpredictable pathological recurrence in the several weeks after birth, it is inconvenient for the hospital to carry out examinations in a timely manner [7,8]. Thus, monitoring and uploading the neonatal jaundice status in real time is an insistent demand of parents and doctors. At present, the conventional method for jaundice detection is based on blood sampling and bilirubin analysis [9,10]. This technique cannot meet the requirement of real-time dynamic monitoring, and the frequent blood drawing with skin damage is unacceptable for children and their parents. To solve this problem, a new wearable transcutaneous bilirubin meter that can be conformably attached to the skin for real-time measurement of the bilirubin concentration in a body fluid such as urine is highly desirable [11].
The urine bilirubin concentration in a baby with pathological neonatal jaundice rises markedly [12,13], and a diaper is usually worn by a baby for holding urine. Thus, a wearable biosensor embedded in a baby diaper for continuously detecting the bilirubin concentration can realize monitoring of neonatal jaundice in real time. Nowadays, some research groups and companies have developed intelligent diapers with various bilirubin biosensors integrated. The main working principle of these biosensors is based on traditional electrochemical or chemical colorimetric approaches [14-18]. These methods provide relatively accurate sensing results, but usually need extra, bulky power supplies, e.g., a battery or capacitor, which raises the cost of the whole system, enlarges the total volume of the diaper and may also cause harm to the health of the baby. Recently reported self-powered techniques may potentially remove the power supply from the system [19-24]. Urine bilirubin analyzing sites with wireless, battery-free electronics are promising substitutes, coupling the energy harvesting and biosensing processes in the diaper environment.
In this paper, a self-powered wearable biosensor in a baby diaper for real-time monitoring of neonatal jaundice has been realized through the hydrovoltaic-biosensing coupling effect of a ZnO nanoarray. The system can work without an external power unit, and has low-cost, small-size, noninvasive, and flexible features. The working mechanism is that the urine flowing over the ZnO nanoarray and the enzymatic reaction (bilirubin and bilirubin oxidase) on the surface generate a bilirubin-dependent hydrovoltaic output [25-31]. The output can be treated not only as the energy for the sensing process but also as the biosensing signal. ZnO nanostructures are characterized by a hydrovoltaic effect with high output [32,33], and they can be easily and efficiently functionalized for sensing by surface chemical modification [34]. The main goal of the device is to monitor the bilirubin in urine to determine whether there is a possibility of jaundice in the baby. The signal collected by the sensing unit can be transmitted to the wireless transceiver module, and then to the guardian's smart device. Once an abnormality is detected, it can be sent to the doctor for treatment in time. This achievement can play an important role in monitoring neonatal pathological jaundice, helping parents of newborns to obtain information about the baby's fluids in real time and sending it to the doctor in the event of an early warning. Meanwhile, this work can expand the scope of self-powered techniques and smart healthcare areas.
Fabrication of Self-Powered Bilirubin Biosensing Unit
Bilirubin oxidase and bilirubin were provided by Chongqing Amida Biotechnology Co., Ltd (Chongqing, China). The other chemicals were provided by Chengdu Keweizhuo Technology Co., Ltd. (Chengdu, China).
The vertically grown ZnO nanoarray was prepared by a hydrothermal method. A piece of PDMS film attached to a silicon wafer (to keep the film steady in the solution) was cleaned with deionized water and alcohol, and dried at 60 °C. 0.5 g of Zn(NO3)2·6H2O was dissolved in 38 mL of deionized water, then 2 mL of NH3·H2O was dropped into the solution while stirring. After the Zn(NO3)2·6H2O was evenly dissolved, the PDMS film was immersed in the solution. The beaker was quickly sealed, placed in a drying oven, and then kept at 80 °C for 24 h. The PDMS film with the grown ZnO nanoarray was finally taken out of the beaker and stripped off the silicon wafer [25,35].
The ZnO-grown PDMS film was cut to 20 mm × 30 mm in area, and the ZnO nanoarray was then modified with bilirubin oxidase (BOx). Lyophilized bilirubin oxidase powder was dissolved in PBS buffer to form a 40 U/mL BOx solution. 0.5 mL of BOx solution was evenly and slowly dropped onto the nanowires. The film was naturally dried for 3-4 h, and the enzyme was thereby immobilized on the nanowires [36]. Both ZnO and PDMS have been proven to be nontoxic/biocompatible and can work well in the human body environment [37-39].
Characterization and Measurements
The morphology and microstructure of the ZnO nanoarray were investigated with a scanning electron microscope (Gemini SEM300, Oberkochen, Germany). The output hydrovoltaic voltage was measured with an electrometer (Keithley 6514, Beaverton, OR, USA). Figure 1a shows the experimental design and application of the self-powered wearable biosensor in a baby diaper for real-time monitoring of neonatal jaundice. The biosensing unit is embedded in a diaper, and urine can flow across the surface of the device. The output hydrovoltaic voltage of the device can be influenced by the concentration of the target biomolecule (bilirubin) in the urine, serving as the biosensing signal. The biosensor information can be wirelessly transmitted to parents for determining the health of their baby, realizing immediate treatment in time.
Figure 2 shows the morphology and microstructure of the self-powered wearable biosensor in the baby diaper. Figure 2a,b show that the device has good flexibility and small size. The device can be embedded in the absorbent layer of the baby diaper and fits the diaper well, as shown in Figure 2c,d. The top and side views of the ZnO nanoarray are shown in the SEM images of Figure 2e-h. The length of the nanowires is about 3 μm, and their cross section has a hexagonal structure with an average diameter of about 600 nm. The area of the ZnO nanoarray is determined by the size of the substrate; in practical applications, the output mainly depends on the area over which the liquid flows. The thickness of the ZnO nanoarray is determined by the preparation process. The ZnO nanoarray on the device is densely distributed, which is beneficial for the output.

Working Mechanism

Figure 3a,b show the power generating process of the self-powered wearable biosensor. When the liquid is drawn across the surface of the ZnO nanoarray, the free electrons in ZnO can move to the vicinity of the contact region between the droplet and the ZnO due to the contact electrification (CE) effect, resulting in charge transfer. Studies have shown that materials with free electrons become negatively charged due to the solid-liquid contact electrification that occurs on the surfaces of various materials [40-44]. The solid-liquid contact between an ionic solution and a solid involves both ion transfer and electron transfer, while the charge transfer between a nonionic liquid and a solid is contributed substantially by electron transfer. A slight increase in ion concentration can increase the amount of CE charge; however, a large excess of ions in the solution can cause electrons to combine with ions, forming a shielding effect that hinders the charge transfer and suppresses the amount of CE charge [45-48]. On the surface of ZnO, the transfer of hydrogen ions and electrons is a competitive process, and hydrogen ions inhibit the transfer of electrons between ZnO and water molecules. In contrast, hydroxide ions do not inhibit this process, but promote it at high concentration [49]. To verify this, we carried out subsequent experiments on the effect of hydroxide ions at the surface of the ZnO nanoarray.

Figure 3c-f shows the biosensing process of the device. When the device is in contact with sodium bilirubin, an enzymatic reaction between BOx and bilirubin occurs [29]:

Bilirubin + O2 → Biliverdin + H2O2 (1)

Furthermore, the hydrogen peroxide is oxidized:

H2O2 → O2 + 2H+ + 2e− (2)

The hydrogen ions neutralize the hydroxide ions in the solution, thereby affecting the output voltage of the contact electrification. Figure 3d experimentally confirms that hydroxide ions can indeed affect the output voltage of the device: as the pH value of the aqueous solution is 11, 10, 9 and 8, the change of the output voltage is 0.013, 0.006, 0.005 and 0.001 V, respectively. These results prove that the biosensing behavior of the device can be attributed to the coupling of the enzymatic reaction and the hydrovoltaic effect.
Sensing Performance
Figure 4 shows the biosensing behavior of the self-powered wearable biosensor in the baby diaper. Figure 4a shows the biosensing performance of the device for detecting bilirubin (the ZnO nanoarray is modified with bilirubin oxidase). Since bilirubin is insoluble in water, NaOH solution (0.004 mol/L) is usually used to dissolve bilirubin in deionized water [50]; the obtained sodium bilirubin is the conjugated bilirubin used in the solutions for the following tests. As shown in Figure 4a, when the concentration of sodium bilirubin dropped on the surface of the device is 12.5, 25.0, 37.5 and 50.0 mg/L, the output hydrovoltaic voltage (peak value) of the device through contact electrification is 0.039, 0.030, 0.019 and 0.008 V, respectively. The output voltage is negatively correlated with the concentration of sodium bilirubin in the solution, as shown in Figure 4b, and there is an approximately linear relationship between the output voltage and the concentration of sodium bilirubin. The biosensing response can be simply defined as [51]:

Response = (V1 − Vi)/V1 × 100% (3)

In Equation (3), Vi and V1 are the output voltages of the device at each concentration and at the initial concentration, respectively. As the output voltage is 0.03408, 0.02708, 0.01881 and 0.00914 V (the average peak value of 10 experiments for each concentration), the response is 0.0%, 20.6%, 44.8% and 73.2%, respectively. By fitting the response scatter plot, the linear regression equation can be obtained as y = 0.2438x − 0.263 [52]. In order to eliminate the influence of the primary cell effect, the device was, respectively, immersed in the solution and taken out (the device being wet in the air atmosphere). The output voltage of the device is shown in Figure 4c. It can be seen that the output voltage of the device in the solution is much smaller than that in air. This result confirms that the output voltage is mainly dominated by the contact electrification (CE) effect between the solution and the nanowires (a kind of hydrovoltaic effect), and that the influence of the primary cell effect can be ignored. Figure 4d shows the repeatability of the device.
In order to rule out the influence of the primary cell effect, the device is respectively immersed in the solution and taken out (the device remains wet in the air); the output voltage of the device is shown in Figure 4c. The output voltage of the device in the solution is much smaller than that in air, which confirms that the output voltage is mainly dominated by the contact electrification (CE) effect between the solution and the nanowires (a kind of hydrovoltaic effect), while the influence of the primary cell effect can be ignored. Figure 4d shows the repeatability of the device. The device is tested with the same concentration of sodium bilirubin solution four times (one-hour interval between tests), and it maintains a stable output within a certain period of time. The number of repetitions the device supports matches the service life of the diaper, and the device can be thrown away together with the diaper.
Specificity is also one of the important indicators of biosensors. The output voltage of our device is related only to the concentration of sodium bilirubin in the solution, and the device can specifically target bilirubin in solution without being affected by other interfering substances. Figure 4e-k shows the specificity of the device towards sodium bilirubin. The specificity arises from the enzymatic reaction between bilirubin and bilirubin oxidase: when several other typical substances in urine are dropped on the device at different concentrations, the output voltage remains almost unchanged.

Figure 5a shows the output voltage of the device at different temperatures; temperature has only a slight influence on the output of the device. Figure 5b,c show the biosensing behavior of the device for small changes in bilirubin concentration. As the sodium bilirubin concentration in solution is 25.0, 27.5, 30.0 and 32.5 mg/L, the output voltage of the device is 0.0299, 0.0261, 0.0238 and 0.0214 V, and the response is 0.0%, 12.7%, 20.5% and 28.4%, respectively. Figure 5d,e show the limit of detection of the device: as the sodium bilirubin concentration in solution is 8.0, 10.0 and 12.5 mg/L, the output voltage of the device is 0.0361, 0.0359 and 0.0334 V, and the response is 0.0%, 0.5% and 7.0%, respectively. The lower limit of detection of the device is therefore around 8.0 mg/L. Figure 5f shows that when approximate urine (97% water, 1.8% urea, 0.05% uric acid, 1.1% inorganic salts and trace amounts of sodium bilirubin at the concentrations indicated) containing different concentrations of sodium bilirubin is dropped on the device, the output of the device changes significantly.

Figure 6 shows the practical application of the self-powered wearable biosensor in a baby diaper for real-time monitoring of bilirubin concentration. The device is embedded between the outer cotton cloth and the inner absorbent layer of the diaper; it must be placed on top of the absorbent layer, otherwise no liquid can flow over the surface of the device. For uploading the biosensing information, the device is connected to an external circuit module, as shown in Figure 6a. The circuit amplifies the sensing signal, and a single-chip microcomputer performs analog-to-digital conversion and analysis of the signal through a voltage converter, shifter and low-pass filter; the circuit then controls a wireless transmitter for uploading the sensing information. Here, LED lights are used to display the sensing result: when the single-chip microcomputer detects the amplified signal, LED lights are turned on. In our experiment, eight LED lights are lined up. The green light indicates that a voltage with a large variation range is detected (>15 mV), and the number of red lights represents the output voltage (one red light for 5 mV, two red lights for 10 mV, three red lights for 15 mV and so on). As shown in Figure 6b, after dropping sodium bilirubin solution on the diaper, the concentration can be read out from the number of LED lights: for a solution prepared to simulate urine containing 1.0 mg/L of sodium bilirubin, three red lights are on. In Figure 6c, as the sodium bilirubin concentration in the urine is zero, no red lights are on. These data demonstrate that this system can roughly analyze the bilirubin concentration in urine. In the near future, other wireless transmitters, such as a Bluetooth module, can be integrated into the system. These techniques can transmit the sensing information (bilirubin in urine) to the smart phones of the baby's parents, who can then monitor the baby's health in real time and avoid delaying the treatment of neonatal jaundice.
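The LED readout rule described above (one red LED per 5 mV of output, plus a green LED for variations above 15 mV) can be summarized in a short decision-logic sketch; this is only an illustration, not the firmware used in the paper, and the function name and threshold handling are our assumptions:

def led_readout(delta_v_mV, n_red=7):
    """Map a measured voltage change (in mV) to the 8-LED indicator state.

    Assumes one green LED plus up to seven red LEDs; one red LED per 5 mV,
    green on when the variation exceeds 15 mV (thresholds from the paper).
    """
    red_on = min(int(delta_v_mV // 5), n_red)  # one red LED per 5 mV of signal
    green_on = delta_v_mV > 15.0               # large variation detected
    return {"red_leds": red_on, "green_led": green_on}

print(led_readout(15.2))  # {'red_leds': 3, 'green_led': True}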
Conclusions
In summary, we report a self-powered wearable biosensor in a baby diaper for real-time monitoring of neonatal jaundice. The working mechanism is based on the hydrovoltaic-biosensing coupling effect of a ZnO nanoarray. The system can work without an external power supply, and the hydrovoltaic output can be treated as the biosensing signal. The biosensor in the diaper can continuously detect the bilirubin concentration in urine, and the sensing information can be wirelessly uploaded, which helps parents and doctors treat the baby's neonatal jaundice in time. This self-powered biosensing system can probably expand the scope of intelligent health care.
Data Availability Statement:
The experimental data is contained within the article.
Conflicts of Interest:
The authors declare no conflict of interest.
|
v3-fos-license
|
2024-03-17T15:54:53.084Z
|
2024-03-14T00:00:00.000
|
268448534
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://downloads.hindawi.com/journals/criid/2024/5167805.pdf",
"pdf_hash": "e2b975211672ac73052b314dcd1f685c6b087797",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41889",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "ed7e64f1e52a7ea13f0c914a3dbf1d7cc18821fd",
"year": 2024
}
|
pes2o/s2orc
|
Delayed Surgical Treatment of a CE1 Lung Cyst Resulting in Pericystectomy of CE4 Cyst
The lung is the second most common location of cystic echinococcosis (CE), after the liver. Diagnosis of lung CE is often incidental, and clinical manifestations depend on the location and size of the cyst, the most common being chest pain, shortness of breath, expectoration of fragments of endocyst, and haemoptysis. Surgery is the primary treatment, with a minor role for medical therapy. Delayed diagnosis and treatment may have important consequences. We present a case of lung CE in which surgical treatment was delayed due to the first wave of COVID-19. Since surgery could not be performed immediately, the patient was kept on albendazole and the cyst stage moved from CE1 to CE3a, to CE4, eventually requiring a more aggressive pericystectomy instead of the commonly performed endocystectomy. The clinical and imaging characteristics of a rare CE4 cyst of the lung are reported.
Introduction
Cystic echinococcosis (CE) is a zoonosis caused by the larval stage of the cestode Echinococcus granulosus s.l. It mainly affects patients from poor areas where sheep raising is practiced. Cysts may form in any internal organ, but the liver and lungs are the main locations [1]. Cyst rupture and dissemination (either through body cavities or via hematogenous spread) can result in secondary CE [1]. Diagnosis of CE is based on imaging, while serology has an ancillary role [2]. Cysts are staged using the "WHO Informal Working Group on Echinococcosis" classification, which also directs subsequent management of uncomplicated cases [3]. Lung lesions can be diagnosed by X-ray in most patients with a compatible history, while CT is used for pre-surgical evaluation [4]. Lung cysts can be treated with albendazole (ABZ) if they are small and uncomplicated [4], whereas cysts larger than 5 cm and smaller complicated cysts should be managed surgically, sparing as much parenchyma as possible [5]. Uncomplicated lung cysts may initially be asymptomatic or present with cough, chest pain, and shortness of breath. Here, we report a case of a lung echinococcal cyst in which the delay of surgical intervention, due to the disruption of surgical activity during the first COVID-19 wave, resulted in a CE4 stage cyst, rarely seen in the lung, eventually treated with pericystectomy.
Case Presentation
A 36-year-old Ghanaian man, living in Italy since 2007, with a history of hypertension, was admitted to the E.D. of a tertiary hospital in the Campania region, Italy, in December 2019, following a car accident. A chest X-ray and CT scan documented an 11 × 7 cm cystic lesion localized in the right superior lobe of the lung, without pleural effusion. The scan also showed a cystic lesion of the spleen. The patient was transferred to a tertiary care hospital in Naples, where imaging confirmed the lung cyst (Figures 1(a) and 1(b)) and serology for E. granulosus tested positive. Medical therapy with albendazole (ABZ) was started and the patient was discharged after being scheduled for surgery. However, he was lost to follow-up after a few weeks. In February 2020 he experienced sudden chest pain and expectoration of yellowish mucus and presented once again to the E.D. of the closest hospital. A CT scan of the chest ruled out life-threatening conditions, the diagnosis of lung CE was confirmed, and the patient was discharged because surgical activity was limited due to COVID-19. The following year, in April 2021, another episode of expectoration of fragments of the endocyst and chest discomfort occurred, and the patient was admitted to the Pulmonology Unit of another hospital in Naples. ABZ was started once again, and a new CT scan of the chest showed a change in the appearance of the lung cyst, which was now surrounded by a thick wall with fluid content and wavy lines inside the cavity. A homolateral pleural effusion was seen, as well as mediastinal and right peribronchial lymphadenopathies with a necrotic component. The indication for surgery was confirmed, but again surgery was not possible because of the ongoing COVID-19 pandemic. Radiological and clinical follow-up was carried out by a multidisciplinary team including infectious diseases specialists and thoracic surgeons at the Monaldi and Cotugno Hospitals in Naples. A CT scan of the chest performed in December 2021 showed a reduction in the size of the lung lesion, with the appearance of a small intracystic air crescent and perilesional atelectasis. Bacterial superinfection of the cyst was suspected (leukocytosis, fever, and contrast-enhanced CT findings showing pericystic inflammation) and treated empirically with amoxicillin/clavulanate. The patient was referred to the WHO Collaborating Centre for the Clinical Management of Cystic Echinococcosis in Pavia and was rescheduled for surgery, but developed haemoptysis in May 2022, which led to emergency surgery. The preoperative CT scan confirmed the above-described changes in the radiological appearance of the cyst (Figures 1(c)-1(e)), and the bronchoscopic examination ruled out major cysto-bronchial fistulisations. Surgical pericystectomy with a thoracoscopic-thoracotomic hybrid approach with bronchial suture was performed, after positioning of hypertonic-soaked gauze pads and subsequent lavage of the pleural cavity with hydrogen peroxide and povidone-iodine (Figures 1(f) and 1(g)). The patient recovered and continued therapy with ABZ, which was suspended in October 2022 after a 6-month course, when US examination of the abdomen showed almost complete solidification of the splenic cyst. The postoperative course was free of complications, and the patient is currently well and asymptomatic. A timeline of the events is presented in Figure 2.
Discussion
CE cysts in humans develop mainly in the liver (70% of cases) and lungs (20%). Cysts in the lungs tend to grow faster than in the liver [4]. Although temporarily silent, all lung cysts carry the risk of perforation [6]. Local complications of pulmonary CE include cyst rupture, superinfection, and the mass effect of the cyst [7]. The cyst can form cysto-bronchial and cysto-pleural fistulae; the former causes expectoration of cyst fluid and portions of the endocyst, and haemoptysis. Fistulized cysts are prone to superinfection with chronic lung abscess formation. Different treatment options are available for liver cysts, where a stage-specific approach has been recommended (medical therapy, surgery, percutaneous treatment, and a watch-and-wait approach) [4]. However, the anatomical structure of the lung makes surgery the only option for large cysts. Benzimidazoles may be used in small cysts, as stated in the WHO-IWGE recommendations [4,8], although failure rates of around 30% have been reported [8]. Percutaneous treatments are not indicated in lung CE patients due to the high risk of rupture and dissemination, and the same risk is present for larger cysts treated with benzimidazoles [9]. Hence, in lung CE, surgery must be employed for active and transitional cysts larger than 5 cm or for complicated cysts of any stage [4,10,11]. Prompt surgical treatment, or treatment with ABZ in selected cases, allows cure in the majority of cases, with relatively few recurrences and, in expert hands, minimal complications [10,12,13]. While ABZ is not indicated as treatment in large pulmonary cysts due to the risk of rupture [14], all surgical interventions on the cyst require one month of perioperative prophylaxis with ABZ (beginning the day before the procedure) to avoid surgery-related CE dissemination [15].
The impact of the COVID-19 outbreak on healthcare systems, with the almost complete interruption of surgical activities, prevented our patient from being treated early [16]. The radiological findings in May 2022 (volume reduction, intracystic air bubbles, and pericystic consolidation) correlate with rupture and infection of the lung CE cyst [5,7]. Complicated cysts in the lung are associated with higher postoperative morbidity and mortality [6]. The elapsed time probably allowed for the formation of inflammatory tissue around the lung cyst, increasing the thickness of the pericyst. The CT scan and ultrasound of the lung cyst performed immediately before surgery showed solidification of the content (CE4) and a thick cystic wall, a pattern rarely observed in lung CE (Figure 1(e)). The patient was initially treated with ABZ; he then developed a bronchial fistula that led to episodes of expectoration of parasitic material. Chronic inflammation caused the thickening of the pericyst, as suggested by the presence of enlarged hilar lymph nodes. Bacterial superinfection (neutrophilia, increase in C-reactive protein) was noted. Since pericystic inflammation is known to occur following the administration of ABZ [17], it might have had a role in the blood vessel erosion triggering haemoptysis. Chronic inflammation and the increased thickness of the pericyst may also have prevented the occurrence of major bronchial fistulas, which were absent at the preoperative bronchoscopic examination [10,14].
In conclusion, the forced delay in surgical treatment of the lung cyst shows that the shift in cyst stage during medical treatment, from CE1 to CE3a to CE4, occurs in the lung as well as in the liver and in other organs (Figure 3) [18,19], and that it entails the use of the more invasive pericystectomy instead of the endocystectomy performed when the cyst is CE1 or CE3a.
Figure 1: Radiological and surgical findings. (a, b) Thoracic imaging at the time of diagnosis showing a liquid cyst; (c, d) thoracic imaging right before surgery showing a decrease in cystic volume, air within the cyst, and pericystic thickening; (e) ultrasound of the lung cyst carried out during the hospitalization before surgery showing the complex US structure of the cyst, with the "ball of wool sign"; (f, g) intraoperative findings showing the thoracotomic approach and the shreds of pericyst requiring careful dissection.
Figure 2: Timeline of the events.
|
v3-fos-license
|
2024-05-08T06:17:06.234Z
|
2024-05-06T00:00:00.000
|
269611162
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": null,
"oa_url": null,
"pdf_hash": "2db77173a913ee241d2a5519d89559f536062abb",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41894",
"s2fieldsofstudy": [
"Environmental Science",
"Computer Science"
],
"sha1": "337576f88124374e6622188d141dc375a4480f4e",
"year": 2024
}
|
pes2o/s2orc
|
Improved random forest classification model combined with C5.0 algorithm for vegetation feature analysis in non-agricultural environments
In response to the challenges posed by the high computational complexity and suboptimal classification performance of traditional random forest algorithms when dealing with high-dimensional and noisy non-agricultural vegetation satellite data, this paper proposes an enhanced random forest algorithm based on the C5.0 algorithm. The paper focuses on the Liaohe Plain, selecting two distinct non-agricultural landscape patterns in Shenbei New District and Changtu County as research objects. High-resolution satellite data from GF-2 serves as the experimental dataset. This paper introduces an ensemble feature method based on the bagging concept to improve the original random forest classification model. This method enhances the likelihood of selecting features beneficial to classifying positive class samples, avoiding excessive removal of useful features from negative samples. This approach ensures feature importance and model diversity. The C5.0 algorithm is then employed for feature selection, and the enhanced vegetation index (EVI) is utilized for vegetation coverage estimation. Results indicate that employing a multi-scale parameter selection tool, combined with limited field-measured data, facilitates the identification and classification of plant species in forest landscapes. The C5.0 algorithm effectively selects classification features, minimizing information redundancy. The established object-oriented random forest classification model achieves an impressive accuracy of 94.02% on the aerial imagery for forest classification dataset, with EVI-based vegetation coverage estimation demonstrating high accuracy. In experiments on the same test set, the proposed algorithm attains an average accuracy of 90.20%, outperforming common model algorithms such as bidirectional encoder representation from transformer, FastText, and convolutional neural network, which achieve average accuracies ranging from 84.41 to 88.33% in identifying non-agricultural artificial habitat vegetation features. The proposed algorithm exhibits a competitive edge compared to other algorithms. These research findings contribute scientific evidence for protecting agricultural ecosystems and restoring agricultural ecosystem biodiversity.
Literature review
With the rapid advancement of information technology, feature analysis algorithms have been continuously optimized in text analysis. Ozigis et al. 11 conducted research on the fusion and classification of various vegetation indices and spectral wavelengths in different bands, utilizing random forest classifiers. The random forest machine learning classifier demonstrates versatility in its application to various ecological environments and has the capability to generate accurate vegetation function type maps, thereby offering an effective approach for vegetation classification 12. Dobrinić et al. 13 employed a random forest variable selection method with reduced precision to identify the most relevant features for vegetation mapping, resulting in improved classification performance suitable for large-scale land cover classification. Meno et al. 14 utilized machine learning algorithms such as random forest and C5.0 decision trees to successfully predict daily late blight spore levels, with the C5.0-optimized random forest model achieving higher accuracy. Guo et al. 15 investigated the generation of regional landslide susceptibility maps using machine learning methods based on the C5.0 decision tree model and the K-means clustering algorithm. Their results showed superior mapping outcomes compared to traditional models like SVMs and Bayesian networks 15. Çelik 16 conducted a comparison between the C4.5 and C5.0 algorithms and found that the classification model built using the C5.0 algorithm exhibited lower misclassification rates and higher accuracy. The use of satellite-derived normalized difference vegetation index (NDVI) and EVI enables the assessment of the direct impact of floods on vegetation cover, offering an effective method for studying vegetation coverage 17,18. Additionally, Dai et al. 19 demonstrated that the evaluation of the influence of crop residues on vegetation index and vegetation cover estimation could be achieved by comparing enhancement values and vertical values using a 2-m pixel model and a three-dimensional radiative transfer model.
In summary, machine learning algorithms, including random forest and C5.0 decision trees, have found extensive application in vegetation classification, land cover classification, and yield prediction.
Non-agricultural habitat vegetation
Non-agricultural environment vegetation encompasses a wide range of plant communities found in non-agricultural settings. These vegetation types include diverse plant forms, such as flowers in urban parks, street trees along sidewalks, and trees and herbaceous plants in forests 21. Figure 1 visually presents the different types of non-agricultural environment vegetation and highlights their research significance.
In non-agricultural environments, plant diversity (S_W), richness (F), and dominance (Y) can be calculated from sample-plot data, where N stands for the number of species in a sample plot, X signifies the total number of individuals of all species in the sample plot, and Z_M represents the importance of species M within its population. These indices provide a quantitative assessment of plant diversity, richness, and dominance in non-agricultural environments.
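The paper's exact index equations were lost in extraction, so, as a rough illustration only, the following Python sketch computes three standard-textbook forms often used for these quantities (Shannon-Wiener diversity, Margalef richness, Simpson dominance); whether these match the authors' original equations is an assumption:

import math

def vegetation_indices(abundances):
    """abundances: individual counts per species in one sample plot."""
    X = sum(abundances)                      # total individuals (X in the text)
    N = len(abundances)                      # number of species (N in the text)
    p = [a / X for a in abundances]          # relative importance per species (Z_M)
    s_w = -sum(pi * math.log(pi) for pi in p if pi > 0)  # Shannon-Wiener diversity
    f = (N - 1) / math.log(X) if X > 1 else 0.0          # Margalef richness
    y = sum(pi ** 2 for pi in p)                         # Simpson dominance
    return s_w, f, y

print(vegetation_indices([30, 15, 5]))  # one plot with three species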
The theoretical framework of habitats lays the groundwork for understanding the distribution, ecological functions, and impacts of non-agricultural habitat vegetation. Researchers can establish a comprehensive research background and theoretical framework by incorporating concepts of habitat and non-agricultural habitat vegetation. This enables targeted exploration of ecological characteristics, ecosystem functions, and classification issues pertaining to vegetation in non-agricultural habitats. Such a framework serves as the basis for addressing the challenges posed by the high computational complexity and poor classification performance of traditional random forest algorithms in classifying non-agricultural habitat vegetation.
High-resolution satellite-2 (GF-2) data processing workflow
The high-resolution satellite-2 (GF-2) is a domestically developed remote sensing satellite system in China that offers high-resolution and multispectral capabilities. It was designed and manufactured by the Fifth Academy of China Aerospace Science and Technology Corporation 22. Detailed parameters of the GF-2 satellite can be found in Tables 1 and 2.
The GF-2 satellite has significantly contributed to diverse fields, including land resource surveys and environmental monitoring, by providing high-resolution multispectral image data. This has been made possible through the implementation of an efficient image data preprocessing workflow tailored specifically for the GF-2 satellite. The extensive capabilities of the GF-2 satellite, along with its accompanying image data preprocessing workflow, are clearly illustrated in Fig. 2.
Figure 2 showcases the remarkable capabilities of the GF-2 satellite, including high-resolution imaging, multispectral observation, data acquisition and updates, wide application domains, as well as data sharing and utilization.
C5.0 algorithm and computational process
The C5.0 algorithm is a decision tree algorithm that utilizes the information gain ratio criterion for effective analysis. It is particularly suitable for handling high-dimensional data and large-scale datasets. Through the process of feature selection and determination of splitting points, the C5.0 algorithm efficiently extracts valuable information from complex data structures 23. The key characteristics and computational process of the C5.0 algorithm are visually represented in Fig. 3.
As illustrated in Fig. 3, the C5.0 algorithm exhibits distinct characteristics, including feature selection, determination of splitting points, and recursive processing. It excels in constructing decision tree models that are both accurate and interpretable, enhancing model generalization through the implementation of pruning operations. The computational process primarily entails data initialization, feature selection, data splitting, and recursive processing.
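To make the splitting criterion concrete, the following minimal Python sketch computes the information gain ratio used by C5.0-style trees for a categorical feature. It is an illustrative reduction (no continuous thresholds, no pruning) rather than the paper's implementation, and all names are ours:

import math
from collections import Counter

def entropy(values):
    """Shannon entropy of a list of labels or feature values."""
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

def gain_ratio(feature_values, labels):
    """Information gain of a categorical split, normalized by the split entropy."""
    n = len(labels)
    subsets = {}
    for v, y in zip(feature_values, labels):
        subsets.setdefault(v, []).append(y)
    cond_entropy = sum(len(s) / n * entropy(s) for s in subsets.values())
    gain = entropy(labels) - cond_entropy
    split_info = entropy(feature_values)   # intrinsic information of the split
    return gain / split_info if split_info > 0 else 0.0

# The feature with the highest gain ratio is chosen as the splitting feature.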
Estimation of EVI and calculation of vegetation cover
The estimation of vegetation cover is accomplished using the EVI, which utilizes remote sensing data from the visible and near-infrared bands 24. EVI serves as an effective index for assessing vegetation cover and is characterized by specific computational processes, as depicted in Fig. 4.

As depicted in Fig. 4, EVI estimation possesses distinct characteristics, including its reliance on vegetation indices, sensitivity to vegetation cover, ability to reflect vegetation growth status, and applicability to large-scale areas. The computational process of EVI estimation encompasses several stages, namely obtaining remote sensing data, performing data preprocessing, calculating EVI values, conducting spatial statistics and analysis, and interpreting and applying the obtained results.
Let R_r represent the reflectance of near-infrared light in the remote sensing image, r the reflectance of red light, and b the reflectance of blue light. O signifies the gain factor used to correct the spectral response, V_1 and V_2 serve as adjustment parameters used to correct atmospheric scattering and soil background effects, and D stands for the adjustment parameter for correcting image background brightness. The EVI is defined as Eq. (4):

EVI = O (R_r − r) / (R_r + V_1 r − V_2 b + D) (4)

Vegetation cover refers to the extent or proportion of a particular region or surface that is occupied by plants.
It provides information about the density and growth status of vegetation in that area 25. The estimation of vegetation cover is commonly performed using methods such as the NDVI and EVI algorithms. These indices enable the quantification and assessment of vegetation abundance and health.
The determination coefficient K² and root mean square error W can be used to evaluate the accuracy of the vegetation cover estimation. With q the total number of samples, j_p the measured vegetation cover value for the p-th sample, J_p the modeled estimate of the vegetation cover for the p-th sample, and j̄ the average vegetation cover value, they are calculated as

K² = 1 − Σ_{p=1}^{q} (j_p − J_p)² / Σ_{p=1}^{q} (j_p − j̄)², W = sqrt((1/q) Σ_{p=1}^{q} (j_p − J_p)²).
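As a hedged illustration of Eq. (4) and of the two accuracy measures, the following numpy sketch computes EVI from band reflectances and evaluates an estimate against reference values; the MODIS-style coefficients (O = 2.5, V1 = 6, V2 = 7.5, D = 1) are a common default assumed here, not values taken from the paper:

import numpy as np

def evi(nir, red, blue, O=2.5, V1=6.0, V2=7.5, D=1.0):
    """Enhanced Vegetation Index per Eq. (4); inputs are reflectance arrays."""
    return O * (nir - red) / (nir + V1 * red - V2 * blue + D)

def k2_and_w(measured, estimated):
    """Determination coefficient K^2 and root mean square error W."""
    measured, estimated = np.asarray(measured), np.asarray(estimated)
    ss_res = np.sum((measured - estimated) ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    return 1.0 - ss_res / ss_tot, np.sqrt(ss_res / measured.size)

print(evi(np.array([0.5]), np.array([0.08]), np.array([0.04])))  # [0.625]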
Random forest classification model under text classification
Text classification is an automated process that aims to categorize textual data into predefined classes or labels. It encompasses several steps, including preprocessing the raw text, feature extraction, and training or predicting using machine learning or deep learning models 26. The workflow for text classification is visualized in Fig. 5, demonstrating the sequence of tasks involved in the process.
As depicted in Fig. 5, the text classification process comprises several essential steps, including dataset selection, document parsing, feature extraction, feature selection, text vectorization, classification, and evaluation. Each of these steps plays a crucial role in achieving accurate and reliable text classification results.
The bagging algorithm, illustrated in Fig. 6, is an ensemble method that effectively reduces model variance, improves generalization, and enhances prediction accuracy. It achieves this by employing bootstrap sampling and aggregation techniques, and it is widely used in various machine learning tasks.

Random Forest is a robust machine learning algorithm widely employed for text classification tasks 28,29. It exhibits notable performance in handling high-dimensional data and provides effective feature selection and prediction capabilities. Figure 7 illustrates the structure and algorithmic process of the random forest model.
Figure 7 depicts the Random Forest model comprising multiple decision trees. Each decision tree is trained using bootstrap sampling and random feature selection. The final classification or regression is conducted by aggregating the prediction results of individual trees, either through voting or averaging. This ensemble approach aims to enhance the accuracy and generalization capability of the model.
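A minimal sketch of this bagging-and-voting scheme is shown below, using scikit-learn's DecisionTreeClassifier as the base learner purely for illustration (the paper's base trees are C5.0-style, so the CART learner here is a stand-in assumption, as are the helper names and the sqrt(d) feature subset size):

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bagged_forest(X, y, n_trees=50, seed=0):
    """Train n_trees trees, each on a bootstrap sample and a random feature subset."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    trees = []
    for _ in range(n_trees):
        rows = rng.integers(0, n, size=n)                          # bootstrap sample
        cols = rng.choice(d, size=max(1, int(np.sqrt(d))), replace=False)
        tree = DecisionTreeClassifier().fit(X[np.ix_(rows, cols)], y[rows])
        trees.append((tree, cols))
    return trees

def predict_vote(trees, X):
    """Majority vote across trees; assumes integer class labels."""
    votes = np.stack([t.predict(X[:, cols]) for t, cols in trees]).astype(int)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)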
Let c stand for the total number of pixels, C for the total number of pixels actually belonging to class r, β_vv for the number of pixels correctly classified as class v, β_rr for the number of pixels correctly classified as class r, β_rv for the number of pixels classified as class r but actually belonging to class v, and β_vr for the number of pixels classified as class v but actually belonging to class r. The overall accuracy Q_J, map accuracy Z_T, and user's accuracy Y_H are then

Q_J = (1/c) Σ_v β_vv, Z_T = β_rr / C = β_rr / (β_rr + Σ_{v≠r} β_vr), Y_H = β_rr / (β_rr + Σ_{v≠r} β_rv).

Let T represent the total number of pixels used for accuracy evaluation, α the total number of classes, X_γγ the number of correctly classified pixels of class γ, X_γg the total number of pixels in the γ-th row of the confusion matrix, and X_gγ the total number of pixels in the γ-th column. The kappa coefficient can then be described as Eq. (10):

κ = (T Σ_{γ=1}^{α} X_γγ − Σ_{γ=1}^{α} X_γg X_gγ) / (T² − Σ_{γ=1}^{α} X_γg X_gγ). (10)

This paper enhances the random forest classifier by integrating steps from the C5.0 algorithm to boost its performance. Firstly, the entropy of the initial samples is computed to gauge information uncertainty. Subsequently, the data are partitioned on each feature, with the best splitting feature selected through information gain calculation. Following this, the split with the highest information gain ratio is chosen for partitioning, forming child nodes, and the entire decision tree is generated recursively until all feature attributes are partitioned. These steps enhance the accuracy of the random forest classifier, as the C5.0 algorithm efficiently selects splitting features and yields a more discriminative decision tree structure. By amalgamating the C5.0 algorithm with random forest, the improved algorithm better accommodates the high-dimensional and high-noise characteristics of non-agricultural habitat satellite data, thereby yielding more precise classification outcomes.

This paper utilizes multi-temporal remote sensing data from the GF-2 (Gaofen-2) and Landsat-8 satellites. GF-2 satellite data comprises panchromatic and multispectral bands, spanning 0.45–0.90 µm for the panchromatic band and including blue, green, red, and near-infrared bands for the multispectral data. Landsat-8 satellite data encompasses multispectral bands covering blue, green, red, and near-infrared. Observation times were GF-2 (2018-06-03) and Landsat-8 (2018-05-24). Initially, radiometric calibration using the Generic Calibrator tool in ENVI 5.3 software ensures data accuracy for both panchromatic and multispectral images. Subsequently, atmospheric correction of the multispectral data is conducted using the FLAASH tool to mitigate atmospheric and lighting effects on land feature reflectance. Orthorectification via the RPC Orthorectification Workflow in ENVI eliminates geometric distortions, yielding accurate orthorectified images. Finally, fusion of the multispectral image with the panchromatic image using the GS method produces high-resolution multispectral images, serving as reliable foundational data for subsequent land cover classification and change detection.

Feature extraction on segmented objects covers four main aspects: spectral, geometric, texture, and remote sensing indices, totaling 85 features. Spectral features, reflecting object spectral information, include grayscale mean, standard deviation, brightness, and maximum difference calculations. Geometric features, derived from covariance matrix statistics, describe an object's geometric shape and size, comprising area, perimeter, length-width ratio, density, and rectangular fit. Texture features, calculated using a gray-level co-occurrence matrix and gray-level difference vector, capture object texture information, such as homogeneity, variance, heterogeneity, angular second moment, and entropy. Remote sensing indices, including NDVI, EVI, the Atmospherically Resistant Vegetation Index (ARVI), a Water Index, and a Building Area Index, aided in land feature extraction.
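For concreteness, here is a short numpy sketch (our own variable names; a hedged illustration rather than the authors' code) that computes the overall accuracy and the kappa coefficient of Eq. (10) from a confusion matrix:

import numpy as np

def overall_accuracy_and_kappa(cm):
    """cm[i, j]: pixels assigned to class i whose true class is j."""
    T = cm.sum()                 # total pixels used for evaluation
    diag = np.trace(cm)          # correctly classified pixels
    row = cm.sum(axis=1)         # totals per assigned class (rows)
    col = cm.sum(axis=0)         # totals per true class (columns)
    overall = diag / T
    kappa = (T * diag - np.dot(row, col)) / (T ** 2 - np.dot(row, col))
    return overall, kappa

cm = np.array([[50, 3], [2, 45]])
print(overall_accuracy_and_kappa(cm))  # (0.95, 0.8997...)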
This paper employs high-resolution satellite imagery data alongside an enhanced random forest classification model based on the C5.0 algorithm. To adapt text classification algorithms to image data, a preprocessing step is essential, transforming images into feature vectors suitable for algorithmic processing. This process entails extracting features like spectral information and texture features, alongside data preprocessing and labeling. Subsequently, appropriate text classification algorithms, such as SVMs and naive Bayes, are chosen for model training, leveraging enhanced feature selection methods and feature-based enhanced vegetation indices for optimization. Following model training, thorough evaluation and validation refine the classification model, which is then applied to unknown image data for prediction. This holistic approach effectively applies text classification algorithms to image data, enabling precise classification and identification of complex image data.

The paper concentrates on forest classification research across two categories: forest and grassland. It employs 932 grassland samples and 45 forest samples for the training set, and 1031 grassland samples and 23 forest samples for the validation set, meticulously annotated and labeled to ensure the accuracy and reliability of the models presented in this paper.

Figure 8 presents a comparison of the accuracy of the improved random forest classification model and the estimation results of vegetation coverage. As depicted in Fig. 8, the improved Random Forest classification model achieves high accuracy on different datasets. On the GF-2 dataset, the Random Forest model exhibits an overall accuracy of 91.19% with a Kappa coefficient of 0.911, the C5.0 model achieves an overall accuracy of 85.17% with a Kappa coefficient of 0.831, and the SVM model achieves an overall accuracy of 90.2% with a Kappa coefficient of 0.897. On the AIFC dataset, the Random Forest model achieves an overall accuracy of 94.02% with a Kappa coefficient of 0.931, the C5.0 model 92% with a Kappa coefficient of 0.919, and the SVM model 93.97% with a Kappa coefficient of 0.922. On the Landsat dataset, the Random Forest model achieves an overall accuracy of 91.01% with a Kappa coefficient of 0.895, the C5.0 model 83.51% with a Kappa coefficient of 0.811, and the SVM model 89.63% with a Kappa coefficient of 0.875.

The comparison of vegetation coverage estimation results indicates that different vegetation types have a significant impact on NDVI and EVI estimation values. Forested areas generally exhibit higher NDVI and EVI values than grassland, indicating higher vegetation coverage and growth vitality in forested regions. Among the grassland samples, Sample 1 demonstrates the highest NDVI and EVI estimation values (45.3% and 49.5%, respectively), while Sample 5 exhibits the lowest (18.9% and 17.6%). Among the forest samples, Sample 1 has the highest NDVI and EVI estimation values (52.1% and 89.2%, respectively), while Sample 6 has the lowest (27.6% and 36.1%). Notably, EVI estimation values generally outperform NDVI in reflecting the vegetation condition in forested areas, as they tend to be higher in such regions.

In summary, the findings of this paper underscore the notable advantages of the enhanced random forest classification model in processing high-resolution satellite data. Across diverse datasets, the model exhibits high accuracy and Kappa coefficients, showcasing its proficiency in accurately categorizing various
land cover types. Comparative analyses of NDVI and EVI estimates across different vegetation types unveil disparities in vegetation coverage and vitality, offering valuable insights into land surface vegetation distribution and ecosystem conditions. Moreover, the research outcomes emphasize the reliability and robustness of the enhanced random forest classification model in vegetation classification and coverage estimation, thereby furnishing substantial support for leveraging remote sensing data in ecological environment monitoring and resource management endeavors.

Figure 9 illustrates the comparison of average accuracy among different models and the landscape classification accuracy. As depicted in Fig. 9, the comparison of average accuracy among common classification models shows that Bert, FastText, and TextCNN achieve average accuracies of 84.41%, 87.55%, and 88.33%, respectively. In contrast, the improved model algorithm attains an average accuracy of 90.20%, significantly outperforming these common models in recognizing features of non-agricultural artificial habitat vegetation. This underscores its superior classification accuracy and performance in non-agricultural habitat vegetation classification.

Analyzing the landscape classification accuracy of the improved random forest model reveals notable enhancements. In Sample 1, the unimproved random forest model yields user accuracy, mapping accuracy, and overall accuracy of 0.74, 0.73, and 0.67, respectively, with a Kappa coefficient of 0.58, indicating subpar accuracy and performance. In contrast, the improved model achieves substantial improvements, with user accuracy, mapping accuracy, and overall accuracy reaching 0.97, 0.97, and 0.90, respectively; the Kappa coefficient rises to 0.81, signifying higher classification accuracy and result reliability. In Sample 2, the unimproved random forest model exhibits user accuracy, mapping accuracy, and overall accuracy of 0.12, 0.43, and 0.64, respectively, with a Kappa coefficient of 0.51, indicating inadequate overall performance. Conversely, the improved model demonstrates enhanced accuracy metrics, with user accuracy, mapping accuracy, overall accuracy, and Kappa coefficient reaching 0.21, 0.66, 0.87, and 0.74, respectively. These results affirm its superior overall accuracy and improved classification accuracy and result reliability.

In general, the evaluation and comparison of the enhanced random forest classification model in non-agricultural artificial habitat vegetation classification tasks yield the following conclusions: the enhanced model exhibits remarkable accuracy and reliability in discerning non-agricultural artificial habitat vegetation characteristics; compared to conventional classification models, it attains higher average accuracy, signifying superior classification performance; and comprehensive landscape classification accuracy analysis shows substantial enhancements across various samples, further affirming its efficacy in practical scenarios. In summary, this enhanced random forest classification model holds considerable practical value and promising application prospects, particularly in ecological environment monitoring, resource management, and land use planning.
Discussion
In the realm of non-agricultural habitat vegetation research, this paper delves deeply into the classification of vegetation satellite data within non-agricultural environments. Focusing on the Liaohe Plain and two distinct non-agricultural landscapes, Shenyang North New District and Changtu County, high-resolution satellite data serve as the experimental dataset. The prevalent challenges of high dimensionality and significant noise are acknowledged in the field. However, through refining the random forest classification model and integrating the C5.0 algorithm and EVI estimation, this paper aims to optimize the feature analysis model, enhancing the accuracy and generalization ability of the classification model for non-agricultural habitat vegetation. Notably, the adoption of an ensemble feature method based on the bagging approach increases the likelihood of selecting features conducive to classifying positive samples while mitigating the risk of discarding useful features from negative samples. This ensures the significance of features and promotes model diversity, offering a novel approach to address issues like information redundancy and high computational complexity in satellite data classification for non-agricultural habitat vegetation. Additionally, leveraging the C5.0 algorithm alongside EVI estimation provides a more scientific foundation for selecting classification features. Overall, this paper innovates in methodology and demonstrates superior accuracy and competitiveness through experimentation in classifying non-agricultural habitat vegetation. By enhancing the capability to identify and classify such vegetation, it furnishes a more reliable scientific underpinning for ecosystem protection and biodiversity restoration in farmland ecosystems. Future research avenues could explore the applicability of this method in diverse regions and datasets to affirm its universality and stability. Research on non-agricultural habitat vegetation serves multiple purposes, including comprehending urban ecosystems, preserving natural environments, assessing vegetation health, and providing scientific grounding for urban planning, ecological conservation, and sustainable development.
Conclusion
In recent years, the impact of non-agricultural habitat vegetation on ecological diversity and balance has grown in significance. However, challenges persist in satellite data classification of such vegetation, prompting the need for research to optimize models for feature analysis, enhancing classification accuracy. This paper selects the Liaohe Plain as the research area, with Shenyang North New District and Changtu County as focal points, utilizing high-resolution satellite data as the experimental dataset. The original random forest model is refined to improve classification by introducing an ensemble feature method based on the bagging approach. This method enhances the selection of features conducive to classifying positive samples while preserving useful features from negative samples, ensuring feature importance and model diversity. Additionally, the C5.0 algorithm is employed for feature selection, and EVI is utilized to estimate vegetation coverage. The results demonstrate the high classification performance of the random forest model in non-agricultural habitat vegetation satellite data classification. Achieving an overall accuracy of 94.02% and a Kappa coefficient of 0.931 on the AIFC dataset, the random forest model outperforms the C5.0 model and support vector machine model in terms of classification accuracy and reliability. Moreover, EVI-based vegetation coverage estimation yields highly accurate results. With an average accuracy of 90.20%, the improved algorithm surpasses common model algorithms like Bert, FastText, and TextCNN, which had average accuracies ranging from 84.41 to 88.33%. This underscores the enhanced accuracy of the improved model algorithm, rendering it more adept at identifying features of non-agricultural habitat vegetation. The enhanced model facilitates precise identification and mapping of target categories, offering valuable insights for decision-making and resource management in relevant fields. It also provides guidance for further refinement and application of classification algorithms, contributing to advancements in satellite data analysis and ecosystem management.
One limitation of this paper pertains to the data used. The experiments were conducted solely on specific regions and agricultural landscape data from the Liaohe Plain, which may introduce biases and restrict the representation of a broader range of non-agricultural habitat vegetation. Furthermore, the parameter settings employed here may not be universally applicable to other datasets, necessitating further investigation into parameter tuning and generalizability. To address this limitation, future research should aim to expand the dataset by incorporating a wider range of non-agricultural habitat vegetation types from diverse regions. This strategy would facilitate the validation of the improved algorithm's robustness and applicability. Additionally, optimizing the parameter settings of the improved algorithm should be considered to enhance model performance and generalizability, enabling its suitability for various non-agricultural habitat vegetation classification tasks. Lastly, exploring the integration of additional text classification algorithms or incorporating deep learning methods could further enhance classification performance.
Figure 2. Capability of GF-2 satellite and image data preprocessing process.
Figure 6. Calculation process of the bagging algorithm.
Figure 7. Structure and algorithm flow of the Random Forest model.
Figure 8. Comparison of model accuracy and vegetation coverage estimation results.
Figure 9. Comparison results of average accuracy and landscape classification accuracy of different models.
The improved random forest classification model based on the C5.0 algorithm established in this paper utilizes several databases, including the GF-2, Landsat, and Aerial Imagery Forest Classification (AIFC) datasets. The GF-2 database comprises high-resolution remote sensing image data, remote sensing products, and remote sensing application services from the Chinese High-Resolution Earth Observation System's GF-2 satellite. The Landsat database contains remote sensing image data acquired through the United States Landsat program, which utilizes multispectral remote sensing technology to capture surface images and provides data for multiple spectral bands, widely applied in fields such as land use, vegetation monitoring, and water resources management. The AIFC dataset, available at https://www.gisrsdata.com, is specifically designed for forest classification research, comprising high-resolution aerial imagery data tailored for forest areas, which can be used to train and evaluate the performance of forest classification algorithms and models (Supplementary information).
|
v3-fos-license
|
2015-05-13T22:28:07.000Z
|
2014-11-18T00:00:00.000
|
119217865
|
{
"extfieldsofstudy": [
"Physics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1016/j.nuclphysb.2015.05.014",
"pdf_hash": "c1d12f80504b93be394745038fa27780d51afaa0",
"pdf_src": "Arxiv",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41896",
"s2fieldsofstudy": [
"Physics"
],
"sha1": "ba40c8f4f888c890ed552a5b172713464ebe08b3",
"year": 2015
}
|
pes2o/s2orc
|
The effective QCD phase diagram and the critical end point
We study the QCD phase diagram on the plane of temperature T and quark chemical potential mu, modelling the strong interactions with the linear sigma model coupled to quarks. The phase transition line is found from the effective potential at finite T and mu taking into accounts the plasma screening effects. We find the location of the critical end point (CEP) to be (mu^CEP/T_c,T^CEP/T_c) sim (1.2,0.8), where T_c is the (pseudo)critical temperature for the crossover phase transition at vanishing mu. This location lies within the region found by lattice inspired calculations. The results show that in the linear sigma model, the CEP's location in the phase diagram is expectedly determined solely through chiral symmetry breaking. The same is likely to be true for all other models which do not exhibit confinement, provided the proper treatment of the plasma infrared properties for the description of chiral symmetry restoration is implemented. Similarly, we also expect these corrections to be substantially relevant in the QCD phase diagram.
The different phases in which matter, made up of quarks and gluons, arranges itself depend, as for any other substance, on the temperature and density or, equivalently, on the temperature and chemical potentials. Under the assumptions of beta decay equilibrium and charge neutrality, the representation of the QCD phase diagram is two dimensional. It is customarily plotted with the light-quark chemical potential µ as the horizontal variable and the temperature T as the vertical one. µ is related to the baryon chemical potential µ_B by µ_B = 3µ.
Most of our knowledge of the phase diagram is restricted to the µ = 0 axis. The phase diagram is, by and large, unknown. For physical quark masses and µ = 0, lattice calculations have shown [1] that the change from the low temperature phase, where the degrees of freedom are hadrons, to the high temperature phase described by quarks and gluons, is an analytic crossover. The phase transition has a dual nature: on the one hand the colorsinglet hadrons break up leading to deconfined quarks and gluons; this is dubbed as the deconfinement phase transition. On the other hand, the dynamically generated component of quark masses within hadrons vanishes; this is referred to as chiral symmetry restoration.
The picture presented by lattice QCD for T ≥ 0, µ = 0 cannot be easily extended to the case µ ≠ 0, the reason being that standard Monte Carlo simulations can only be applied to the case where µ is either zero or purely imaginary. Simulations with µ ≠ 0 are hindered by the sign problem, see, for example, [7], though some mathematical extensions of lattice techniques [8] can probe this region. Schwinger-Dyson equation studies support these findings and can successfully explore all regions of the phase space [9].
On the other hand, a number of different model approaches indicate that the transition along the µ axis, at T = 0, is strongly first order [10]. Since the first order line originating at T = 0 cannot end at the µ = 0 axis, which corresponds to the starting point of the crossover line, it must terminate somewhere in the middle of the phase diagram. This point is generally referred to as the critical end point (CEP). The location and observation of the CEP continue to be at the center of efforts to understand the properties of strongly interacting matter under extreme conditions. The mathematical extensions of lattice techniques place the CEP in the region (µ^CEP/T_c, T^CEP/T_c) ∼ (1.0−1.4, 0.9).

In the first of Refs. [9], it is argued that the theoretical location of the CEP depends on the size of the confining length scale used to describe strongly interacting matter at finite density/temperature. This argument is supported by the observation that the models which do not account for this scale [11][12][13][14] produce either a CEP closer to the µ axis (µ^CEP/T_c and T^CEP/T_c larger and smaller, respectively) or a lower T_c [15] than the lattice based approaches or the ones which consider a finite confining length scale. Given the dual nature of the QCD phase transition, it is interesting to explore whether there are other features in models which have access only to the chiral symmetry restoration facet of QCD that, when properly accounted for, produce the CEP's location more in line with lattice inspired results.
An important clue is provided by the behavior of the critical temperature as a function of an applied magnetic field. Lattice calculations have found that this temperature decreases when the field strength increases [16][17][18]. It has recently been shown that this phenomenon, dubbed inverse magnetic catalysis, is not due exclusively to confinement; instead, chiral symmetry restoration plays an important role. This result stems from the decrease of the coupling constant with increasing field strength and is obtained within effective models that do not have confinement, such as the Abelian Higgs model or the linear sigma model with quarks. The novel feature implemented in these calculations is the handling of the screening properties of the plasma, which effectively takes the treatment beyond the mean field approximation [19,20].
The importance of accounting for screening in plasmas where massless bosons appear has been pointed out since the pioneering work in Ref. [21] and implemented in the context of the Standard Model to study the electroweak phase transition [22]. Screening is also important to obtain a decrease of the coupling constant with the magnetic field strength in QCD in the Hard Thermal Loop approximation [23]. In this work we explore the consequences of the proper handling of the plasma screening properties in the description of the effective QCD phase diagram within the linear sigma model with quarks. We find that for certain values of the model parameters, obtained from physical constraints, the CEP's location agrees with lattice inspired calculations. Since the linear sigma model does not have confinement, we argue that it is the adequate description of the plasma screening properties for the chiral symmetry breaking within the model which determines the CEP's location.
We start from the linear sigma model coupled to quarks, given by the Lagrangian density

L = (1/2)(∂_µ σ)² + (1/2)(∂_µ π)² + (a²/2)(σ² + π²) − (λ/4)(σ² + π²)² + i ψ̄ γ^µ ∂_µ ψ − g ψ̄ (σ + i γ₅ τ · π) ψ,    (1)

where ψ is an SU(2) isospin doublet, π = (π₁, π₂, π₃) is an isospin triplet and σ is an isospin singlet. The neutral pion is taken as the third component of the pion isovector, π⁰ = π₃, and the charged pions as π± = (π₁ ∓ iπ₂)/√2. The squared mass parameter a² and the couplings λ and g are taken to be positive.
To allow for the spontaneous breaking of symmetry, we let the σ field develop a vacuum expectation value v, which can later be taken as the order parameter of the theory. After this shift, the Lagrangian density can be rewritten in terms of two interaction pieces, L_I^b and L_I^f, which describe the interactions among the fields σ, π and ψ after symmetry breaking. From Eq. (3) we see that the σ, the three pions and the quarks have masses

m²_σ = 3λv² − a²,   m²_π = λv² − a²,   m_f = g v,    (5)

respectively. Including the v-independent terms, choosing the renormalization scale as µ̃ = e^(−1/2) a and after mass renormalization, it is straightforward to show that the effective potential up to the ring diagrams contribution, for a finite chemical potential and in the limit where the masses are small compared to T, can be written in closed form [Eq. (6)] in terms of the digamma function ψ₀(x) and the polylogarithm functions Li_n(x). It can also be shown that the boson self-energy Π appearing in Eq. (6), computed for a finite chemical potential and also in the limit where the masses are small compared to T, is given by

Π = λT²/2 − (N_f N_c g² T²/π²) [Li₂(−e^(µ/T)) + Li₂(−e^(−µ/T))],    (7)

where N_f = 2 and N_c = 3 are the number of light flavors and colors, respectively. In the limit µ → 0, Eq. (6) yields the well known result for the effective potential up to the ring diagrams contribution at high temperature [21]. In the same limit, Eq. (7) reduces to the well known high-temperature expression [20].
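The µ → 0 reduction of Eq. (7) can be verified directly from the special value Li₂(−1) = −π²/12; the following short check (a sketch, assuming the form of Eq. (7) as written above) spells it out:

\lim_{\mu\to 0} \Pi
  = \frac{\lambda T^2}{2} - \frac{N_f N_c g^2 T^2}{\pi^2}\, 2\,\mathrm{Li}_2(-1)
  = \frac{\lambda T^2}{2} - \frac{N_f N_c g^2 T^2}{\pi^2}\left(-\frac{\pi^2}{6}\right)
  = \frac{\lambda T^2}{2} + \frac{N_f N_c g^2 T^2}{6},

that is, the high-temperature expression referred to in [20], with the boson tadpoles contributing λT²/2 and the quark loop contributing N_f N_c g²T²/6.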
Note that the self-energy provides the screening that renders the effective potential in Eq. (6) stable. Should this self-energy be absent, the term (m²_i + Π)^(3/2) would instead be (m²_i)^(3/2), which becomes imaginary when, for certain values of v, m²_i becomes negative [see Eqs. (5)]. This term is obtained from considering the resummation of the ring diagrams, and therefore Eq. (6) represents the effective potential computed beyond the mean field approximation, accounting for the leading screening effects at high temperature.
In order to find the values of the parameters λ, g and a appropriate for the description of the phase transition, we note that when thermal effects are considered the boson masses are modified, since they acquire a thermal component. For µ = 0 they become

m²_σ(T) = 3λv² − a² + Π,   m²_π(T) = λv² − a² + Π.    (8)

At the phase transition, the curvature of the effective potential vanishes at v = 0. Since the boson thermal masses are proportional to this curvature, these also vanish at v = 0. From any of Eqs. (8), we obtain a relation between the model parameters at T_c, given by

a² = Π(T_c).    (9)

Furthermore, we can fix the value of a by noting from Eqs. (5) that the vacuum boson masses satisfy

2a² = m²_σ − 3m²_π.    (10)

Since in our scheme we consider two-flavor massless quarks in the chiral limit, we take T_c ≃ 170 MeV [24], which is slightly larger than the T_c obtained in N_f = 2 + 1 lattice simulations. Also, in order to allow for a crossover phase transition for µ = 0 (which in our description corresponds to a second order transition) with g, λ < 1, we need g² > λ and a not very large. This last condition is obtained for a small m_σ. The Particle Data Group quotes 400 MeV ≤ m_σ ≤ 550 MeV [25]. There are also analyses that place m_σ close to the two-pion threshold [26]. Since the σ is anyhow a broad resonance, in order to satisfy the above requirements we take for definiteness m_σ ≃ 300 MeV, namely, right at the beginning of the two-pion threshold. Therefore, the allowed values for the couplings λ and g are restricted by

λ/2 + N_f N_c g²/6 = (a/T_c)².    (11)

Equation (11) provides a restriction stemming exclusively from the boson sector. We can attempt to supplement this restriction with information from the fermion sector. The physical quark mass is given by

m = g v₀ + m₀,    (12)

where v₀ represents the value of v where the effective potential has a minimum and m₀ is the current mass. When chiral symmetry is restored (referring to its dynamical component), v₀ = 0 and the quark mass is given only by m₀ ≃ 5 MeV. To restrict the allowed values of g, we need information on v₀. A finite value of v₀ is obtained at the onset of the phase transition when this is first order, for otherwise, if the phase transition is second order, v₀ vanishes. At the onset of a first order phase transition, the value of v₀ must be a small fraction of the energy scale a. For definiteness, let us take this scale to be one order of magnitude smaller than a, namely

v₀ ≃ a/10.    (13)

Let us consider that at the phase transition the light-quark masses acquire a chiral-symmetry-broken contribution of the same order as the current mass, namely gv₀ ≃ 5 MeV. Using a ≃ 130 MeV, we get

g ≃ 0.4.    (14)

Here it is pertinent to mention that, though the linear sigma model is usually regarded only as a rough model providing qualitative information at finite temperature, more quantitative statements can be made in this realm by estimating the couplings from arguments valid at the phase transition instead of only from the vacuum [27]. Figure 1 shows the phase diagram obtained for a set of allowed values of λ and g. Note that for small µ the phase transition is second order. In this case the (pseudo)critical temperature is determined by setting the second derivative of the effective potential in Eq. (6) to zero at v = 0. When µ increases, the phase transition becomes first order. The critical temperature is then computed by looking for the temperature at which a secondary minimum at v ≠ 0 becomes degenerate with the minimum at v = 0.
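The numerical side of this parameter fixing is plain arithmetic; the short Python sketch below reproduces it under the stated choices. It assumes the physical pion mass m_π ≈ 139 MeV in Eq. (10), which the text does not quote explicitly, and the form of Eq. (11) as written above, so it is an illustration of the procedure rather than the original analysis code.

import math

# Inputs stated in the text
m_sigma = 300.0   # MeV, sigma mass at the start of the two-pion threshold
m_pi = 139.0      # MeV, physical pion mass (assumed; not quoted in the text)
T_c = 170.0       # MeV, (pseudo)critical temperature at mu = 0
m0 = 5.0          # MeV, light-quark current mass
N_f, N_c = 2, 3   # light flavors and colors

# Eq. (10): vacuum boson masses fix the energy scale a
a = math.sqrt((m_sigma**2 - 3.0 * m_pi**2) / 2.0)  # ~127 MeV, consistent with a ~ 130 MeV

# Eq. (13): v0 taken one order of magnitude below a
v0 = a / 10.0

# Eq. (14): the chiral contribution g*v0 is of the order of the current mass
g = m0 / v0  # ~0.4

# Eq. (11): the boson-sector restriction then fixes lambda for this g
lam = 2.0 * ((a / T_c)**2 - N_f * N_c * g**2 / 6.0)

print(f"a = {a:.1f} MeV, v0 = {v0:.1f} MeV, g = {g:.2f}, lambda = {lam:.2f}")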
From the detailed analysis, we locate the position of the CEP at (µ^CEP/T_c, T^CEP/T_c) ∼ (1.2, 0.8), which is in the same range as the CEP found from lattice-inspired analyses [8].
In conclusion, we have shown that the location of the CEP we find is in line with the location found by mathematical extensions of lattice analyses. Since the linear sigma model does not have confinement, we attribute this agreement to the adequate description of the plasma screening properties for chiral symmetry breaking at finite temperature and density. These properties enter the calculation of the effective potential through the boson self-energy, and the allowed range for the coupling constants is determined by the observations that the thermal boson masses vanish at the phase transition and that the chiral-symmetry-broken fermion mass is proportional to v₀ at the onset of the first order phase transition. These observations determine a relation between the model parameters, which is made quantitative by taking physical values for T_c from lattice calculations, for a from the vacuum boson masses and for g from the light-quark mass. We believe this description will play an important role in determining the location of the CEP in QCD.
|
v3-fos-license
|
2021-02-03T06:16:48.100Z
|
2021-01-24T00:00:00.000
|
231766847
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1422-0067/22/3/1140/pdf",
"pdf_hash": "b82ba80ffc876f8a0b4b292db0c7d9c91d441ed8",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41897",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "c1df3d9321da74711d01792fcda837ee9682b906",
"year": 2021
}
|
pes2o/s2orc
|
Tissue Nonspecific Alkaline Phosphatase Function in Bone and Muscle Progenitor Cells: Control of Mitochondrial Respiration and ATP Production
Tissue nonspecific alkaline phosphatase (TNAP/Alpl) is associated with cell stemness; however, the function of TNAP in mesenchymal progenitor cells remains largely unknown. In this study, we aimed to establish an essential role for TNAP in bone and muscle progenitor cells. We investigated the impact of TNAP deficiency on bone formation, mineralization, and differentiation of bone marrow stromal cells. We also pursued studies of proliferation, mitochondrial function and ATP levels in TNAP deficient bone and muscle progenitor cells. We find that TNAP deficiency decreases trabecular bone volume fraction and trabeculation in addition to decreased mineralization. We also find that Alpl−/− mice (global TNAP knockout mice) exhibit muscle and motor coordination deficiencies similar to those found in individuals with hypophosphatasia (TNAP deficiency). Subsequent studies demonstrate diminished proliferation, with mitochondrial hyperfunction and increased ATP levels in TNAP deficient bone and muscle progenitor cells, plus intracellular expression of TNAP in TNAP+ cranial osteoprogenitors, bone marrow stromal cells, and skeletal muscle progenitor cells. Together, our results indicate that TNAP functions inside bone and muscle progenitor cells to influence mitochondrial respiration and ATP production. Future studies are required to establish mechanisms by which TNAP influences mitochondrial function and determine if modulation of TNAP can alter mitochondrial respiration in vivo.
Introduction
Tissue nonspecific alkaline phosphatase (TNAP) is a cellular enzyme that is encoded by the Alpl gene. TNAP is well known for its role in tissue mineralization, in which TNAP functions as an ectoenzyme in cell and matrix vesicle membranes to hydrolyze pyrophosphate to inorganic phosphate in the extracellular space [1]. Because pyrophosphate is a strong inhibitor of mineralization [2] while phosphate is a substrate for mineralization, TNAP activity promotes collagenous matrix mineralization [3,4]. In fact, TNAP is essential for bone mineralization, such that lack of adequate TNAP leads to the metabolic disorder hypophosphatasia [5,6]. More severe and earlier onset forms of hypophosphatasia have predominant signs of poor bone mineralization, with symptoms of bone bending, pain and fracture leading to mobility limitations, respiratory issues due to weak ribs, muscle weakness and fatigue, as well as seizures and even death [7]. Later onset and milder forms of hypophosphatasia (HPP) also show signs of poor bone mineralization that over time can lead to pain and fracture, with limited bone remaining for good surgical repair [8,9]. Fortunately, through the diligent work of multiple investigators, individuals with severe hypophosphatasia can now be treated with enzyme replacement therapy using a bone targeted recombinant form of TNAP.
TNAP Deficiency Decreases Bone Mineralization and Trabecular Bone Formation by Bone Marrow Stromal Cells
Representative nano CT isosurface renderings show that the bone marrow stromal cell (BMSC) collagen implants (ossicles) from both Alpl +/+ and Alpl −/− mice formed a cortical bone shell after 8 weeks of subcutaneous implantation. However, decreased trabeculation was apparent in the Alpl −/− as compared to Alpl +/+ ossicles (Figure 1A–F). Representative histologic stains confirm the decreased trabecular bone evident in Alpl −/− when compared to the Alpl +/+ ossicles (Figure 1G–N). As expected, alkaline phosphatase staining (purple) was decreased in the Alpl −/− as compared to Alpl +/+ ossicles (Figure 1J,N). It is worth noting that, while the represented Alpl −/− ossicle stain appears to show increased adipocytes compared to the represented Alpl +/+ ossicle stain, we did not find consistent differences in the adiposity of ossicles between the two genotypes.
Quantitative nano computed tomography (nano CT) of ossicle bone supports the representative images shown in Figures 1 and 2. Cortical bone parameters demonstrate a significant decrease in bone mineralization indices (bone mineral content and bone mineral density) in BMSC implants from Alpl −/− mice. Non-normalized cortical bone volume was decreased in Alpl −/− mice; however, there was no significant difference in the bone volume fraction between Alpl −/− and Alpl +/+ mice. Therefore, the cortical bone volume was dependent upon the size of the ossicle (ossicles with Alpl +/+ cells were larger than those with Alpl −/− cells), not the genotype of the BMSCs. In contrast, trabecular bone parameters demonstrate a significant decrease in bone mineralization indices (bone mineral content, bone mineral density) in implants from Alpl −/− mice in addition to decreased indices of trabecular bone volume fraction, trabecular thickness, and trabecular number in Alpl −/− ossicles. Concordantly, trabecular separation is increased in Alpl −/− as compared to Alpl +/+ ossicles.
Cortical and trabecular bone parameters of Alpl +/+ and Alpl −/− ossicles: results shown are means ± standard error; n = 5 per genotype; cort = cortical; trab = trabecular; * p < 0.05, ** p < 0.01, *** p < 0.005, **** p < 0.001 between genotypes.

We also performed micro computed tomography (micro CT) on tibia bones from the Alpl −/− and Alpl +/+ donor mice (Supplementary Figure S1). Results are consistent with the nano CT results from BMSC ossicle implants in that cortical bone volume normalized to total volume was not significantly different, while parameters of bone mineralization (bone mineral content, bone mineral density) were significantly diminished in tibias from Alpl −/− as compared to Alpl +/+ donor mice. Also similar to the nano CT analyses of the ossicles, measures of trabecular bone, including trabecular bone volume fraction (bone volume/total volume), trabecular number, and trabecular thickness, were all significantly decreased, while trabecular spacing was significantly increased, in tibias from Alpl −/− as compared to Alpl +/+ donor mice. Representative images of alizarin- and Alcian blue-stained tibias show the degree of mineralization defects in long bones of Alpl −/− as compared to Alpl +/+ donor mice (Supplementary Figure S2).
TNAP Deficiency Increases BMSC Total and Adipocyte Colony Forming Units

Alpl −/− BMSCs formed more overall colonies as stained with crystal violet than Alpl +/+ cells when cultured immediately after isolation from host animals (passage 0 cells).
TNAP Deficiency Increases Both Osteoblast and Adipocyte Differentiation of BMSCs
Despite the lack of alkaline phosphatase positive Alpl −/− colony formation, analysis of mRNA revealed that Alpl −/− cells expressed higher levels of osteoblast genes (Bglap, Ibsp, Col1a1, Sp7) compared to Alpl +/+ cells when cultured in non-differentiation media or media containing ascorbate to induce osteoblast differentiation (Figure 4A,C,E,G). Consistent with the colony forming unit (CFU) oil red staining results, Alpl −/− cells also expressed higher levels of adipocyte genes (Pparg, Fabp4, Adipsin, Adipoq) compared to Alpl +/+ cells when cultured in adipocyte induction and maintenance media (Figure 4).
Figure 4. The osteoblast genes Bglap (A), Ibsp (B), Col1a1 (C), and Sp7 (D) significantly increased in the Alpl −/− compared to Alpl +/+ BMSCs when cultured in non-differentiation media (no tx) or osteoblast differentiation media containing ascorbate (Asc) for 6 days. The adipocyte genes Adipsin (E), Adipoq (F), Fabp4 (G), and Pparg (H) significantly increased in the Alpl −/− compared to Alpl +/+ BMSCs when cultured in adipocyte induction then maintenance media (AdI/M) for 6 days total. Passage 2 cells were used. n = 3 per genotype. * p < 0.05, statistical significance between genotypes.

TNAP Deficiency Decreases Muscle Strength and Impairs Motor Coordination

Because individuals with hypophosphatasia exhibit muscle weakness in addition to motor coordination deficiencies, we assessed Alpl −/− mice for muscle strength and coordination as compared to wild type littermates (Figure 5). Grip strength and grip strength normalized for body weight were significantly decreased in Alpl −/− mice compared to their Alpl +/+ littermates. In addition, Alpl −/− mice fell off the inverted screen and horizontal bar much earlier, as compared to the Alpl +/+ mice.
TNAP Deficiency Diminishes Progenitor Cell Proliferation and Increases Cell Metabolic Activity
Abnormalities in mitochondrial cristae were previously reported in muscle biopsies of a sheep model of hypophosphatasia [27]. Mitochondrial dysfunction could account for the diminished strength and motor skills of individuals and mice with severe hypophosphatasia [14,28]. As an initial step towards determining if TNAP influences mitochondrial function, we measured cell metabolic activity as evidenced by reduction of the tetrazolium dye, MTT, in cranial bone progenitor cells (primary cranial cells and MC3T3E1 cells), bone marrow stromal cells (BMSCs) and muscle progenitor cells (Sol8 cells). Because reduction levels of MTT can be influenced by cell number, we also assessed cell proliferation in concurrent experiments. Cell metabolic activity was significantly increased in MC3T3E1 cranial osteoprogenitors transduced with Alpl shRNA as compared to control non-target shRNA, in Alpl −/− as compared to Alpl +/+ primary cranial osteoprogenitors, in Alpl −/− as compared to Alpl +/+ bone marrow stromal cells, and in Sol8 muscle progenitor cells transduced with Alpl shRNA as compared to non-target shRNA (Figure 6A,C,E,G). All TNAP deficient cell types concurrently showed significantly diminished cell proliferation when compared to control cells (Figure 6B,D,F,H).
Figure 6. The number of cells is significantly decreased in Alpl shRNA MC3T3E1 cranial osteoprogenitors at days 2, 4, and 6 after plating, as compared with non-target shRNA MC3T3E1 cells. Similar results (increased metabolic activity with decreased proliferation) were also found for Alpl −/− compared to Alpl +/+ primary cranial cells (C,D); Alpl −/− compared to Alpl +/+ bone marrow stromal cells (BMSCs) (E,F); and Alpl shRNA as compared to non-target shRNA Sol8 skeletal muscle progenitor cells (G,H). n = 3 per genotype. * p < 0.05, statistical significance between genotypes. The Sol8 cell growth experimental period is shorter due to the higher proliferation rate of these cells, as compared to the other cell types. Note: standard deviations were very low in some groups such that, while present, the bars are not visually apparent on the graphs (A,C,E,G).

TNAP Deficiency Alters Mitochondrial Activity and Cell Respiration

Oxygen consumption rate (OCR) reflects the rate of mitochondrial and non-mitochondrial respiration, and can be measured in live cells with/without specific mitochondrial electron transport chain modulators to evaluate mitochondrial function. Results on live cells over time appear to demonstrate differences in mitochondrial function upon TNAP deficiency in MC3T3E1 cranial osteoprogenitors, primary cranial bone osteoprogenitors, bone marrow stromal cells and Sol8 muscle progenitor cells (Figure 7). Statistical comparisons from these assays demonstrate that all tested TNAP deficient progenitor cells exhibit significantly increased levels of basal respiration and ATP production (Table 1). Both mitochondrial proton leak and maximal respiration levels were significantly increased in TNAP deficient MC3T3E1 cells, primary cranial osteoprogenitors, and Sol8 muscle progenitors, but not in TNAP deficient bone marrow stromal cells. Spare respiratory capacity (a measure of how closely cells are respiring relative to their maximal respiratory ability) and non-mitochondrial enzyme activity were also significantly increased in the Sol8 skeletal muscle progenitor cell line.

Table 1. Results shown are means ± standard error. n = 3 per genotype. * p < 0.05, ** p < 0.01, *** p < 0.005, statistical significance between genotypes. Non-Mito = non-mitochondrial; NT = non-target/control.
TNAP Deficiency Increases Intracellular ATP Levels in Bone Marrow Stromal and Sol8 Muscle Progenitor Cells
We next measured ATP in cells to confirm results established by the live cell mitochondrial metabolic function assay described above. Results confirm that intracellular ATP levels were significantly increased in Alpl −/− as compared to Alpl +/+ bone marrow stromal cells and in Sol8 muscle progenitor cells that expressed Alpl shRNA as compared to non-target shRNA (Figure 8).
TNAP is Localized Internally and Co-Localizes with Mitochondria
Our results indicate that TNAP alters mitochondrial function and ATP production in bone and muscle progenitor cells. Therefore, we next sought to determine where TNAP is localized within these cells. Immunofluorescent staining for F-Actin (a cytoskeletal component) in combination with staining for TNAP demonstrates that TNAP is expressed in a peri-nuclear intracellular pattern in undifferentiated MC3T3E1 cranial bone progenitor cells, bone marrow stromal cells, and Sol8 muscle progenitor cells (Figure 9). To determine the spatial relationship between TNAP and mitochondria, we next co-stained for TNAP and mitochondria. Results show that both mitochondria and TNAP are located in a peri-nuclear pattern around the nucleus (Figure 9). Co-localization of TNAP and mitochondria occurs to different degrees in the three cell types; in many cells TNAP localizes near, but not in the same place as, mitochondria (Figure 9). It is worth noting here that the intracellular location of TNAP was confirmed through the use of two different primary antibodies (Abcam ab65834 and R&D MAB29091).
Figure 9. TNAP is expressed in a peri-nuclear pattern and partially co-localizes with mitochondria in undifferentiated bone and muscle progenitor cells. Immunofluorescent staining of undifferentiated MC3T3E1 cranial osteoprogenitor cells, bone marrow stromal cells, and Sol8 muscle progenitor cells reveals that TNAP is localized internally in a peri-nuclear region and is co-localized with mitochondria. Left panels: representative images of co-localization of TNAP (red) and F-Actin (green). Middle panels: representative images of co-localization of TNAP (green) and mitochondria (red). Right panels: representative images of co-localization of TNAP (green) and mitochondria (red) at a higher magnification. Nuclear stain was performed with DAPI (blue).
Discussion
In this study, we first investigated bone marrow stromal cells isolated from Alpl −/− mice to determine if lack of TNAP expression causes cell autonomous defects in osteogenesis and/or differentiation of these bone progenitor cells. To test this hypothesis in a 3D matrix in vivo, we mixed bone marrow stromal cells (BMSCs) from either Alpl −/− or Alpl +/+ mice with a collagen carrier, then implanted them subcutaneously in immunodeficient recipient mice for eight weeks to allow for bone formation. We found that Alpl −/− cells successfully formed a bone cortical shell similar in bone volume fraction, but decreased in mineralization, when compared to those of Alpl +/+ cells. This finding is consistent with the widely accepted and well-proven concept that TNAP is essential for bone mineralization. Notably, we also found that implants of Alpl −/− cells had significantly less trabecular bone volume fraction and trabeculation, in addition to reduced mineralization, when compared to those of Alpl +/+ cells. If the role of TNAP were solely to facilitate matrix mineralization, we would have found trabecular bone that was of equal volume and trabeculation with decreased mineralization. Micro CT of tibial bones from Alpl −/− and Alpl +/+ donor mice confirmed the results of the collagenous implants, in that trabecular bone volume fraction and trabeculation were significantly diminished in tibias of Alpl −/− mice. These results are consistent with previously published studies. For example, in a prior study, bone volume fraction and trabecular thickness, in addition to bone mineral density, were significantly reduced in tibias of Alpl −/− mice as compared to wild type littermates [29]. Together, these findings demonstrate a need for TNAP in BMSCs for trabecular osteogenesis that extends beyond that of matrix mineralization. Such findings are consistent with our prior studies, which showed that cranial bone progenitor cells have a cell autonomous need for TNAP that influences cell cycle progression, cytokinesis, and proliferation [24]. TNAP deficiency may therefore decrease the pool of osteoprogenitors needed for bone formation in certain bone cell populations. Additional studies are required to definitively state that this is the case. Our results also suggest that TNAP may have differential impacts on cortical and trabecular bone formation.
To better understand how TNAP influences BMSCs, we next performed colony forming assays using isolated cells that were not previously passaged. We were surprised to find that Alpl −/− cells formed more colony forming units (CFUs) when stained with crystal violet than Alpl +/+ cells. These data are inconsistent with previous studies, which showed that human TNAP positive BMSCs had significantly more colony forming units than TNAP negative BMSCs [22]. It is likely that the increased crystal violet stained CFUs are not caused by increased proliferation of Alpl −/− cells, because the subsequent proliferation data show decreased BMSC proliferation. Decreased proliferation was also present in other tested TNAP deficient cells, including cranial osteoprogenitors and muscle progenitors. It is worth noting that crystal violet staining of CFUs reflects the number of viable cells from an isolation that result in adherent bone marrow cells. We interpret the increase in crystal violet-stained colonies of Alpl −/− BMSCs to reflect an increased adherence of those cells during the culturing process and/or an increase in the size of these cells, which we previously found in TNAP deficient cranial osteoprogenitor cells [24]. Increased adherence and/or increased cell size of TNAP deficient cells could account for the increased crystal violet-stained colony forming unit assay results in the presence of diminished proliferation.
As expected, Alpl −/− cells did not form alkaline phosphatase positive colonies, while Alpl +/+ cells did. This is easily explained by the lack of TNAP activity in the Alpl −/− cells.
To determine if the Alpl −/− BMSCs were diminished in their overall ability to differentiate into osteoblasts, we performed real time PCR, which demonstrated increased expression of osteoblast differentiation markers both prior to and during differentiation. This indicates that Alpl −/− BMSCs are in fact more disposed to osteoblast differentiation.
Alpl −/− cells also formed more adipocyte colony forming units, as stained with oil red, compared to Alpl +/+ cells. To confirm this result, we performed real time PCR, which demonstrated increased expression of adipocyte associated genes and transcription factors. The findings of increased adipogenesis and expression of adipocyte differentiation markers in Alpl −/− cells are consistent with prior studies which demonstrated that alkaline phosphatase negative mesenchymal subpopulations retain adipocytic potential [30]. It is possible that Alpl −/− cells did not form trabecular bone in the collagenous implants because they had a greater tendency to form adipocytes than osteoblasts, though the fact that Alpl −/− cells also showed increased osteoblast mRNA expression suggests that TNAP is more likely essential for the overall BMSC proliferation vs. differentiation fate switch.
It is important to note that, while results from this study show that TNAP deficiency decreases trabecular bone volume and promotes bone marrow stromal cell osteoblast and adipocyte differentiation while decreasing proliferation, we interpret our results to indicate that TNAP is essential for trabecular bone osteogenesis. This is based upon prior studies of Alpl −/− mice that showed significant differences in osteoblast but not osteoclast function between Alpl −/− and Alpl +/+ mice [31]. A limitation of this study is that we did not study osteoclastogenesis or osteoclast function in Alpl −/− implanted ossicles or tibia.
Because HPP patients have diminished muscular strength and fatigue issues, we tested muscle strength and motor coordination in Alpl +/+ and Alpl −/− littermate mice. We found that Alpl −/− mice phenocopy the deficiencies seen in individuals with TNAP deficiency [14,28], as indicated by significantly decreased grip strength and grip strength normalized to body weight, in addition to significantly diminished endurance and motor coordination in inverted screen and horizontal bar tests. Because the Alpl −/− mice exhibit decreased muscle strength and coordination, we created Sol8 skeletal muscle cells that stably express Alpl shRNA to include TNAP deficient muscle progenitor cells in our subsequent studies.
In this study, we used the MTT reduction assay followed by a live cell Seahorse assay to establish that TNAP deficiency leads to defects in mitochondrial activity and cell respiration in bone and muscle progenitor cells. We found that cellular respiration and ATP production were significantly higher in all four tested TNAP-deficient bone and muscle progenitor cell types. The increase in ATP was confirmed in subsequent direct measurements of intracellular ATP in BMSCs and TNAP deficient Sol8 muscle progenitor cells. Extracellular ATP levels were also significantly increased in the media of the TNAP deficient cultured cells (data not shown), but extracellular levels were approximately 500× lower than those found intracellularly. Notably, while not directly compared, baseline levels of ATP in Alpl +/+ cells were higher in the Sol8 muscle progenitor cells than in the bone marrow stromal cells. The increase in ATP found in Alpl −/− cells was also greater in Sol8 muscle progenitor cells than in bone marrow stromal cells. Sol8 cells also showed less variation in ATP measurements as compared to bone marrow stromal cells. Together, this indicates that TNAP may have a greater effect on ATP production in muscle than in bone progenitor cells.
The idea that TNAP influences mitochondrial function and ATP levels is not new. The upregulation of TNAP in vascular smooth muscle cells causes mitochondrial dysfunction and diminishes intra- and extracellular ATP levels [32]. In addition, mitochondrial cristae abnormalities were previously reported in a sheep model of hypophosphatasia [27]. While not directly investigated in this study, prior results indicate potential mechanisms by which mitochondrial hyperfunction and ATP levels can influence progenitor cells. Mitochondrial hyperfunction can promote cell senescence (irreversible loss of cell proliferation) and/or apoptosis in BMSCs and other cell types [23,33]. In BMSCs, evidence indicates that TNAP deficiency induced increases in intracellular ATP contribute to senescence of BMSCs via repression of the AMPKα pathway [23]. Increased mitochondrial oxidative phosphorylation function can also promote BMSC differentiation, as mediated by ATP or β-catenin [34].
Mitochondria also regulate many critical cellular processes in skeletal muscles, including muscle cell metabolism, energy supply, and calcium homeostasis [35]. In this study, Alpl shRNA treated Sol8 muscle progenitor cells showed mitochondrial hyperfunction, including increased basal respiration and ATP production. The continuously increased basal mitochondrial respiration in TNAP deficiency may therefore induce chronic oxidative stress, which can result in superoxide production and induce pathological changes in muscle [36]. Optimized mitochondrial function, including control of generated reactive oxygen species and quality control of mitochondrial proteins, is essential for maintaining muscle mass and muscle function [35]. In addition, the high levels of oxidative phosphorylation seen in TNAP deficient cells should increase inner mitochondrial membrane potential, potentially driving calcium into the inner mitochondrial space from calcium stores such as the sarcoplasmic reticulum [37]. Low calcium levels in the sarcoplasmic reticulum could therefore contribute to muscle weakness. High ATP levels seen in TNAP deficient cells would also be expected to decrease activation of AMPK, which could in turn lead to diminished glucose uptake and muscle activity [38]. Given that we investigated TNAP function in cultured Sol8 muscle progenitor cells, our results indicate a direct influence of TNAP on muscle progenitor cells that is likely mediated by mitochondrial changes. TNAP is also known to influence neural progenitor cells, development, and function [39,40]; therefore, it is also possible that TNAP deficiency causes loss of muscle strength, increased fatigue, and loss of motor coordination due to TNAP deficiency in neurons. TNAP regulates purinergic transmission in the central nervous system, and plays an important role in neuronal development, differentiation, and synaptic function [41]. TNAP inhibition also increases extracellular ATP in neurons, which can reduce motoneuron neurite extension [42] and dysregulate synaptic transmission at the neuromuscular junction [43]. In addition, TNAP deficiency leads to reduced brain white matter, accompanied by decreased axonal myelination in the spinal cord and cortex [44]. Therefore, it is possible that TNAP deficiency induced high ATP levels influence muscle function directly and/or via deficient neural function, the latter of which would impair neural control of muscle and result in decreased muscle function.
The relationship between TNAP deficiency and mitochondrial function in our and others' prior studies made us question the widely held belief that TNAP is only expressed on extracellular membranes. Accordingly, we next used immunostaining to detect TNAP in bone and muscle progenitor cell lines. Here we report, for the first time, that TNAP is expressed in a peri-nuclear intracellular pattern in bone and muscle progenitor cells. Because this is a novel finding, we used two different primary antibodies for TNAP to confirm these results. The intracellular location of TNAP in progenitor cells calls into question the use of fluorescence-activated cell sorting (FACS) for isolation of TNAP positive/negative cells from bone marrow stromal and/or other cell populations, because this type of sorting will miss those cells that express TNAP intracellularly. On a positive note, the knowledge that TNAP can be expressed inside and/or outside the cell may help to reconcile the contradictory results showing influences of TNAP on bone marrow stromal and other cell populations [15,16,22,30,[45][46][47][48]. We also found that TNAP co-localizes in part with mitochondria. In future studies it will be important to determine if TNAP is tethered to mitochondrial membranes, and/or to intracellular membranes adjacent to mitochondria, and to further delineate how TNAP influences mitochondrial function. Notably, it was previously shown that high inorganic pyrophosphate levels prevent mitochondrial membrane depolarization in ischemic cardiac muscle cells [49]. We are currently pursuing studies of TNAP and Enpp1 (the enzymatic generator of extracellular pyrophosphate) double knockout mice to determine if the high pyrophosphate levels present in TNAP deficient (Alpl −/−) mice mediate the mitochondrial respiration changes seen in cells from these mice.
Results of this study are also pertinent to studies of TNAP and mitochondria in metabolic syndrome. High serum alkaline phosphatase levels are associated with a higher risk of metabolic syndrome in the U.S. population [26]. The mechanisms underlying this association are unknown. Insulin resistance, obesity, and/or metabolic syndrome can induce mitochondrial dysfunction and mitochondrial ultrastructural abnormalities, which can be passed down through at least three generations via the female germline [25,[50][51][52][53][54]. While much discussion in the literature has suggested that mitochondrial dysfunction linked to the pathogenesis and/or progression of metabolic disease can be explained by downstream effects of mitochondria-generated reactive oxygen species, our results indicate that TNAP itself may influence mitochondrial function, which could be relevant to the worldwide epidemics of obesity, diabetes, and metabolic syndrome.
Animals
Wild type (Alpl +/+) and TNAP knockout (Alpl −/−) mouse littermates were bred on a 97% 129/SVJ (Jackson Laboratory, Bar Harbor, ME, USA) and 3% C57BL/6 (Charles River Laboratory, Wilmington, MA, USA) genetic background. This transgenic mouse model of infantile hypophosphatasia (HPP) represents the more severe phenotype of infantile HPP in the human population. Because TNAP is essential for vitamin B6 metabolism [55], all of the mice in this study were given free access to a modified laboratory rodent diet containing 325 ppm pyridoxine. Genotyping was performed by PCR using DNA samples from tail digests. Alpl +/+ primers (TGCTGCTCCACTCACGTCGAT and ATCTACCAGGGGTGCTAACC) and Alpl −/− primers (GAGCTCTTCCAGGTGTGTGG and CAAGACCGACCTGTCCGGTG) were used as previously described [3,56]. Six-week-old male CB17-SCID immunocompromised mice were obtained from Charles River Laboratory (Charles River Laboratory, Wilmington, MA, USA) and used as recipient mice for subcutaneous implant experiments. All animal procedures were performed according to U.S.A. federal guidelines and the Declaration of Helsinki. Prior to experimentation, animal protocols were approved by the Institutional Animal Care and Use Committee (IACUC) of the University of Michigan (protocol PRO00008675, expiration 1/2/2022).
Bone Marrow Stromal Cell Isolation
Bone marrow stromal cells (BMSCs) were isolated from femurs of 14-day-old Alpl −/− mice and wild type (Alpl +/+) littermates. Briefly, epiphyseal growth plates were removed and the marrow was collected by flushing with a 25-gauge needle and a 5 mL syringe containing media. Marrow cell clumps were aspirated several times through a 22-gauge needle and filtered through a 70 µm cell strainer. Cells were cultured in a custom formulated αMEM containing no ascorbate, supplemented with 20% fetal bovine serum (FBS), penicillin/streptomycin (P/S), and fungizone for several days. Media were changed every 12 h for 2 days, then every 3 days until all suspension cells were removed and adherent cells were confluent. Significantly greater numbers of cells were isolated from Alpl +/+ (2.2 × 10⁶ ± 2.2 × 10⁵ BMSCs per mouse) than Alpl −/− mice (1.1 × 10⁶ ± 1.6 × 10⁵ per mouse, p < 0.05). During bone marrow aspiration, long bones of the Alpl −/− mice were noticeably obstructed with hard tissue in the mid-diaphyseal region, such that the diminished cell numbers isolated from these mice were likely due to diminished overall mouse/bone size and diminished bone marrow per bone.
Collagenous Implant Preparation
Collagenous gel implants were prepared using isolated BMSCs. Cells of passage 3 were used for fabrication of implants. 2 × 10⁶ bone marrow stromal cells per implant were suspended with 0.01% NaOH in phosphate buffered saline (PBS) and 4 µg/µL rat tail collagen type I (Corning, Tewksbury, NY, USA) on ice. The solution was then aliquoted into glass tissue chamber slide wells (Thermo Fisher Scientific, Waltham, MA, USA) for a total gel volume of 200 µL per implant and a final collagen gel concentration of 3.0 mg/mL. Gel solutions with cells were incubated at 37 °C for 1 h to enable gel hardening.
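As a quick sanity check on this recipe, the implied mixing volumes can be computed directly (a minimal sketch; it assumes the collagen stock is diluted only by the cell/NaOH/PBS suspension, and the variable names are ours, not from the original protocol):

# Dilution arithmetic implied by the implant recipe above
GEL_VOLUME_UL = 200.0      # total gel volume per implant, µL
FINAL_CONC_MG_ML = 3.0     # final collagen concentration, mg/mL
STOCK_CONC_UG_UL = 4.0     # rat tail collagen type I stock (4 µg/µL = 4 mg/mL)

collagen_mass_ug = GEL_VOLUME_UL * FINAL_CONC_MG_ML  # µg, since mg/mL == µg/µL
stock_volume_ul = collagen_mass_ug / STOCK_CONC_UG_UL
diluent_volume_ul = GEL_VOLUME_UL - stock_volume_ul  # cells + NaOH/PBS suspension

print(f"{stock_volume_ul:.0f} µL collagen stock + {diluent_volume_ul:.0f} µL "
      f"cell suspension per 200 µL implant")

For these values the sketch yields 150 µL of stock plus 50 µL of cell suspension per implant.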
Subcutaneous Implant Placement and Nano Computed Tomography of Ossicles
Midline longitudinal incisions were made along the dorsal surface of each host mouse and subcutaneous pockets were formed by blunt dissection. A single implant was placed into each subcutaneous pocket, for a total of two implants per animal. Each mouse received two Alpl −/− BMSC implants, two Alpl +/+ BMSC implants, or two blank (no cells) collagenous implants. Implants were removed eight weeks after implantation. After fixation, implants were analyzed for mineralized tissue formation by nano computed tomography (nano CT) scanning (Phoenix Nanotom M nano computed tomography imaging system and associated software, GE Healthcare Pre-Clinical Imaging, London, ON, Canada) at a 9 µm isotropic voxel resolution. Implants were then decalcified and embedded in paraffin for histologic staining by Masson's trichrome and hematoxylin/eosin staining. Alkaline phosphatase enzyme activity of implants was analyzed by staining sections with NBT/BCIP colorimetric substrate (Sigma-Aldrich, St. Louis, MO, USA).
Long Bone Micro Computed Tomography
Tibial bones from Alpl −/− and Alpl +/+ day-17 mice were scanned by micro computed tomography (micro CT) using a Scanco µCT 100 micro-computed tomography system and associated software at an 18 µm isotropic voxel resolution. The trabecular region of interest used was 10% of total bone length from the end of the proximal growth plate. The cortical region of interest used was 10% of total bone length from the mid-diaphysis.
BMSC Cell Culture and Assay
For colony forming unit (CFU) assays, isolated cells were directly plated at 5 × 10⁵ cells per well into 12 well plates (passage 0 cells). For general colony forming units (CFU-F), cells were cultured in DMEM containing 20% FBS with media changes every 12 h for 2 days, then every 3 days. After 8 days of culture, the cells were fixed in methanol and stained with crystal violet. For colony forming adipocyte units (CFU-Ad), cells were cultured as for CFU-F and then cultured in adipocyte induction media (DMEM containing 0.5 mM IBMX, 1 µM dexamethasone, 100 µg/mL insulin, 10 µM troglitazone, 10% FBS and P/S) for 3 days, followed by adipocyte maintenance media (DMEM containing 100 µg/mL insulin, 10% FBS and P/S) for 3 days. Cells were then fixed and stained with Oil Red O (Abcam, Cambridge, MA, USA). For colony forming alkaline phosphatase positive units (CFU-AP), cells were cultured as for CFU-F and then in αMEM media containing 50 µg/mL ascorbate, 10% FBS, and P/S for 6 days. Cells were fixed and then stained for alkaline phosphatase activity using the colorimetric substrate NBT/BCIP (Sigma-Aldrich, St. Louis, MO, USA). After staining, plates were scanned and images were quantified using ImageJ. Comparison between genotypes was performed using the Student's t-test (n = 3 per genotype per experiment).
Strength and Motor Coordination Tests
For the grip strength test, eight weights were used: 0.15, 0.45, 1, 1.5, 2, 4.5, 6, and 10 g. For each test, the mouse was held by the middle/base of the tail and allowed to grasp the lightest weight, which was lying on a flat laboratory benchtop. After the mouse grasped the weight with its forepaws, the mouse was raised until the weight was clear of the bench. A stopwatch was used to record the time. A hold of three seconds was the criterion. If the mouse dropped the weight in less than 3 s, the mouse was allowed to try again. If the mouse failed three times, the trial was terminated, and the mouse was assigned the lesser maximal weight. If the mouse held the weight for 3 s, then the next heavier weight was tested. The test was run until the maximal weight was achieved. A final total score was calculated as the maximal weight and compared between Alpl −/− and Alpl +/+ mice. To compensate for differences in body weight, the ratio of maximal weight to body weight was also calculated for each mouse and compared between Alpl −/− and Alpl +/+ mice.
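The scoring logic of this incremental weight-lifting test can be summarized in a short sketch (a hypothetical illustration of the procedure described above; the function and variable names are ours, not from the original study):

# Weights used in the grip strength test, in grams
WEIGHTS = [0.15, 0.45, 1, 1.5, 2, 4.5, 6, 10]

def grip_strength_score(holds_weight, body_weight):
    """Return (maximal weight held for >= 3 s, weight/body-weight ratio).

    holds_weight(w) -> bool should report whether the mouse held weight w
    for the 3 s criterion within three attempts.
    """
    max_held = 0.0
    for w in WEIGHTS:
        if holds_weight(w):      # criterion met: advance to the next heavier weight
            max_held = w
        else:                    # three failed attempts: trial ends at the lesser weight
            break
    return max_held, max_held / body_weight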
For Kondziela's inverted screen test, the inverted screen was a 48 cm square of wire mesh consisting of 15 mm squares of 1 mm diameter wire. The mouse was placed in the center of the wire mesh screen. The screen was inverted and held 50 cm above a cushioned flat benchtop. A stopwatch was used to record the time when the mouse fell off, or the mouse was removed when the criterion time of 60 sec was reached. A final total score was calculated as the fall-off time, and compared between Alpl −/− and Alpl +/+ mice.
For the horizontal bar test, the horizontal bar (4 mm in diameter and 38 cm in length) was held 50 cm above a cushioned flat benchtop. The mouse was held by its tail and aligned perpendicularly to the bar. The mouse was rapidly raised. Once its forepaws grasped the horizontal bar at the central point, its tail was released. A stopwatch was used to record the time of a fall from the bar. Maximum test time (cut-off time) was 30 s. A final total score was calculated as the fall-off time, and compared between Alpl −/− and Alpl +/+ mice.
TNAP Deficient Cranial Osteoprogenitor and Muscle Progenitor Cells
MC3T3E1 cells were generously provided by Dr. Renny Franceschi (University of Michigan, Ann Arbor, MI, USA). Sol8 cells were acquired from the American Type Culture Collection (ATCC, Manassas, VA, USA). MC3T3E1 murine cranial osteoprogenitor cells and Sol8 murine skeletal muscle progenitor cells were transduced with lentiviral particles expressing a puromycin resistance gene and Alpl specific shRNA (Sigma Mission) or non-target shRNA (Sigma Mission, SHC002V) in the presence of 8 µg/mL hexadimethrine bromide. Puromycin-resistant colonies were expanded, confirmed for TNAP expression, and utilized for experiments (Supplementary Figure S3) [57].
Primary osteoprogenitor cells were isolated from the crania of Alpl +/+ and Alpl −/− mice by sequential collagenase digestion, as previously described [58][59][60]. Briefly, bones were rinsed with media, then serially digested in a solution containing 2 mg/mL collagenase P and 2.5 mg/mL trypsin. Cells from the third digestion were used for experimentation, as earlier digestion isolations contain cells from residual soft tissues and later digestion isolations contain osteocytes. Passage 3 cells were used for experiments.
Proliferation and MTT Assays
To assay for cell proliferation and MTT reduction, cells were seeded at an optimal density for each individual cell type in 6 well plates (MC3T3E1: 2.5 × 10³/well; primary cranial cells: 1 × 10⁵/well; BMSC: 2.5 × 10³/well; Sol8: 2.5 × 10³/well) and grown in αMEM media containing 10% FBS plus P/S for the indicated number of days. Cells were stained with trypan blue and counted in sextuplet at each time point. Cellular metabolic activity was initially monitored by measuring the reduction of MTT. In brief, cells were plated in 96-well plates and cultured in Dulbecco's modified Eagle's medium containing 10% fetal bovine serum for the indicated growth periods. The medium was replaced with 1 µg/mL MTT in phosphate-buffered saline (pH 7.4), followed by incubation at 37 °C for 3 h. The MTT solution was removed, and cells were incubated in DMSO at 37 °C for 1 h. Reduction of MTT was quantified by measuring absorbance at 570 nm using a multi-well spectrophotometer.
Agilent Seahorse XF Cell Mito Stress Test
To evaluate changes in mitochondrial function and cell metabolism, cells underwent the Agilent Seahorse XFe Cell Mito Stress Test (Agilent, Santa Clara, CA, USA) to measure changes in mitochondrial function via the oxygen consumption rate (OCR). The test was performed according to the manufacturer's instructions. In brief, cells were seeded at the optimized cell density for each of the different cell lines in Seahorse XFe96 microplates and incubated at 37 °C/5% CO2 for 24 h. On the day of the assay, the cell culture growth medium in the cell culture microplate was replaced with pre-warmed (37 °C) assay medium (XF DMEM, 1 mM pyruvate, 2 mM glutamine, and 10 mM glucose). The cell culture microplate was incubated in a non-CO2 incubator at 37 °C for 1 h prior to the assay to allow the media temperature and pH to reach equilibrium. The modulating agents (oligomycin, FCCP, and antimycin A + rotenone) were prepared in assay medium and injected into the injection ports. The OCR was measured and analyzed using the Seahorse XFe Mito Stress Test Report Generator. At the end of the assay, cells were stained with crystal violet. The number of stained cells was counted and used for normalization.
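The Mito Stress Test parameters reported in Table 1 are derived from the OCR trace around these three injections; a minimal sketch of that arithmetic is below. The function and variable names are ours, the per-cell normalization mirrors the crystal violet counting step described above, and this is an illustration, not the vendor's report generator code.

import numpy as np

def mito_stress_params(ocr_baseline, ocr_oligo, ocr_fccp, ocr_rot_aa, n_cells):
    """Compute standard Mito Stress Test parameters from OCR measurements
    (pmol O2/min); each argument is an array of timepoints for one assay phase."""
    # Non-mitochondrial respiration: floor after rotenone/antimycin A
    non_mito = float(np.min(ocr_rot_aa))
    # Basal respiration: last rate measured before oligomycin injection
    basal = float(ocr_baseline[-1]) - non_mito
    # Proton leak: minimum rate after oligomycin (ATP synthase blocked)
    proton_leak = float(np.min(ocr_oligo)) - non_mito
    # ATP production: the oligomycin-sensitive part of basal respiration
    atp_production = basal - proton_leak
    # Maximal respiration: peak rate after the FCCP uncoupler
    maximal = float(np.max(ocr_fccp)) - non_mito
    # Spare respiratory capacity: headroom between basal and maximal
    spare = maximal - basal
    params = {"basal": basal, "atp_production": atp_production,
              "proton_leak": proton_leak, "maximal": maximal,
              "spare_capacity": spare, "non_mito": non_mito}
    # Normalize to cell number, as done with crystal violet counts in the text
    return {k: v / n_cells for k, v in params.items()}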
Immunofluorescent Staining for TNAP and Mitochondria
Undifferentiated MC3T3E1 cranial bone progenitor cells, bone marrow stromal cells and Sol8 skeletal muscle progenitor cells were stained by immunofluorescence using Alexa Fluor 488 phalloidin for detection of F-actin (Thermo Fisher, Waltham, MA, USA), a rabbit anti-ALPL primary antibody (ab65834; Abcam, Cambridge, MA, USA), goat anti-rabbit Alexa Fluor-555 secondary antibody (Invitrogen, Carlsbad, CA, USA), and DAPI (4′,6-diamidino-2-phenylindole) nuclear stain to initially establish the cellular localization of TNAP. For detection of potential co-localization of TNAP with mitochondria, cells were treated with 100 nM MitoTracker Red CMXRos (Invitrogen, Carlsbad, CA, USA) for 45 min. The mitochondrial dye was removed, and the cells were fixed. Cells were then stained with a monoclonal rat anti-ALPL primary antibody (MAB29091; R&D Systems, Minneapolis, MN, USA), donkey anti-rat Alexa Fluor-488 secondary antibody (Invitrogen, Carlsbad, CA, USA), and DAPI nuclear stain. Immunofluorescent staining was imaged using a Nikon Eclipse Ti microscope.
Statistics
In vitro data were assessed using Student's t-test. For in vivo assays, data were tested for normality using the D'Agostino's K-squared test. Student's t-test was used for normal data, and Mann-Whitney U test was used for non-normal data analysis. A p-value less than 0.05 was considered statistically significant.
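A minimal sketch of this decision rule, assuming two independent groups; SciPy's `normaltest` implements the D'Agostino-Pearson K² statistic, and the data below are simulated for illustration only.

```python
# Minimal sketch of the testing workflow described above, assuming two
# independent groups (e.g., Alpl-/- vs. Alpl+/+ measurements) in NumPy arrays.
import numpy as np
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """D'Agostino K-squared normality check, then t-test or Mann-Whitney U."""
    # normaltest implements D'Agostino-Pearson's K^2 (needs ~20 values/group)
    normal = all(stats.normaltest(g).pvalue > alpha for g in (a, b))
    if normal:
        name, res = "Student's t-test", stats.ttest_ind(a, b)
    else:
        name, res = "Mann-Whitney U", stats.mannwhitneyu(a, b, alternative="two-sided")
    return name, res.pvalue

# Example with simulated data (illustrative only):
rng = np.random.default_rng(0)
wt, ko = rng.normal(30, 4, 20), rng.normal(22, 4, 20)
name, p = compare_groups(wt, ko)
print(f"{name}: p = {p:.4f} ({'significant' if p < 0.05 else 'n.s.'} at 0.05)")
```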
Conclusions
In this study, we investigated the need for TNAP in bone formation, mineralization, and mitochondrial function. We found that TNAP deficiency decreased trabecular bone volume fraction and trabeculation in addition to decreased mineralization, and interpret these results to mean that TNAP is essential for osteogenesis in addition to the mineralization of trabecular bone. We also showed for the first time that Alpl −/− mice (global TNAP knockout) exhibit muscle and motor coordination deficiencies that are similar to those found in individuals with hypophosphatasia/TNAP deficiency. Subsequent studies showed diminished proliferation, with mitochondrial hyperfunction and significantly increased ATP levels, in TNAP-deficient bone and muscle progenitor cells. We also found that TNAP is expressed in a peri-nuclear intracellular pattern in these cells. Together, our results indicate that TNAP functions inside bone and muscle progenitor cells to influence mitochondrial respiration and ATP production. Future studies are required to establish the mechanisms by which TNAP influences mitochondrial function, to determine the extent to which the mitochondrial hyper-respiration induced by TNAP deficiency causes the musculoskeletal defects seen in Alpl −/− mice and individuals with hypophosphatasia, and to establish whether modulation of TNAP can alter mitochondrial respiration in vivo.
Impacts of Improved Supplemental Irrigation on Farm Income, Productive Efficiency and Risk Management in Dry Areas
This paper provides empirical evidence that improved supplemental irrigation (ISI) can be justified on both environmental and economic grounds. Results of a stochastic frontier model which explicitly and simultaneously accounts for technical inefficiency and production risk, applied to data collected from 513 wheat farms in the rainfed areas of Syria, show that the typical adopter farmer obtained yield and productive efficiency gains of 6% and 7%, respectively. A stochastic dominance criterion also showed that adopter farmers obtained 10% and 13% reductions in the risk of yield levels falling below 4 tons/ha and 3 tons/ha, respectively. Given its adoption level of 22.3% in 2010, ISI led to the production of 52 thousand metric tons (6%) more wheat and the conservation of 120 million cubic meters of water (10%). ISI has the potential to reduce total irrigation water use by up to 45% and to further increase yield if accompanied by sprinklers and other improved agronomic practices, thereby enhancing food security and environmental sustainability in the country. An important policy implication of these findings is that wider dissemination of ISI along with other complementary agronomic practices in postwar Syria could be a viable option to be considered by national and international efforts for the restoration and rehabilitation of agriculture in the country.
Introduction
Agriculture in the dry areas is exposed to a variety of risks which occur with high frequency; the stochastic nature of agricultural production is the major source of risk [1]. The factors which cause variability in agricultural production include weather, pests and diseases. Risks in agricultural production are compounded by market fluctuations, which are more significant in developing countries due to market imperfections and poor information, infrastructure and communication networks.
Yield variability is often explained by external factors such as weather, pests, diseases and input and output prices that are outside the control of farmers.
However, factors that are under the control of farmers, such as variability in agronomic conditions including the levels of inputs applied, also play important roles [2]-[7].
Farmers in developing countries are generally risk-averse [8] [9]. This is mainly due to the absence of crop insurance and government support that buffer agricultural risk and provide the needed cushion at times of difficulty. Moreover, agricultural production in the developing world is highly associated with food security and hence the wellbeing of the family. Focusing on the developing world, poor farmers are more averse to risk and more likely to be reluctant to adopt technologies that increase risk [10]. The same study argued that risk-averse farmers are likely to consider both the level of income and risk simultaneously and to reject a technology that they consider too risky.
Water scarcity is a critical constraint to agriculture in dry areas. This problem is likely to become more severe because of population growth, climate change and deterioration of water quality. Characterized by low average amounts and high variability of rainfall, agricultural production in the dry areas carries substantial risk. In its effort to help farmers in the dry areas, the International Center for Agricultural Research in the Dry Areas (ICARDA), along with the national agricultural research institutions of many countries in the Middle Eastern and North African region, has introduced the practice of improved supplemental irrigation (ISI) in predominantly rainfed areas. Improved supplemental irrigation is the addition of small amounts of water to essentially rain-fed crops during times when rainfall fails to provide sufficient moisture for normal plant growth, in order to improve and stabilize yields [11] [12]. In ISI, water is applied to rainfed crops which would normally produce some yield without irrigation. ISI is only applied when rainfall fails to provide essential moisture for improved and stable production, and the amount and timing are scheduled to ensure that a minimum amount of water is available during critical stages of crop growth [11].
The components of the new improved supplemental irrigation technology focus on irrigation scheduling: when to irrigate, how to irrigate, and how much water to use [13]. The improved supplemental irrigation practice is often recommended as part of a technology package involving improved crop varieties (mainly wheat) and organic fertilizers. The introduction of ISI not only helps in stabilizing yield levels, but also in increasing the average yield in countries where farmers use traditional supplemental irrigation (TSI), in which irrigation water application rates are well above the marginal-product and even yield-maximizing levels. [14] found that ISI leads to maximum yield gains if coupled with sprinkler technologies.
Using data from 513 Syrian wheat farmers and the stochastic frontier production function, this paper argues and tries to provide empirical evidence that the adoption of ISI reduces the risks of yield variability and the associated variability in technical efficiency. The findings of this study are expected to be useful to researchers, policy makers, development organizations and extension personnel in their effort to help farmers in the dry areas cope with water scarcity induced by climate change.
Description of the Study Area
Syria is highly vulnerable to climate change. Located in the western part of the Mediterranean basin, Syria has a surface area of 185,518 km² and 32.2 percent of this area is cultivable. Total irrigated land more than doubled in the 20 years between 1985 and 2005 [15]. The demand for irrigation water has also increased steadily over the decades, almost doubling since 1985 [16]. Excessive pumping is leading to rapid depletion of the groundwater resource. The current water deficit in Syria ranges between 2.85 and 4.70 billion m³/year [17]. As a result, groundwater levels in many parts of the country dropped by 2-6 meters, and in some others by more than 6 meters per year, between 1993 and 2000 [18].
Wheat is the most important food grain grown in Syria. It is a crop of strategic political importance due to its high potential to enhance food security. In 2011, wheat was cultivated on nearly 1.8 million hectares, with a total production of 4.9 million metric tons [19]. Only 45 percent of the land area under wheat cultivation was irrigated, yet this irrigated area accounted for 72 percent of wheat production. The disparity between irrigated land area under wheat cultivation and its contribution to production and productivity indicates the importance of land and water resources management for wheat production, especially in the rainfed wheat area. The typical irrigation method at the field level in Syria is a surface gravity system [20]. Traditional surface canal irrigation using open canal networks accounts for over 80% of total irrigated lands in Syria.
Water use efficiency (WUE), which is the ratio of the amount of water actually utilized by the crop to the total water pumped, stands at about 40%-60% for irrigated agriculture in Syria [17] [20]. This is due to inefficient management of water resources, especially at the farm level, where traditional irrigation methods are practiced. In this setting, transpiration and seepage alone account for 10%-60% of total water loss under traditional surface canal irrigation [21]. Traditional surface canal irrigation methods also lead to over-irrigation, especially in the absence of adequate land leveling. In most cases, the design of the traditional furrow irrigation system in Syria is not optimal [20]. Moreover, fields are not well drained, furrows are not well maintained and land leveling is not done regularly, which results in some parts of the field receiving excessive water.
Prior to the introduction of improved supplemental irrigation (ISI) by ICARDA and the Ministry of Agriculture (MoA), all Syrian wheat growers used irrigation techniques that resulted in high water use per unit area. Thus, the current adoption of ISI technology by 22.3% of Syrian wheat farmers has resulted in water savings and sustained wheat farming systems, generating large environmental benefits. Applying supplemental irrigation in one or two well-timed applications at heading, anthesis, or milk stage can lead to increased and stabilized yield. To avoid confusion, we make a distinction in this paper between improved supplemental irrigation (ISI), in which the recommended water application rates are used, and traditional supplemental irrigation (TSI), where farmers use excessive irrigation over the recommended levels. (Scheduling of SI is determined for each year using the water balance method. In zones 1 and 2 of Syria, which are the study areas for this research, optimum yields were obtained with ISI of 600 to 1800 m³/ha [16]; hence, in this analysis, we used the higher end (1800 m³/ha) as the upper limit for the amount of water applied under ISI.)
Data
Owing to their relatively high share of total rainfed wheat land in the country and the tremendous scope for ISI, zones 1 and 2 of Syria were chosen for this study. Of the country's 14 governorates, 12 have areas which fall in zones 1 and 2; from these, the top three wheat-producing governorates (Aleppo, Deraa and Al-Hassakeh) were chosen for this study.
These three governorates account for about 66% of total wheat land and 61% of total wheat production in Syria.
Using power analysis, the minimum sample size needed to ensure a 95% confidence level for estimating the total number of ISI adopters was calculated to be 513. A stratified sampling procedure was then used to proportionally distribute the sample among the two zones, with 241 and 272 households drawn from Zones 1 and 2, respectively. The distribution of these households across the two zones and 26 randomly drawn villages in the three governorates is provided in Table 1 below.
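The paper does not report the design parameters behind this power analysis; as a hedged illustration, the textbook sample-size formula for estimating a proportion, n = z²p(1−p)/e², reproduces a minimum n of 513 under an assumed margin of error of about 4.33 percentage points at p = 0.5.

```python
# Hedged sketch of a standard sample-size calculation for estimating an
# adoption proportion; the margin of error and p = 0.5 are illustrative
# assumptions, since the paper does not report the actual design parameters.
from math import ceil
from scipy.stats import norm

def n_for_proportion(p=0.5, conf=0.95, margin=0.0433):
    z = norm.ppf(1 - (1 - conf) / 2)   # 1.96 for 95% confidence
    return ceil(z**2 * p * (1 - p) / margin**2)

print(n_for_proportion())  # 513 with the illustrative 4.33% margin
```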
Methodology
The stochastic frontier production function approach has been widely applied to analyze technical efficiency in production [22] [23] [24] [25]. By extending the application of the stochastic frontier production function approach, [26] propose a new measure of input-specific technical efficiency (TE) in production.
They also apply the new method to study the technical inefficiency of irrigation water among out-of-season vegetable growers in Crete, Greece. [27] also applied the method to study the technical inefficiency of irrigation water in citrus-producing farms in Nabeul, Tunisia. Both studies find low (47.2% and 53%) mean technical efficiency of irrigation water. The stochastic frontier production function takes the form:

y_i = f(x_i; β) + v_i − u_i (1)

where y_i is the scalar output of production unit i; x_i is a vector of N inputs used by producer i; f(x_i; β) is the deterministic part of the production frontier; β is a vector of technology parameters to be estimated; and v_i and u_i are noise and inefficiency components which can take a number of forms, depending on specific assumptions. The specification given by Equation (1) is consistent with the typical Just-Pope framework [28] under the assumption

v_i = h(z_i; γ) ε_i (2)

where z_i is an input vector which may or may not equal x_i, γ is a vector of parameters, and ε_i is a standard random error. So the Just-Pope framework takes the form:

y_i = f(x_i; β) + h(z_i; γ) ε_i − u_i (3)

where the function h(z_i; γ) represents the output risk function. More recent advances in efficiency analysis showed that stochastic production frontier models can include technical inefficiency and production risk simultaneously [29] [30]. This approach allows for heteroscedasticity in the noise component to investigate risk effects while also allowing for heterogeneity in the mean of the inefficiency term during analysis of inefficiency effects. The model requires the estimation of Equation (1) with the following assumptions: following the conventional specification in the stochastic production frontier model, the random error v_i follows a normal distribution with zero mean and variance σ²_{v,i}, and the inefficiency term u_i follows a truncated-normal distribution with mean ū_i and variance σ²_{u,i}. To capture the heterogeneity of the efficiency and risk terms, the mean efficiency and risk functions are determined by exogenous factors; the vector ω_i denotes exogenous variables that influence the mean value of production inefficiency.
The risk function is assumed to have an exponential functional form, h(z_i; γ) = exp(γ′z_i), with the vector of exogenous factors z_i as explanatory variables [28] [29]. The notation α denotes the vector of parameters associated with the mean of production inefficiency, while γ is the vector of parameters associated with production risk. Consistent estimators of Equation (3) can be obtained by applying the maximum likelihood estimation method to the corresponding log-likelihood function [28] [31], built from the density of the composed error ε_i = v_i − u_i under the distributional assumptions above. Following [29] [30], we estimate stochastic production frontier models which include technical inefficiency and production risk simultaneously for the wheat farmers in the study area. The list and description of variables included in the model are provided in Table 2. The measure of output-oriented technical efficiency (TE) for the i-th farmer (i.e., the ratio of the outputs with and without inherent inefficiencies) can then be computed as:

TE_i = exp(−û_i), with 0 ≤ TE_i ≤ 1,

where û_i is the estimated inefficiency term; the closer the TE score is to 1, the higher the efficiency. In this specification, the parameters β, σ, σ_u and δ have been estimated simultaneously using the maximum likelihood method. Thus, the log likelihood ratio (LR), which has a chi-square distribution, is used to test the significance of parameter estimates.
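As a hedged, schematic illustration of this estimator (not the authors' code), the sketch below simulates a linear frontier with an exponential risk function and truncated-normal inefficiency, maximizes the corresponding log-likelihood, and computes TE scores from the conditional mean of the inefficiency term. All data, variable names and starting values are synthetic.

```python
# Schematic stochastic frontier with simultaneous risk and inefficiency
# effects, fitted by maximum likelihood on simulated data.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, truncnorm

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])       # frontier inputs
Z = np.column_stack([np.ones(n), rng.binomial(1, 0.3, n)])  # risk shifters (e.g., an ISI dummy)
W = Z                                                       # inefficiency shifters

beta_t, gamma_t, alpha_t, su_t = np.array([2.0, 0.5]), np.array([-1.0, -0.5]), np.array([0.4, -0.3]), 0.3
sigma_v = np.exp(Z @ gamma_t)              # exponential risk function h(z; gamma)
v = rng.normal(0.0, sigma_v)
mu = W @ alpha_t
u = truncnorm.rvs(-mu / su_t, np.inf, loc=mu, scale=su_t, size=n, random_state=rng)
y = X @ beta_t + v - u

def negloglik(theta):
    b, g, a = theta[:2], theta[2:4], theta[4:6]
    su = np.exp(theta[6])                  # sigma_u > 0 via log-parameterization
    sv = np.exp(Z @ g)
    eps = y - X @ b
    m = W @ a
    s2 = sv**2 + su**2
    mu_star = (sv**2 * m - su**2 * eps) / s2
    s_star = sv * su / np.sqrt(s2)
    ll = (-0.5 * np.log(s2) + norm.logpdf((eps + m) / np.sqrt(s2))
          + norm.logcdf(mu_star / s_star) - norm.logcdf(m / su))
    return -ll.sum()

b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
theta0 = np.r_[b_ols, 0.0, 0.0, 0.0, 0.0, np.log(0.5)]
res = minimize(negloglik, theta0, method="BFGS")

bh, gh, ah, suh = res.x[:2], res.x[2:4], res.x[4:6], np.exp(res.x[6])
sv = np.exp(Z @ gh); eps = y - X @ bh; m = W @ ah
s2 = sv**2 + suh**2
mu_star = (sv**2 * m - suh**2 * eps) / s2
s_star = sv * suh / np.sqrt(s2)
lam = norm.pdf(mu_star / s_star) / norm.cdf(mu_star / s_star)
TE = np.exp(-(mu_star + s_star * lam))     # TE_i = exp(-E[u_i | eps_i])
print("estimated beta:", bh.round(2), " mean TE:", TE.mean().round(3))
```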
Results
Model results show that wheat area, application rates of nitrogen and phosphorus fertilizers, seed rate and quantity of labor used had positive and significant effects on yield, showing that at their current average application levels, an increase in any of the five inputs leads to a yield increase (Table 3). Yield responses to seeds, phosphorus fertilizers, labor, and wheat area are 0.13, 0.08, 0.07 and 0.03, respectively. The insignificance of the linear irrigation water term should not come as a surprise, as the descriptive statistics from our sample survey show that the typical farmer is applying about 1110 m³/ha in excess of the maximum of the recommended range of 600-1800 m³/ha. The profit-maximizing level of irrigation water is 2032 m³/ha, showing that the typical farmer is producing on the downward-sloping part of the total product curve, where the marginal product of irrigation water is negative.
From the inefficiency model, the negative and significant coefficient on the use of improved supplemental irrigation indicates that improved supplemental irrigation reduces inefficiency, a result consistent with the theoretical expectation, as improved supplemental irrigation is believed to ensure better utilization of water by plants. The positive and significant coefficient on the soil salinity variable shows that at its current average, an increase in soil salinity would lead to higher inefficiency. The coefficient on years of schooling is negative and significant, showing that more farmer education reduces inefficiency, which is consistent with expectations.
A closer look at the efficiency figures shows that 11.9% of the farmers who had used ISI have efficiency levels between 90 and 100 percent. The corresponding figure for farmers who had used full irrigation (FI) and TSI is 0. Regardless of their irrigation method (surface canal or sprinkler), 77.4% of farmers who used ISI have efficiency rates greater than 70 percent, which is much higher than for those who had used FI and TSI, whose irrigation water efficiency levels are 38% and 52.6% respectively, a clear indication that using ISI leads to improvements in productive efficiency (Figure 1).
In the risk function, the coefficients on improved supplemental irrigation (ISI), nitrogen fertilizer and improved wheat variety are negative and significant, showing that they contribute to the reduction of production risk. The negative and significant coefficient on ISI is consistent with the theoretical expectation, as yield stability is one of the main benefits of ISI. The stochastic dominance analysis (Figure 2) supports this. When the amount of water made available to the crop is very low, fertilizers could possibly have a burning effect and hence lead to lower yield levels than what is achievable without fertilizers. These results indicate that risk-averse farmers can use ISI, improved wheat varieties and fertilizers in order to reduce production risk and hence revenue variability. Further analysis of the data shows that risk-averse farmers are less likely to adopt supplemental irrigation with surface canal, because adoption of SI with surface canal (instead of sprinklers) can increase the variability in production.
Conclusions and Recommendations
Using a survey of 513 Syrian wheat farms as a case study and a stochastic frontier production function model which explicitly and simultaneously accounts for technical inefficiency and production risk, this paper provided empirical evidence that a shift from both flood irrigation (FI) and traditional supplemental irrigation (TSI) to improved supplemental irrigation (ISI) in rainfed agriculture, particularly in the dry areas, increases technical efficiency, reduces production risk and increases yield, thereby contributing to national food security. At the current average application rate of 1490 m³/ha, the adopters of ISI are using about 1110 m³/ha (43%) less irrigation water than those using TSI. Therefore, at its current adoption level of 22.3%, ISI leads to the conservation of about 120 million m³ (10% of total) irrigation water in the country. This shows that if all farmers in the country were to shift to ISI, it has the potential of cutting the total amount of irrigation water used by about 45%. With a negative and significant coefficient in the inefficiency model, the use of improved supplemental irrigation reduces inefficiency, an unsurprising result, as improved supplemental irrigation is believed to ensure better utilization of water by plants.
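A back-of-envelope check of these aggregate figures (the adopter area below is inferred from the reported savings, not taken from the paper):

```python
# Back-of-envelope check of the aggregate water-saving figures quoted above;
# the implied TSI rate (1490 + 1110 = 2600 m^3/ha) and the adopter area are
# inferred from the reported numbers, not stated directly in the paper.
saving_per_ha = 2600 - 1490               # TSI minus ISI application, m^3/ha (~1110)
share_saved = saving_per_ha / 2600        # ~43% per adopting farm
adopter_area_ha = 120e6 / saving_per_ha   # area implied by 120 million m^3 saved
print(f"saving per ha: {saving_per_ha} m^3 ({share_saved:.0%})")
print(f"implied adopter area: {adopter_area_ha:,.0f} ha")
```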
Analysis of estimates from the inefficiency model shows that 11.9% of the farmers who had used ISI have efficiency levels between 90 and 100 percent. The corresponding figure for farmers who had used FI and TSI is 0. Likewise, regardless of the irrigation method used (surface canal vs. sprinklers), 77.4% of farmers who used ISI have efficiency levels greater than 70 percent, which is much higher than that of those who had used FI and TSI (8% and 52.6% respectively), a clear indication that using ISI helps in the improvement of productive efficiency.
The stochastic dominance criterion also showed that the shift from TSI to ISI led to 10% and 13% reductions in the risk of obtaining yield levels below 4 tons/ha and 3 tons/ha, respectively. These results altogether indicate that investment in improved supplemental irrigation (ISI) helps in the reduction of risk in wheat production. The use of sprinklers, improved wheat varieties (particularly those which are drought tolerant) and nitrogen fertilizers along with ISI played an important role in enhancing productive efficiency, and hence productivity, as well as in reducing income risks for wheat farmers in Syria.
ISI has the potential to enhance food security and environmental sustainability in Syria and other countries with dryland agriculture under similar production conditions. An important policy implication of these findings is that wider dissemination of ISI along with other complementary agronomic practices in postwar Syria could be a viable option to be considered by national and international efforts for the restoration and rehabilitation of agriculture in the country.
Figure 1. Cumulative distribution of the estimated efficiency by irrigation categories.
Figure 2. Risk comparisons between irrigation methods using the stochastic dominance criterion.
Table 1. Number of villages and households selected randomly by zone and governorate.
Table 2. Explanatory variables included in the model. Source: survey data.
Root waving and skewing - unexpectedly in micro-g
Gravity has major effects on both the form and overall length of root growth. Numerous papers have documented these effects (over 300 publications in the last 5 years), the most well-studied being gravitropism, which is a growth re-orientation directed by gravity toward the earth’s center. Less studied effects of gravity are undulations due to the regular periodic change in the direction root tips grow, called waving, and the slanted angle of growth roots exhibit when they are growing along a nearly-vertical surface, called skewing. Although diverse studies have led to the conclusion that a gravity stimulus is needed for plant roots to show waving and skewing, the novel results just published by Paul et al. (2012) reveal that this conclusion is not correct. In studies carried out in microgravity on the International Space Station, the authors used a new imaging system to collect digital photographs of plants every six hours during 15 days of spaceflight. The imaging system allowed them to observe how roots grew when their orientation was directed not by gravity but by overhead LED lights, which roots grew away from because they are negatively phototropic. Surprisingly, the authors observed both skewing and waving in spaceflight plants, thus demonstrating that both growth phenomena were gravity independent. Touch responses and differential auxin transport would be common features of root waving and skewing at 1-g and micro-g, and the novel results of Paul et al. will focus the attention of cell and molecular biologists more on these features as they try to decipher the signaling pathways that regulate root skewing and waving.
In a recent paper published in BMC Plant Biology, Paul et al. [1] describe novel data that challenge a long-held hypothesis on how gravity affects patterns of root growth. When plants are grown on a solid surface, their roots show growth patterns of waving, which are undulations due to the regular periodic change in the direction root tips grow, and skewing, which is the slanted angle of growth roots exhibit when they are growing along a nearly-vertical surface. The generally accepted explanation for these patterns is that they result in large part from a combination of the touch stimulus arising from contact of the root tip with the surface and gravity, which increases the force of this contact. Using a specialized plant growth facility on the International Space Station, the Advanced Biological Research System (ABRS) with imaging hardware, Paul et al. collected digital photographs of plants every six hours during 15 days of spaceflight. The novel imaging system allowed the authors to observe the growth pattern of roots when their orientation was directed not by gravity but by overhead LED lights. Because the roots are negatively phototropic they grew away from the lights. In the micro-g environment of the ISS, plants grew more slowly than their 1-g controls grown under the same temperature and lighting on earth, but they still showed root waving and skewing. Their experimental design and controls made it unlikely that these growth patterns could be attributed to airflow, μg vectors or other directional environmental factors.
Waving and skewing are largely surface-dependent phenomena, and do not appear in roots that grow embedded in agar [2]. Certainly, gravity-driven interactions of roots with surfaces influence waving and skewing, but these novel results of Paul et al. now make it clear that these growth patterns can also occur in micro-g when directional root growth driven by negative phototropism interacts with surfaces. Until these observations, the only prior report of a root skewing pattern in micro-g was that of Millar et al., who reported an inherent skew to the right in roots of Arabidopsis ecotype Landsberg grown in darkness [3]. In contrast, the roots of Arabidopsis ecotype Columbia (Col-0) showed random growth in darkness [4]. To further document differences between ecotypes in their root growth patterns in micro-g, Paul et al. compared Col-0 and Wassilewskija (WS) ecotypes in their report, and found that both showed differences in their skewing and waving patterns in micro-g, just as they do on earth, but these differences were exaggerated in micro-g. Moreover, unlike the random pattern of root growth shown by Col-0 seedlings grown in darkness, the roots of Col-0 seedlings whose growth was oriented by negative phototropism showed both waving and skewing.
Unlike the earlier work of Millar et al. [3], Paul et al. captured a time-lapse record of the growth of two ecotypes in micro-g every six hours for 15 days, and were able to observe the root growth patterns that occurred during this period in response to a directional light gradient. Remarkably, the initial root growth of both ecotypes in both ground controls and spaceflight plants was straight away from the light for 5 days, and only then did the roots of the WS flight plants begin to show a strong 40° skew to the right, whereas the WS ground controls began to skew only slightly, i.e., 10° to the right. These results highlight the fact that skewing depends strongly both on the stage of development and on the force of contact the root tip has with the surface, which is clearly less in micro-g.
Prior models proposed strong involvement of both gravity and touch as key stimuli for both the waving and skewing growth patterns of roots [2]. Clearly, the novel results of Paul et al. focus more attention on the role of touch stimuli for these phenomena. Both waving and skewing are differential growth phenomena, and current understanding would predict asymmetric auxin distribution would be a critical intermediate change needed for both. Mutants in the auxin transport facilitator PIN2 grow randomly, so do not show waving or skewing [5], and auxin transport inhibitors block the waving response [6].
This raises the question of what signaling steps or intermediate cellular changes link touch stimuli to changes in auxin distribution. Two answers favored in the literature are changes in the actin cytoskeleton and ethylene production. Touch stimuli alter cytoskeletal organization [2], which can strongly impact auxin transport [7]. Ethylene mediates both touch responses [8] and auxin transport [9]. Another, more speculative answer to how touch stimuli are linked to auxin transport changes arises out of several interrelated publications that have appeared recently. Weerasinghe et al. showed that when root tips experience touch stimuli they release ATP, and this release plays a role in redirecting root growth [10]. Extracellular ATP (eATP), which is now a recognized signaling agent in plants [11], induces rapid changes in [Ca²⁺]cyt [12], and changes in Ca²⁺ transport can influence differential growth responses in roots [13]. These considerations predict that eATP could play a role in the control of auxin transport, and that applied nucleotides could alter skewing patterns; both of these predictions have been confirmed experimentally (see Figures 1 and 5 in [14]).
Conclusions
The data of Paul et al. provide novel and valuable documentation that the force of gravity is not needed for the waving and skewing patterns of root growth on solid surfaces, and that in micro-g these patterns differ between different ecotypes of Arabidopsis. They thereby focus more attention on the role of touch in these patterns, and especially on how the touch stimulus is linked to altered auxin transport, which is likely to be a key controller of waving and skewing in roots.
Competing Interactions Stabilize Pro- and Anti-aggregant Conformations of Human Tau*
Aggregation of Tau into amyloid-like fibrils is a key process in neurodegenerative diseases such as Alzheimer. To understand how natively disordered Tau stabilizes conformations that favor pathological aggregation, we applied single-molecule force spectroscopy. Intramolecular interactions that fold polypeptide stretches of ∼19 and ∼42 amino acids in the functionally important repeat domain of full-length human Tau (hTau40) support aggregation. In contrast, the unstructured N terminus randomly folds long polypeptide stretches >100 amino acids that prevent aggregation. The pro-aggregant mutant hTau40ΔK280 observed in frontotemporal dementia favored the folding of short polypeptide stretches and suppressed the folding of long ones. This trend was reversed in the anti-aggregant mutant hTau40ΔK280/PP. The aggregation inducer heparin introduced strong interactions in hTau40 and hTau40ΔK280 that stabilized aggregation-prone conformations. We show that the conformation and aggregation of Tau are regulated through a complex balance of different intra- and intermolecular interactions.
Tau binds to microtubules (MTs) to regulate the cellular MT network. The dissociation of Tau from MTs is controlled by the phosphorylation of Tau at multiple sites (6,7). The longest human Tau isoform, hTau40 (441 amino acids (aa)), contains a ∼250-aa-long N terminus of unknown function, whereas the C terminus comprises the Tau repeat domain, which encompasses four ∼31-aa-long semi-conserved repeats (R1 to R4) flanked by proline-rich stretches (Fig. 1A). Both binding to MTs and fibril assembly are mediated through the Tau repeat domain (8,9).
Like most IDPs, Tau shows a high content of charged aa residues and a low hydrophobicity, which result in an extended solution conformation with a large radius of gyration (10). In solution, Tau has no stable secondary and tertiary structure, as judged by CD and Fourier transform infrared spectroscopy (10). The Stokes radius of Tau increases upon chemical denaturation with urea or guanidine hydrochloride (11,12), indicating some limited folding. NMR experiments revealed transient secondary structures in hTau40 that partially interact with other polypeptide regions (13). Two hexapeptide motifs, PHF6* in R2 and PHF6 in R3, can adopt β-strand conformation and are predominantly responsible for Tau aggregation into fibrils (9,14). Using Förster resonance energy transfer (11), the transient "paper clip"-like folding of the C and N termini onto the repeat domain was detected in hTau40. After removing the N- and C-terminal domains, the Tau repeat domain exhibits faster aggregation than full-length Tau (15). This suggests an inhibitory effect of the Tau termini on aggregation. It remains to be determined which intra- and intermolecular interactions maintain the soluble state of Tau or promote the aggregation of Tau into fibers.
Tau aggregation can be triggered in different ways. In Alzheimer disease, hyperphosphorylated wild-type Tau accumulates in neurofibrillar tangles (16). In frontotemporal dementia, point mutations in the Tau gene lead to malfunction and a high aggregation propensity of Tau (17,18). For example, the "pro-aggregant" deletion mutation ΔK280 triggers pre-tangle aggregation of Tau in mice (19) and leads to spontaneous aggregation of purified Tau (20). In vitro, aggregation of soluble Tau can be triggered by polyanions like heparin, arachidonic acid micelles, acidic peptides, and RNA (21)(22)(23). It is thought that these polyanions compensate positive charges of Tau that normally prevent aggregation. Furthermore, Tau fibril assembly is attenuated by a disulfide bridge between Cys291 and Cys322 in the repeat domain (24) and at high ionic strengths (25,26). This suggests that a complex interplay of interactions guides both fibril formation and fibril growth. Despite the overall hydrophilic nature of Tau, hydrophobic interactions are essential for the integrity of the amyloid-like fibril core (25,27) that consists of stacked β-strands in the Tau repeat domain. Regardless of the origin of aggregation, Tau fibrils show similar morphologies in electron microscopy (EM) (4,28) and atomic force microscopy (AFM) (29,30). We applied AFM-based single-molecule force spectroscopy (SMFS) to quantify the intramolecular interactions and the unfolding energy landscape of non-aggregated human Tau. Interactions folding polypeptide stretches of ∼19 and ∼42 aa were detected frequently in the repeat domain of hTau40. Diverse interactions randomly folding long polypeptide stretches >100 aa indicated irregular conformations of the terminal ends. The pro-aggregant mutant hTau40ΔK280 stabilized the folding of short polypeptide stretches, whereas the anti-aggregant mutant hTau40ΔK280/PP increased the folding of longer polypeptide stretches. Similarly, buffer conditions attenuating Tau aggregation increased interactions that folded polypeptide stretches >100 aa. Thus, Tau aggregation seems to be supported by specific interactions in the repeat domain and inhibited by irregular interactions of the protein termini. In both hTau40 and hTau40ΔK280, heparin strongly increased the number and strength of interactions. These results support a model in which heparin-induced electrostatic interactions stabilize conformations of Tau that are prone to aggregation. Our results provide insights into competing interactions that prevent or promote Tau folding and aggregation.
AFM Sample Immobilization for SMFS Experiments-Glass coverslips (Menzel Glaeser, Germany) were piranha etched, amino-functionalized by silanization in an aminopropyl-triethoxy-silane gas phase and baked at 80 °C for 30 min. Tau samples were diluted in PBS to a final concentration of 1 µM. For adsorption, ∼20 µl of the Tau solution was placed onto aminopropyl-triethoxy-silane glass for ∼60 min. Excess protein was removed by rinsing the sample with PBS.
SMFS-Tau was stretched using an AFM (Nanoscope IV, Veeco Metrology, Santa Barbara, CA) equipped with a PicoForce module and silicon nitride (Si₃N₄) cantilevers (BL-RC150 VB, Olympus Ltd., Japan; nominal spring constants ∼30 pN/nm; resonance frequencies in buffer ∼8 kHz). Real cantilever spring constants were determined in solution before each experiment using the equipartition theorem (33). Unless stated otherwise, stretching of Tau was performed in freshly prepared buffer (PBS, pH 7.4, 5 mM DTT). The buffer was exchanged every 2 h to account for evaporation and DTT degradation. Buffer with ∼500 mM monovalent ions contained PBS buffer (∼150 mM), 5 mM DTT, and 350 mM NaCl. Buffer of ∼50 mM ionic strength consisted of 10 mM Tris, pH 7.4, 5 mM DTT, and 50 mM NaCl. SMFS on hTau40 without DTT was performed in pure PBS solution. For experiments in the presence of heparin, Tau adsorbed to amino-functionalized glass was incubated for 20 min in heparin buffer (PBS, 5 mM DTT, 0.33 mM heparin) and then stretched. For SMFS, the cantilever deflection (force) during approach and retraction of the cantilever stylus was recorded in an F-D curve. F-D curves of Tau were recorded at randomly chosen positions on the surface. The cantilever was pushed onto the glass surface with a force of ∼1 nN for 0-0.5 s to allow unspecific binding of Tau to the cantilever stylus. To initiate the stretching of the attached molecule, the stylus was retracted 200-800 nm from the surface at a constant velocity. Such approach-retract cycles were repeated until a statistically significant number of F-D curves was obtained. Dynamic SMFS (DFS) experiments were performed in buffer (PBS, 5 mM DTT) at 5 different pulling velocities (104, 249, 497, 1249, and 2490 nm/s for hTau40 and hTau40ΔK280; 104, 256, 497, 1090, and 2490 nm/s for hTau40ΔK280/PP). For each experiment and pulling velocity, at least three different cantilevers were used.
Selection and Analysis of SMFS and DFS Data-The worm-like chain model of elastic polymers (34) describes the force needed to elongate a polymer chain of contour length L_C. Applying this model to proteins, the polymer persistence length, l_P ≈ 0.4 nm, describes the peptide bond length in the polypeptide backbone (monomer length per aa = 0.36 nm) and fits the stretching of most polypeptides by SMFS (36,37). Using the worm-like chain model, an F-D curve (Fig. 1E) can be transformed into an F-L_C curve (supplemental Fig. S1B) by calculating L_C for each data point (36). This approach proved useful for handling the large number of F-D curves recorded for proper statistics. 1% of F-D curves could not be described by a worm-like chain using an l_P of 0.4 nm and were excluded from analysis. F-D curves obtained from hTau40 adsorbed to amino-functionalized glass, plain glass, gold, and mica showed no difference, excluding artifacts induced by nonspecific protein-support interactions. Amino-functionalized glass provided the highest yield of F-D curves from full-length Tau and was chosen as the supporting surface for the experiments.
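A minimal sketch of this transformation, assuming the standard worm-like chain interpolation formula F(x; L_C) = (k_BT/l_P)[1/(4(1 − x/L_C)²) − 1/4 + x/L_C] with l_P = 0.4 nm as stated above; the example data point is invented.

```python
# Sketch of the F-D -> F-L_C transformation: for each (extension, force)
# data point, numerically invert the worm-like chain interpolation formula
# to obtain the contour length L_C. l_P = 0.4 nm follows the text.
import numpy as np
from scipy.optimize import brentq

KT = 4.114   # thermal energy at ~298 K, pN*nm
LP = 0.4     # persistence length, nm

def wlc_force(x, Lc):
    r = x / Lc
    return (KT / LP) * (0.25 / (1 - r) ** 2 - 0.25 + r)

def contour_length(x, F):
    """Contour length (nm) at which the WLC passes through (x, F)."""
    # wlc_force decreases monotonically in Lc, so bracket the root
    # between just above x and a generously large value
    return brentq(lambda Lc: wlc_force(x, Lc) - F, x * 1.0001, 100 * x)

# Example: a force peak of 60 pN measured at 50 nm tip-sample separation
Lc = contour_length(50.0, 60.0)
print(f"L_C = {Lc:.1f} nm = {Lc / 0.36:.0f} aa")   # 0.36 nm per residue
```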
Stretching of a non-self-interacting polymer chain results in an F-D curve in which the force rises non-linearly with the chain extension until the "detachment peak" describes the detachment of the molecule from the support or the cantilever stylus.
Sufficiently strong intramolecular interactions established in the Tau polypeptide were detected as additional force peaks in the F-D curve (or spikes in the F-L_C curve (36)). A force peak quantifies the force required to rupture the interaction. The distance between two successive force peaks, ΔL_i, gives the length of the polypeptide segment that becomes unfolded upon breaking the interaction (Fig. 1D). The relative contour length, ΔL_C, denotes the distance of a force peak to the detachment peak. This distance was used to classify the interactions detected in Tau.
Single Tau molecules were picked up by the cantilever stylus at random positions along their polypeptide chain. To analyze the stretching of the entire molecule, we analyzed only F-L_C curves having contour lengths of 420 ± 40 aa for hTau40 (441 aa), hTau40ΔK280 (440 aa), and hTau40ΔK280/PP (440 aa), contour lengths of 800 ± 70 aa for the Tau fusion protein (Ig27)₃-hTau40-(Ig27)₂ (849 aa), contour lengths of 130 ± 20 aa for K18 (130 aa), and contour lengths of 250 ± 30 aa for Nt40 (254 aa). To detect force peaks up to 250 pN (typical forces at which proteins unfold (37,75)), F-L_C curves had to show detachment peaks >250 pN. To calculate density maps (supplemental Figs. S1C, S2, B and C, S3, C and F, and S5, C and F), F-L_C curves matching both length and force criteria were manually aligned on their detachment peaks. The position of force peaks in F-L_C curves was determined using a custom-made procedure (IgorPro, Wavemetrics). Probabilities and most probable positions of rupture peaks were fitted by triple- (hTau40, hTau40ΔK280) and multi-Gaussian functions (hTau40ΔK280/PP) to the ΔL_C distributions. Peak combinations (supplemental Fig. S6 and "Materials") in hTau40, hTau40ΔK280, and hTau40ΔK280/PP were analyzed by counting rupture peaks at ΔL_C of 19 ± 6, 42 ± 6, and 73 ± 9 aa in single F-L_C curves. Combinations involving other peak distances (ΔL_C) were neglected for low statistical relevance (<2% probability). The unfolding energy barriers of interactions at ΔL_C values of 19, 42, and 73 aa were determined as described (38) using the DFS data (see supplemental "Materials").
Human Tau Displays Variable Intramolecular Interactions-
Tau molecules fold kinetically unstable β-stranded and α-helical structures in short structural regions (∼20% of the polypeptide) (12). Transient interactions of these short regions may induce the folding of both C and N termini onto the repeat domain (11). To quantify the probability and strength of the interactions stabilizing the folding of Tau, we mechanically stretched single hTau40 molecules by SMFS (Fig. 1B). PBS buffer containing 5 mM DTT provided physiological electrolyte, pH and reducing conditions. 32% of all force-distance (F-D) curves (n = 312) described the stretching of an unstructured polymer followed by its detachment from the AFM stylus or support (Fig. 1C). The remaining 68% of F-D curves detected additional force peaks at various stretching distances (Fig. 1E). Each force peak indicated the rupture of an intramolecular interaction that stabilized the folding of a polypeptide segment of hTau40 (Fig. 1B). Upon rupturing the interaction, this polypeptide segment was released (Fig. 1D). Regardless of whether such an interaction resulted in a secondary structure or not, we used the term "fold" to describe the polypeptide segment stabilized by a certain interaction.
We characterized each rupture peak in the F-D curves by its distance to the detachment force peak, ΔL_C (Fig. 2A, supplemental Table S1). In ∼15% of all hTau40 molecules, rupture peaks occurred at various ΔL_C positions >100 aa (n = 312). The rupture force of these interactions was 107.6 ± 47.4 pN (most probable ± S.D.; n = 45; pulling velocity 875 nm/s; supplemental Table S2). At shorter ΔL_C distances, three prominent force peaks, p1, p2, and p3, were reproducibly detected at ΔL_C of 19.2, ∼42, and ∼73 aa, respectively. To prove that these interactions did not originate from protein-surface adhesion but resembled intramolecular interactions, we stretched hTau40 attached to different supports (mica, gold, and highly ordered pyrolytic graphite; data not shown). These control experiments revealed similar force peak patterns and probabilities as observed for hTau40 attached to amino-functionalized glass (Fig. 2A).
To further investigate the specificity of the detected interactions, we engineered a fusion protein of hTau40 flanked by five identical immunoglobulin 27 (Ig27) molecules, each having 89 residues, termed (Ig27)₃-hTau40-(Ig27)₂ (Fig. 3A). In F-D curves of this construct, both Ig27 fingerprint and hTau40 force peaks co-occurred (Fig. 3B). The mechanical unfolding of each Ig27 resulted in a characteristic force peak of 226.5 ± 17.8 pN (n = 69, pulling velocity 1000 nm/s; supplemental Fig. S2D) that unfolded a polypeptide stretch of ΔL_i = 77.5 ± 4.7 aa (∼28 nm) (Fig. 3C). This is typical for the unfolding of Ig27 (39-42). Observing the unfolding of at least four Ig27 domains (marked by asterisks in Fig. 3B) in an F-D curve proved the stretching of the "sandwiched" hTau40. The interactions recorded upon stretching the sandwiched hTau40 (marked in Fig. 3B) ruptured mostly before the first Ig27, at various positions and forces. However, because interactions in hTau40 ruptured at forces (∼50 to 300 pN) similar to Ig27 (∼150 to 250 pN), some hTau40 interactions unfolded between and after the unfolding force peaks of Ig27. The most probable lengths of unfolded polypeptide stretches, ΔL_i, were determined as ΔL_i1 = 17.9 ± 7.9 aa and ΔL_i2 = 38.4 ± 17.5 aa (n = 69) for sandwiched hTau40, and ΔL_i1 = 18.8 ± 5.9 aa and ΔL_i2 = 41.1 ± 9.2 aa (n = 227) for isolated hTau40 (Fig. 3C). This similarity of hTau40 interactions detected in (Ig27)₃-hTau40-(Ig27)₂ and isolated hTau40 indicated their specificity for the stretching of individual hTau40 molecules.
The N-terminal fragment, Nt40, established random interactions of low probability, suggesting that it was largely unstructured. The pronounced force peaks p1, p2, and p3 in hTau40 and K18 suggested that fold1, fold2, and fold3 were mainly established in the repeat domain. However, when unfolding the repeat domain construct, K18, the probability to detect fold1, fold2, and fold3 decreased compared with full-length hTau40. This showed that the frequency of interactions in the repeat domain increased in the presence of the termini. To further elucidate these interactions, we next stretched hTau40 in the presence of the disulfide bridge between R2 and R3 (24).

Figure 1. A, hTau40ΔK280/PP, an anti-aggregant mutant of hTau40ΔK280, in which Ile277 and Ile308 were exchanged for two prolines (red lines). B, for SMFS, Tau proteins adsorbed to amino-functionalized glass supports were picked up by the AFM tip and stretched by the AFM cantilever (probe) until their connection to the tip or the glass (gray closed circles in B-D) ruptured. Tau consists of unstructured protein regions (black lines) with intramolecular interactions (green and yellow filled circles) that fold peptide stretches of different lengths (yellow and green lines). Recording the deflection and distance of the cantilever during consecutive approach-retract cycles provides force-distance (F-D) curves of single Tau molecules. C, stretching of a molecule having no intramolecular interactions results in an F-D curve that shows one major detachment peak at the contour length, L_C, of the fully extended molecule. D, intramolecular interactions (green and yellow filled circles) can establish force barriers that are detected as additional force peaks in the F-D curve. For every additional force peak, the contour length relative to the detachment peak, ΔL_C, and the rupture force were derived. The distance to the next rupture peak, ΔL_i, in an F-D curve gives the length of the polypeptide stretch that unravels upon breaking an interaction. E, F-D curves recorded upon stretching single hTau40 molecules in PBS containing 5 mM DTT. The curves show a major detachment peak at the contour length of the Tau molecule (1 aa ≈ 0.36 nm) plus smaller force peaks at shorter contour lengths (open black circles) originating from intramolecular interactions.
Cross-linking Repeats R2 and R3 Increases Intramolecular Interactions-The in vitro aggregation rate of Tau decreases in the absence of DTT (24). It is assumed that the Cys291-Cys322 disulfide bridge between R2 and R3 (Fig. 1A), established under oxidizing conditions (no DTT), enables alternative conformations of the repeat domain that disfavor aggregation. The mechanical rupturing of a disulfide bond requires forces of 1-2 nN (44) and would thus be discernible from the much lower forces stabilizing the folds of Tau (Fig. 1E, supplemental Table S2). In the following, we probed the interactions of hTau40 in the absence of DTT.
The average contour length of mechanically stretched hTau40 decreased by ∼34 aa in the absence of DTT (supplemental Fig. S4E). This suggested that the Cys291-Cys322 bond had been formed (24). The frequency of interactions substantially increased from 1.7/molecule in PBS + DTT to 2.5/molecule in the absence of DTT (Fig. 2D). Force peaks with contour lengths of ΔL_C >100 aa increased ∼6-fold from 0.15/molecule to 0.96/molecule in the absence of DTT. The wide distribution of these force peaks suggests that formation of the Cys291-Cys322 bond increased the frequency of interactions established between random polypeptide regions. The probability to detect rupture peak p1 (ΔL_C ∼19 aa) decreased from 52% in the presence of DTT to 29% in the absence of DTT, that for rupture peak p2 (∼42 aa) decreased from 34 to 32%, and that for p3 (∼73 aa) increased from 14 to 22% (supplemental Table S1). Thus, the structural fold1 (∼19 aa) of the Tau repeat region was established less frequently in the presence of the disulfide bond bridging R2 and R3, whereas structural fold2 remained largely unaffected, and fold3 occurred at slightly increased probability. Next, we characterized electrostatic interactions contributing to the folding of the Tau repeat domain.
Salt ions in solution screen electrostatic interactions of the polypeptide (45). When increasing the monovalent electrolyte concentration from ∼150 mM (PBS, 5 mM DTT) to ∼500 mM (PBS, 5 mM DTT, 350 mM NaCl), the frequency of force peaks in hTau40 decreased from 1.7/molecule to 1.2/molecule (Fig. 2E). Similarly, the probability to detect force peaks p1 and p2 decreased from 52 to 17% and from 34 to 16%, respectively (supplemental Table S1). Decreasing the monovalent electrolyte concentration to ∼50 mM (10 mM Tris-HCl, pH 7.4, 5 mM DTT, 50 mM NaCl; Fig. 2F) increased the overall frequency of force peaks from 1.7/molecule (PBS, 5 mM DTT) to 2.1/molecule, and decreased that of p1 from 52 to 31% and that of p2 from 34 to 26% (Fig. 2D), respectively. The average forces of p1 and p2 increased at both high (PBS, 5 mM DTT, 350 mM NaCl) and low (10 mM Tris-HCl, 5 mM DTT, 50 mM NaCl) electrolyte concentrations (supplemental Table S2 and Fig. S4, B and C). These results indicate that the stability of the folds detected by force peaks p1 (∼19 aa) and p2 (∼42 aa) partly depends on electrostatic interactions. However, the majority of interactions stabilizing these folds were not affected by the electrolyte concentration and may thus not be of electrostatic nature.
The strength of interactions at p3 and ΔL_C >100 aa decreased at ∼500 mM electrolyte concentration from 129.5 ± 51.7 and 107.6 ± 47.4 pN (PBS, 5 mM DTT) to 55.7 ± 36.4 and 51.7 ± 29.5 pN, respectively (supplemental Table S2 and Fig. S4B). In ∼50 mM NaCl, the probability to detect force peak p3 increased from 14 to 21% and force peaks >100 aa from 15 to 57% (supplemental Table S1). This suggests that interactions forming fold3 and interactions of the termini >100 aa are largely of electrostatic nature. This finding is in agreement with NMR experiments indicating that electrostatic interactions between the N terminus and the hTau40 repeat domain disappear at 600 mM NaCl (12). To specify which interactions of the repeat domain play a role during Tau oligomerization and fibrillization, we investigated mutants that favor or disfavor Tau aggregation.
The distribution of force peaks in F-D curves of hTau40ΔK280 (Fig. 2G and supplemental Fig. S5, A-C) was remarkably similar to that of hTau40 (Fig. 2A). Peaks p1, p2, and p3 occurred at positions and probabilities similar to hTau40 (supplemental Table S1). However, the rupture force of p1 increased relative to its value of 90.8 pN in hTau40.

Figure 3, C: length distributions of unraveled polypeptide stretches, ΔL_i, detected in F-D curves of (Ig27)₃-hTau40-(Ig27)₂ (gray) resemble those detected for isolated hTau40 (black). A triple Gaussian fit to the ΔL_i distribution of (Ig27)₃-hTau40-(Ig27)₂ (gray line; n = 69) reveals interaction contour lengths of ΔL_i1 ∼18 aa and ΔL_i2 ∼38 aa for the sandwiched hTau40, plus the characteristic contour length of the Ig27 domains of ∼78 aa. A double Gaussian fit to the ΔL_i distribution determined for hTau40 only (black line; n = 223) reveals most probable ΔL_i of ΔL_i1 ∼19 aa and ΔL_i2 ∼41 aa. n gives the number of analyzed F-D curves.
Mutants hTau40ΔK280 and hTau40ΔK280/PP showed the three force peaks, p1, p2, and p3, at similar contour lengths as hTau40. In the case of hTau40ΔK280, the strength of the interactions stabilizing fold1 and fold2 increased, whereas that stabilizing fold3 and longer peptide stretches decreased. hTau40ΔK280/PP exhibited a reduced probability of fold1, fold2, and fold3 but favored the folding of long polypeptide stretches. Two prominent folds of hTau40ΔK280/PP involved ∼101- and ∼151-aa-long polypeptide stretches.
Heparin Strengthens Intramolecular Interactions-In vitro aggregation of hTau40 can be induced in the presence of sulfated glycosaminoglycans, such as heparin (47), or other polyanions (22). Sulfated glycosaminoglycans were shown to enhance the phosphorylation of Tau by different kinases (48,49), to inhibit Tau binding to MTs, and to co-localize with hyperphosphorylated Tau in AD brain tissue (21). To elucidate to which extent heparin binding changes the interactions in Tau, we characterized hTau40, hTau40ΔK280, and hTau40ΔK280/PP in the presence of 0.33 mM heparin (PBS + DTT + heparin).
Stretching hTau40 and hTau40ΔK280 in the presence of heparin revealed a significant increase of force peaks from 1.7/molecule (PBS + DTT) to 2.9/molecule for hTau40 (PBS + DTT + heparin; Fig. 4A, supplemental Table S1), and from 1.3/molecule to 3.1/molecule for hTau40ΔK280 (Fig. 4C). In the presence of heparin, the force peaks were distributed over the entire contour length of hTau40 and reached rupture forces up to 1500 pN. The probability to detect force peak p1 (∼19 aa) decreased in hTau40 from 52 to 22%, and in hTau40ΔK280 from 48 to 21% (supplemental Table S1). In contrast, the probability to detect p3 (∼73 aa) increased from 14 to 24% and from 10 to 27%, respectively. Heparin strongly increased the number of interactions detected at ΔL_C >100 aa, from 0.15/molecule to 1.3/molecule in hTau40 and from 0.14/molecule to 1.87/molecule in hTau40ΔK280. To prove the electrostatic nature of the heparin-induced interactions, we stretched hTau40 in the presence of heparin and ∼500 mM monovalent electrolyte (PBS + DTT + heparin + 350 mM NaCl; Fig. 3B). 90% of F-D curves of hTau40 (n = 355) resembled those recorded in ∼500 mM monovalent electrolyte without heparin (PBS + DTT + 350 mM NaCl; supplemental Fig. S4B). Most force peaks with rupture forces >300 pN disappeared, and the overall number of force peaks dropped from 2.9/molecule (PBS + DTT + heparin) to 1.9/molecule (PBS + DTT + heparin + 350 mM NaCl).
When stretching hTau40 and hTau40ΔK280 in the presence of heparin, the rupture force distributions tailed toward high forces of ~1500 pN (Fig. 4, A and C, insets). In contrast, the force peak pattern and the rupture forces of hTau40ΔK280/PP showed only minor deviations upon addition of heparin (Fig. 4D).
Exposure of hTau40 and hTau40ΔK280 to heparin increased the frequency of strong electrostatic interactions established along the entire polypeptide chain. The heparin-induced decrease in force peaks that unravel short peptide stretches was similarly observed at enhanced electrolyte concentrations (PBS, 5 mM DTT, 350 mM NaCl; Fig. 2E) and may thus be attributed to the ionic character of heparin.
Energy Landscape of Folded Conformations-The force required to unfold a polypeptide depends on the applied force-loading rate (50, 51). Quantifying the most probable unfolding forces over a range of force-loading rates enables estimation of the width and height of the free unfolding energy barrier stabilizing a folded structure. From this barrier, the kinetic and mechanical properties of the folded structure can be derived (38, 52) (supplemental Fig. S7A). In the following, we determined the energy barriers of the three folds, fold1, fold2, and fold3, detected by force peaks p1, p2, and p3. Therefore, we unfolded hTau40, hTau40ΔK280, and hTau40ΔK280/PP at five different pulling velocities. Fitting of the DFS spectra (supplemental Fig. S7, B-L) approximates the parameters characterizing the unfolding energy barriers (see "Experimental Procedures" and supplemental "Materials").
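The standard Bell-Evans description of dynamic force spectroscopy underlies such fits: the most probable rupture force F* grows logarithmically with the force-loading rate r. The exact fitting equations used here are given in the supplement; the relation below is the textbook form, shown for orientation:

    F^{*}(r) = \frac{k_{B}T}{x_{u}} \ln\!\left( \frac{r \, x_{u}}{k_{0} \, k_{B}T} \right)

so plotting F* against ln(r) yields x_u from the slope and k_0 from the intercept.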
In hTau40, the equilibrium unfolding rates, k_0, of fold1, fold2, and fold3 were determined as 0.12 s⁻¹ (fold1), 0.17 s⁻¹ (fold2), and 0.13 s⁻¹ (fold3). These rates slightly decreased in hTau40ΔK280, ranging from 0.05 s⁻¹ (fold1) to 0.06 s⁻¹ (fold2 and fold3). Accordingly, the transition barrier heights, ΔG‡, varied in hTau40 between 20.2 k_BT (fold2) and 20.5 k_BT (fold1 and fold3), and ranged from 21.2 k_BT (fold2 and fold3) to 21.5 k_BT (fold1) in hTau40ΔK280 (supplemental Table S3). In hTau40ΔK280/PP, the unfolding rates and barrier heights of fold1, fold2, and fold3 were similar to hTau40 and hTau40ΔK280 (supplemental Table S3, supplemental Fig. S7). The distance, x_u, between the folded and transition state approximates the width of the unfolding energy barrier (supplemental Fig. S7A). Together, ΔG‡ and x_u were used to estimate the spring constants, k, and mechanical properties of the folded structures. Fold1 had an x_u of ~0.2 nm in hTau40, and an x_u of ~0.1 nm in hTau40ΔK280 and hTau40ΔK280/PP. Thus, k of fold1 increased ~3-fold from 4.2 N/m in hTau40 to 14.5 N/m in hTau40ΔK280 and 13.9 N/m in hTau40ΔK280/PP. In contrast, k of fold2 and fold3 decreased from 13.7 and 13.9 N/m in hTau40 to 5.4 and 4.8 N/m in hTau40ΔK280, and to 5.4 and 1.5 N/m in hTau40ΔK280/PP (supplemental Table S3), respectively.
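The quoted spring constants are consistent with treating each fold as a harmonic well of depth ΔG‡ and width x_u, so that k ≈ 2ΔG‡/x_u². A minimal numerical check (not the authors' code); with the rounded values quoted above it reproduces the 4.2 N/m of fold1 in hTau40, and the other entries follow once unrounded x_u values are used:

    KBT = 4.11e-21  # J at room temperature

    def spring_constant(dG_in_kBT, xu_nm):
        # Harmonic-well estimate k = 2*dG / xu^2, returned in N/m
        return 2.0 * dG_in_kBT * KBT / (xu_nm * 1e-9) ** 2

    print(round(spring_constant(20.5, 0.2), 1))  # fold1, hTau40: 4.2 N/m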
Compared with hTau40, the three folds of the repeat domain showed a slightly increased lifetime (~1/k_0) in hTau40ΔK280. In hTau40ΔK280 and hTau40ΔK280/PP, fold1 exhibited a higher structural rigidity, whereas fold2 and fold3 showed lower rigidity than in hTau40. Both results suggest that the deletion of Lys280 stabilized fold1 and softened fold2 in the repeat domain of hTau40. However, the proline insertions that prevent hTau40ΔK280/PP from aggregation did not affect fold1 and fold2. In contrast, fold3 of the hTau40ΔK280/PP mutant showed a ~3-fold increased lifetime and a ~9-fold reduced rigidity, indicating a softening of fold3.
Different Interaction Sites Contribute to Order and Disorder-Intrinsic disorder plays a pivotal role for the aggregation of proteins into amyloid-like structures (53). Only ~70% of hTau40 molecules established intramolecular interactions strong enough (>10 pN) to be detected by SMFS. The force peak pattern of Tau appeared highly variable. Reproducibly occurring force peaks reflected reproducibly folded structures that correlated with the hTau40 repeat domain. Randomly distributed force peaks reflected interactions dispersed along the polypeptide. These insights confirm the IDP character of Tau, which is largely characterized by random coil-like conformations with a propensity to adopt certain residual conformations (54).
Hierarchy of Folding-The sequence in which serial interactions of a polypeptide rupture during mechanical unfolding relies on their interaction strengths (51). Generally, weak interactions rupture before stronger ones. However, if a structure stabilized by a weak interaction is embedded in a more stable structure, the stronger interaction ruptures before the weaker one. In hTau40, interactions folding the shorter polypeptide stretches fold1 (p1, ~19 aa, ~90 pN) and fold2 (p2, ~42 aa, ~80 pN) required lower unfolding forces than the longer polypeptide stretches of fold3 (p3, ~73 aa, ~130 pN) and >100 aa (~100 pN) (supplemental Table S2). Thus, fold1 and fold2 were embedded into fold3, which was stabilized by stronger interactions (supplemental Fig. S1F). Force peaks detected at contour lengths >100 aa reflect interactions of the Tau termini. Our findings likely combine two current Tau folding models. One model proposes various interactions of transient small structures (6 β-strands, 2 α-helices, 3 polyproline II helices (PPII)) formed in the Tau repeats and the flanking regions, which are spaced by <50 aa (12). The other, the paper clip model (11), describes the transient interaction of the Tau termini with the repeat domain and with each other. This paper clip folding of Tau involves longer polypeptide stretches of ≥100 aa.
Specific and Unspecific Interactions of the Repeat Domain-Approximately 70% of full-length Tau molecules established interactions that fold 19- (fold1) and 42-aa (fold2) long polypeptide stretches of the repeat domain. At high (500 mM) and low (50 mM) monovalent electrolyte concentrations, the frequency of these two characteristic folds decreased (Fig. 1, E and F, supplemental Table S1). The frequency of fold3 (~73 aa) decreased slightly with increasing electrolyte and increased with decreasing electrolyte concentration. This indicates that fold1, fold2, and fold3 are stabilized by electrostatic and by other interactions, e.g. hydrophobic or polar ones. Electrostatic interactions folding the repeat domain may involve negatively charged residues in R4 (Glu338, Glu342, Asp345, Asp348) and positive charges such as Lys267/His268, Lys298/His299, His329/His330/Lys331, and His362 (9).
The stability of the most frequent folds, fold1 and fold2, increased in hTau40ΔK280 and hTau40ΔK280/PP (supplemental Table S2). Thus, the ΔK280 mutation strengthened fold1 and fold2, presumably through the enhanced β-strand character of PHF6* in R2. The two proline mutations of hTau40ΔK280/PP, located near the N-terminal ends of PHF6* in R2 and PHF6 in R3 (Fig. 1A), did not affect the stability of fold1 and fold2 (supplemental Table S2). However, the frequency of fold1 and fold2 decreased in hTau40ΔK280/PP, whereas that of fold3 and longer polypeptide stretches (>100 aa) increased (supplemental Table S1). It may be concluded that interactions introduced by the proline substitutions stabilize protein conformations that prevent Tau aggregation, far beyond the local disruption of the β-structure in R2.
Our results indicate that interactions stabilizing fold1 and fold2 involve the PHF6* β-strand, in which the ΔK280 mutation locates and elevates the propensity to form β-stranded structures. A plausible explanation for our findings is provided by an intramolecular stacking of β-strands in repeats R2, R3, and R4 (9). Assuming an anti-parallel stack of β-strands (Ser285-Gly304 for R2 stacked with R3, Ser316-Gly335 for R3 stacked with R4; Fig. 1A), the mechanical unfolding of the repeat domain would unravel polypeptide segments having the length (ΔL_i ~20 aa) of fold1 and fold2 (supplemental Fig. S1D). Interestingly, the rupture forces of fold1 (~90 pN) and fold2 (~70 pN) in hTau40 resemble those observed for the mechanical unzipping of anti-parallel β-strands (55). However, the precise structural localization of fold1 and fold2 remains unclear. It may be possible to locate these folds by applying SMFS to Tau mutants in which amino acid residues of the repeat domain are systematically manipulated.
N-terminal Interactions Prevent Tau Aggregation-All full-length Tau isoforms exhibited irregularly distributed interactions that folded polypeptide stretches >100 aa (Fig. 2, A, G, and H, and supplemental Table S1). Randomly distributed interactions observed for the N-terminal fragment of hTau40 are consistent with the unstructured projection of the N-terminal half of Tau when bound to MTs (56, 57) and its brush-like protrusion from Tau fibers (58, 59). Such random-coil behavior may originate from the high charge density and low hydrophobicity of the N terminus (60). About 50% of "randomly established" interactions depended on electrolytes, indicating that they were partly electrostatic. Similarly, electrostatic interactions established between the positively charged repeat domain and proline-rich regions, and negatively charged regions in the N and C termini (Fig. 1A), are thought to fold hTau40 into a paper clip conformation (11). Whereas interactions of the N and C termini with the repeat domain inhibit Tau aggregation (61, 62), the C-terminal truncation of hTau40, facilitated in vivo by caspase (63), accelerates Tau aggregation (15).
The number of dispersed interactions stabilizing polypeptide stretches >100 aa increased ~6-fold when forming the disulfide bond between repeats R2 and R3 (Fig. 2D, supplemental Table S1, supplemental Fig. S4D). The most probable rupture force of these interactions decreased from ~110 to ~70 pN in the absence of DTT (supplemental Table S2). Thus, the intact disulfide bond favors Tau conformations that enhance weak unspecific interactions of the termini. Opening of the disulfide bond by addition of DTT favors intermolecular interactions that accelerate Tau aggregation in vitro (24, 64). Accordingly, unspecific interactions of the C and N termini, which prevent hTau40 from aggregation, increased in the absence of DTT. Similarly, the formation of such unspecific interactions was supported at elevated electrolyte concentrations of 500 mM (Fig. 2E) that hinder Tau aggregation (25). Low electrolyte concentrations (50 mM), which prevent Tau molecules from binding to each other in the absence of heparin (26), increased the interactions at p3 (~73 aa) and ΔL_C of ~130 aa (Fig. 2F). These two interactions may thus favor conformations of soluble Tau that prevent aggregation. Similarly, the frequency of unspecific interactions stabilizing polypeptide stretches >100 aa was ~2-fold increased in the anti-aggregant mutant hTau40ΔK280/PP compared with wild-type and pro-aggregant mutant Tau.

(TABLE 1. Interrelation of aggregation propensities and interactions for hTau40 in different buffer conditions, hTau40 constructs, and pro- and anti-aggregant mutants of hTau40. Interaction frequencies determined by SMFS for hTau40 in PBS/DTT were taken as reference values to indicate the increase (arrows pointing upwards) and decrease (arrows pointing downwards) of interaction frequencies. The number of arrows depicts the amount of increase or decrease (0 indicates no aggregation; = indicates 0-20% in-/decrease; one arrow indicates 21-100% in-/decrease; two arrows indicate 101-500% in-/decrease; three arrows indicate >500% in-/decrease). Changes of interactions in the presence of 0.33 mM heparin are given with respect to the Tau isoform in the absence of heparin; high-force interactions in the presence of heparin are marked #.)

(Figure 5 legend, fragment. A, ..., fold3, fold2, and fold1. B, schematic energy landscapes for the stretching of hTau40 (i), hTau40ΔK280 (ii), and hTau40ΔK280/PP (iii). The three main intramolecular interactions p1, p2, and p3 (green ellipses) establish energy wells in the unfolding landscape of Tau. The widths (x axis) of the three ellipses at 19 (p1), 42 (p2), and 73 aa (p3) estimate the widths x_u of the energy wells stabilizing fold1, fold2, and fold3. The depth of every energy well, ΔG‡, is indicated by the color-coded scale bar. Different Tau conformations show different combinations of interactions in the termini and the repeat domain and stepwise unfold through various unfolding pathways (arrows in B, i-iii). Weak interactions of the termini break before the three abundant folds, fold1, fold2, and fold3, in the repeat domain. The larger number of unspecific, long peptide folds and the low stiffness of the fold3 interaction in hTau40ΔK280/PP (iii) may protect this Tau mutant from establishing aggregation conformations. In contrast, the ~19- (fold1) and ~42-aa (fold2) interactions are strengthened in hTau40ΔK280 (ii) and may thus be important for the aggregation of Tau.)
Our data support a mechanistic model in which unspecific weak electrostatic interactions stabilize a variety of protein conformations that prevent Tau from aggregating (Table 1, Fig. 5A). In conditions that disfavor aggregation, Tau establishes specific short (≤40 aa) and unspecific long (>100 aa) polypeptide folds. The balance of these interactions stabilizing short and long polypeptide stretches changes in conditions that favor Tau aggregation. At aggregation conditions, interactions stabilizing long polypeptide stretches are reduced and interactions stabilizing the repeat domain dominate (Table 1). This mechanism shows the complexity of interactions guiding the (mal)function of Tau. It remains to be shown whether such a balance of intramolecular interactions folding different polypeptide stretches resembles a general mechanism for regulating the misfolding and aggregation of amyloidogenic IDPs (65-67).
Aggregation Accelerator Heparin Changes Molecular Conformations of Tau-Heparin and other polyanions accelerate the in vitro aggregation of hTau40 and hTau40ΔK280 (21, 68, 69). Heparin exposes a homogenous high density of negative charges along its sugar backbone and binds via nonspecific electrostatic interactions to proteins of opposite charge (70). The binding sites of heparin are suggested in the Tau repeat domain and the up- and downstream flanking P2 and R' regions (71, 72), where lysine and arginine residues expose positive charges (Fig. 1A).
SMFS showed that heparin introduces a large number of strong interactions in hTau40 and hTau40ΔK280, requiring rupture forces up to ~1500 pN (Fig. 4, A and C). These interactions were randomly distributed along the polypeptide chain of Tau and superimposed on the interactions established in the absence of heparin. High electrolyte concentrations (500 mM monovalent ions) cancelled most heparin-induced "high force" interactions (Fig. 4B), which are thus assumed to be electrostatic in nature. Stabilization of certain molecular conformations upon substrate binding is a common mechanism for IDPs to fulfill variable functions (73). Because heparin catalyzes Tau aggregation into fibrils, we assume that the strong interactions established in the presence of heparin force Tau into conformations that favor the assembly of β-strand motifs in the repeat domain with those of other Tau molecules. This model of heparin-induced Tau aggregation is based on conformational restrictions of monomeric Tau, regardless of the a priori aggregation propensity of the Tau isoform. It also applies to the pro-aggregant mutant hTau40ΔK280 and explains the elevated aggregation speed of hTau40ΔK280 in the presence of heparin.
The interactions of the anti-aggregant mutant hTau40ΔK280/PP did not change in the presence of heparin. We conclude that the proline mutations prevented heparin from establishing strong interactions with hTau40ΔK280/PP (Fig. 5B). This agrees with the finding that heparin fails to induce aggregation of hTau40ΔK280/PP (19). Apparently, both β-strand-breaking proline substitutions (Fig. 1A) efficiently suppress interactions between the repeat domains of two adjacent hTau40ΔK280/PP molecules, which are essential for aggregation (20).
The Unfolding Energy Landscape of Tau-DFS experiments can describe the unfolding free-energy barriers stabilizing a folded protein. The sequence of all rupture events in a F-D curve describes the unfolding barriers taken by the protein funneling along the unfolding energy landscape (74). We observed that Tau establishes multiple combinations of interactions that stabilize different unfolding intermediates. The three force peaks at p1, p2, and p3, which quantify the interactions stabilizing the three folds fold1, fold2, and fold3, occurred independently of each other (supplemental "Materials" and Fig. S6). Tau can thus unfold via one, two, or all three of these unfolding intermediates (Fig. 5). These prominent unfolding intermediates were superimposed by highly variable interactions of the termini that folded long polypeptide stretches of >100 aa and induced a heterogeneous set of conformations. Such highly dispersed interactions of low probability introduce many energy wells into the unfolding energy landscape. Each of these wells potentially traps a conformational substate of Tau. At conditions that disfavor aggregation in vitro, such as elevated electrolyte concentrations and absence of DTT, these interactions became more frequent, resulting in a rugged unfolding energy landscape of Tau (Table 1).
Similarly, the anti-aggregant mutant hTau40ΔK280/PP strongly increased the frequency of interactions stabilizing longer polypeptide stretches. Thus, this mutant exhibits a rougher energy landscape surface with an increased number of energy wells compared with hTau40. We conclude that interactions folding long polypeptide stretches create energy wells that trap Tau conformations that prevent aggregation. In contrast, the pro-aggregant mutant hTau40ΔK280 strengthened and favored interactions stabilizing the folds of the repeat domain but did not alter interactions folding longer polypeptide stretches. Interactions that fold the repeat domain thus appear to stabilize Tau conformations that promote aggregation (Table 1).
Heparin introduced numerous strong electrostatic interactions in both hTau40 and hTau40ΔK280. These interactions occurred in addition to the ones folding the repeat domain and the termini in the absence of heparin. We assume that heparin-induced interactions promote Tau aggregation by stabilizing conformations that favor interactions between the hexapeptide motifs PHF6* and PHF6 in the repeat domains of adjacent Tau molecules. Being kinetically trapped in such aggregation-prone conformations would substantially increase the probability of intermolecular interactions of Tau and thus initiate aggregation. Furthermore, the conformational restriction of Tau by heparin binding may also provide better access for kinases to the phosphorylation sites in Tau. This would explain the heparin-induced increase in phosphorylation and the co-localization of heparin with hyperphosphorylated Tau in vivo (21, 48, 49).
|
v3-fos-license
|
2023-10-06T13:53:38.126Z
|
2023-10-06T00:00:00.000
|
263673939
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bmcmededuc.biomedcentral.com/counter/pdf/10.1186/s12909-023-04722-2",
"pdf_hash": "3efa373e49105371fed491eaecf338d24c07a7a1",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41903",
"s2fieldsofstudy": [
"Medicine",
"Education"
],
"sha1": "88f1bb9e826970b4061efa8c493de61bd0d1d384",
"year": 2023
}
|
pes2o/s2orc
|
Pre-clerkship procedural training in venipuncture: a prospective cohort study on skills acquisition and durability
Background The effectiveness of simulation-based training for skill acquisition is widely recognized. However, the impact of simulation-based procedural training (SBPT) on pre-clerkship medical students and the retention of procedural skills learned through this modality are rarely investigated. Methods A prospective cohort study was conducted among pre-clerkship medical students. Learners underwent SBPT in venipuncture in the skills laboratory. Assessments were conducted at two main points: 1) immediate assessment following the training and 2) delayed assessment one year after training. Learner self-assessments, independent assessor assessments of procedural competency, and communication skills assessments were conducted in both instances. The students were assessed for their competency in performing venipuncture by an independent assessor immediately following the training in the simulated setting and one-year post-training in the clinical setting, using the Integrated Procedural Performance Instrument (IPPI). The students' communication skills were assessed by standardized patients (SPs) and actual patients in the simulated and clinical settings, respectively, using the Communication Assessment Tool (CAT). Results Fifty-five pre-clerkship medical students were recruited for the study. A significant increase was observed in self-confidence [mean: 2.89, SD (standard deviation) 0.69] and self-perceived competency [mean: 2.42, SD 0.57] in performing venipuncture, which further improved at the delayed assessment conducted in the clinical setting (p < 0.001). Similarly, the IPPI ratings showed an improvement [immediate assessment: mean: 2.25, SD 1.62; delayed assessment: mean: 2.78, SD 0.53; p < 0.01] in venipuncture skills when assessed by an independent assessor blinded to the study design. A significant difference (p < 0.01) was also observed in doctor-patient communication when evaluated by SPs [mean: 2.49, SD 0.57] and patients [mean: 3.76, SD 0.74]. Conclusion Simulation-based venipuncture training enabled students to perform the procedure with confidence and technical accuracy. Improved rating scores received at a one-year interval denote the impact of clinical training on skills acquisition. The durability of skills learned via SBPT needs to be further investigated. Supplementary Information The online version contains supplementary material available at 10.1186/s12909-023-04722-2.
Background
The achievement of clinical competency is a gradual process, with repetitive training being a central element in the continuum of medical education [1, 2]. The pre-clinical period, fraught with the teaching of basic sciences, is used less to equip students with the skills needed at the bedside to participate in patient care during clerkships [3]. Thus, clerkships are still the primary source for learning and acquiring clinical skills in traditional medical curricula [4-6]. However, traditional curricula are no longer recommended, and many medical schools have undertaken curricular reforms to move towards integrated curricula [7].
However, basic clinical skills acquisition during clerkships occurs in a rather "haphazard" fashion [6, 8-12]. Practicing invasive procedures on patients without proper training poses an ethical issue [13]. A growing number of learners, finite resources, and an increasing emphasis on patients' right to trained care hinder medical students' learning of procedural skills in the clinical setting [14]. Further, students report inadequate supervision by clinical teachers, a lack of assessments and feedback on learner performance, and reduced opportunities for learning [6, 8, 12] as barriers to learning procedural skills at the bedside. Although patients are willing to accept trainee involvement in nonprocedural care, they are usually reluctant to allow medical students to perform procedures on them [15, 16]. Therefore, the opportunity to develop basic procedural skills in the ward-based setting has become a challenge.
Consequently, several studies report a lack of clinical experience and competency in performing essential procedures among medical students and resident physicians [17-21]. In a single-center study, residents experienced a discrepancy between the actual and desired competency levels for basic procedural skills [22]. However, mastering these procedures is essential for medical students [23-25]. Hence, to bridge the gap between expectations and learning experiences in clinical clerkships, simulation-based procedural training (SBPT) has been increasingly integrated into medical curricula [26, 27].
Hence, SBPT in skills laboratories has taken on a central role in training procedural skills. SBPT allows students to learn in a safe environment where they can engage in deliberate practice to achieve proficiency [28]. Teaching/learning with SBPT is usually structured and employs different instructional approaches, including the "Four-Step Approach" devised by Rodney Peyton [29, 30]. Each learning session is reinforced by a debrief session, where students are encouraged to reflect upon their performance fortified by educational feedback, a unique feature of simulation-based medical education [31]. SBPT employs part-task trainers [32], peers or near peers [33], and hybrid simulators (part-task trainers coupled with Standardized Patients, SPs) [34].
The effectiveness of SBPT is widely recognized. Compared with standard or no training, SBPT was found to enhance learner competency [35] and improve the performance of basic clinical skills when assessed in OSCEs [36, 37]. Peer-led learning has demonstrated effectiveness in skills acquisition equal to teacher-led instruction with SBPT [38, 39]. SBPT has led to an increase in the number of procedures students perform in the wards [40]. Thus, Remmen et al. assumed that skills training better prepares students for clinical clerkships [41]. Students were found to be less anxious and more confident at the bedside after procedural training in the pre-clerkship period [42]. Therefore, SBPT is recommended to be integrated as a longitudinal training course into medical curricula [43], starting from the pre-clerkship period [44].
In contrast to a growing literature on procedural performance among undergraduates in the West [45, 46], there has been no previous objective skills assessment of undergraduates in South Asia, where the curricula, resources, and educational opportunities are in stark contrast. Specifically, we did not find evidence of the implementation or effectiveness of a pre-clerkship SBPT course available for medical students across the South Asian subcontinent, including Sri Lanka. In addition, the literature on the retention of procedural skills acquired through SBPT lacks robust evidence [35, 46-50], with critical reviews of simulation for procedural skills training rarely conducted in the last decade [45, 51]. Despite the well-established phenomenon of technical skill decay [52], no study has assessed procedural skill retention in this population. The limited available research on the natural history of technical skills among undergraduates has focused on basic and advanced life support [53, 54], with recommendations to investigate skill decay in relation to context and tasks [46].
We aimed to address two gaps in the literature. First, this study aimed to assess the impact of a simulation-based procedural skills training program among pre-clerkship medical students. Second, we aimed to measure the durability of medical students' venipuncture skills. Specifically, the study asks the following questions: 1) Do pre-clerkship medical students demonstrate improved self-confidence and perceived competency with simulation-based venipuncture training? 2) Do pre-clerkship medical students demonstrate competency in technical and communication skills in performing venipuncture when assessed by an independent assessor? 3) Do students retain the skills learned through SBPT when assessed by an independent assessor at a one-year interval?
Context of the study
Sri Lanka, the setting for this study, is a South Asian island nation whose medical education was influenced by the British [55]. All medical schools in Sri Lanka are affiliated with public universities. Eleven government-funded universities provide undergraduate medical education; these, including the Faculty of Medicine, University of Kelaniya, where this study was conducted, have undergone curricular reforms to shift away from traditional didactic methods, advocating for student-centered teaching-learning approaches [55]. However, most of these changes focus on delivering the taught curriculum, with minimal attention to the teaching/learning methods used during clinical training.
The undergraduate medical curricula of universities in Sri Lanka, including the one where we conducted this study, comprise five years. The medical course is divided into 2-year pre-clinical, 2-year para-clinical, and 1-year clinical phases. The pre-clinical phase included no clinical contact and was focused on teaching basic sciences. At the time of the study, most medical schools, including the University of Kelaniya, were equipped with skills laboratories where simulation-based procedural and communication skills training were conducted to varying extents. A single skills laboratory group at the University of Kelaniya would have about 60 students. Opportunities for redundant training and deliberate practice are virtually nonexistent due to the resource-limited nature of the local context. The few procedures trained during the pre-clinical phase are thus not revisited in the following years. Although these skills laboratory classes were mandatory, the skills taught were not formally assessed.
The focus of the para-clinical phase was on teaching applied sciences. In affiliated state hospitals, these students participated in half-day clinical rotations in General Medicine, General Surgery, Pediatrics, Gynecology and Obstetrics, Psychiatry, and related subspecialties. The educators rely on the clerkships for students to learn and practice procedural skills, which start after a 21-month (mean) interval.
The clinical phase was entirely dedicated to clinical rotations in General Medicine, General Surgery, Pediatrics, Gynecology and Obstetrics, and Psychiatry. Each clinical rotation between years 3-5 is 4-8 weeks, with students 'attached' to one or more consultants in the ward/unit. During the clerkships, the students are required to achieve procedural competency through observation, legitimate peripheral participation [56], and practicing procedures on actual patients. A single clerkship group (years 3-5) at the institution where we conducted this study consisted of 30-40 students.
Study setting
A prospective cohort study [57] was conducted among pre-clerkship medical students in a metropolitan university in Sri Lanka from 2020-2021. The study focused on venipuncture, a basic procedural skill required of a resident physician. All second-year medical students who agreed to participate were included in the study.
The study was conducted in two phases. In phase I, all 55 second-year medical students of the Faculty of Medicine, University of Kelaniya, Sri Lanka, who volunteered and were eligible for the study were recruited. All students underwent SBPT on venipuncture using hybrid simulators (part-task trainers coupled with SPs) in the skills laboratory. Self-confidence and perceived self-competency in performing venipuncture were assessed before and after the training. An independent assessor assessed venipuncture performance, and the SPs assessed communication skills.
In phase II of the study, this cohort of students was re-assessed in the clinical setting one year after SBPT. The students rated their self-confidence and perceived self-competency. Subsequently, they performed venipuncture on actual patients, and an independent assessor assessed the skills. Actual patients assessed their communication skills in the clinical setting. Figure 1 demonstrates the methodology.
Student sample
The primary inclusion criterion was being a second-year medical student at the Faculty of Medicine, University of Kelaniya. Students with previous experience performing procedures such as venipuncture, IV cannulation, or intravenous injections were excluded from the study. Information about these exclusion criteria and student characteristics (age and gender) was obtained through a self-administered questionnaire handed over to the students one week before the commencement of the study. We were intentionally inclusive to give all volunteering second-year medical students the opportunity to receive training.
Standardized patients (SP) sample
SPs were recruited for phase I of the study. They were coupled with mannequins to enable role-play. All SPs who acted as patients in the study received written role-play instructions. The SPs were used in the simulated setting for the participants to learn communication skills concerning venipuncture while performing venipuncture on the task trainer. SPs were instructed on using the assessment tool to evaluate the students' communication skills.
Patient sample
Patients were recruited for the study in phase II. Individuals taking anticoagulant drugs, patients diagnosed with Hepatitis B, C, or HIV, critically ill patients, and patients unable to give written consent were excluded from participation. In addition, patients diagnosed with coagulopathies and heavy smokers were also excluded from the study. Only patients indicated for blood sampling were recruited. These details were gathered from the bed-head tickets of patients, and only eligible patients were requested to participate in the study.
Ethics
Ethics approval was granted by the ethics review committee of the Faculty of Medicine, University of Kelaniya (P/233/11/2019). Informed written consent was obtained from all volunteering students, SPs, and patients. Further, permission for conducting the study in the clinical setting was obtained from the Director of the Colombo North Teaching Hospital and the relevant consultants in the wards. Study participation was voluntary, and all participants were assured of anonymity and confidentiality. The student participants were assured that participation, withdrawal from the study, or the assessment scores they received during the study would not affect their clinical training and assessments in any way. All volunteering participants were allowed to refuse or withdraw from the study at any time. They were assured that refusal to participate or withdrawal from the study would not affect the care and treatment they received in the ward. All these criteria were applied in order to minimize the risk of potential harm to both students and volunteer patients.
Phase I

Pre-interventional questionnaire for student participants
Students were given a self-administered questionnaire to gather baseline data (i.e., age, gender) and to rate their self-perceived confidence and competency levels in performing venipuncture on actual patients. The questionnaire was developed from published literature [37, 58]. We pre-tested the developed questionnaire with a selected group of ten first-year medical students and five clinicians who were medical educators. Although second-year medical students would best represent the study participants, we could not invite them due to the possibility of any one of them being included in the study. We chose clinician academics for the pre-test group to further improve the quality of the questionnaire. The pre-test group participants were asked to complete and critique the questionnaire using several criteria, including the adequacy of instructions, clarity of questions (to identify incongruent and vague statements), comprehensiveness, and rating methods. Pre-testing was done to ensure relevance and acceptability to the participating students. In addition, the pre-test group was asked to suggest corrections and recommendations for inclusion in the instrument. The modified and refined questionnaire was used for data collection.
The students were asked to rate their self-confidence on a 5-point Likert scale (1 = beginner, 5 = master). The students rated their overall self-perceived competency to perform venipuncture on a 4-point Likert scale (1 = unable to perform, 2 = competent to perform with major assistance, 3 = competent to perform with minor assistance, 4 = competent to perform independently). Major assistance was defined as assistance required in one or more of the three major steps considered essential in performing the task: a) selecting the vein, b) selecting the site of venipuncture, or c) inserting the needle into the vein. Minor assistance was defined as needing assistance in one or more of a) asepsis, b) tying the tourniquet, c) dressing the venipuncture site, etc.
Intervention
The cohort of second-year medical students recruited to the study underwent SBPT in venipuncture in the skills laboratory during 2-h training sessions. The training was conducted in small groups, with each training group consisting of 3-4 students per hybrid simulator. They also revisited and trained on venipuncture-associated patient-physician communication during the training session. The intervention was carried out as a role-play using a hybrid simulator. The part-task trainer was used to train the technical skills of venipuncture, and the SP was used for learners to practice communication during venipuncture.
The students received instruction on the technical aspects of venipuncture according to Rodney Peyton's 'Four-Step Approach' [30]. They trained on a part-task-trainer model in the shape of a human arm (serial number: 312029 T; name: "Multi-Venous IV Training Arm"; purchased via Laerdal, New York, USA).
The students were trained in communication skills using SPs, with whom they practiced doctor-patient communication. Prior to recruitment to the study, they had been trained in communication skills using the Agenda-Led Outcome-Based Analysis (ALOBA) scheme developed for simulation-based learning of communication skills [59]. During the intervention, the instructors revisited the concepts of doctor-patient communication. They encouraged learners to practice venipuncture-associated communication skills with the SP while performing venipuncture on the task trainer. The exercise was carried out as a role-play to create a more realistic environment that enhanced the students' involvement in the learning experience and to support the acquisition of doctor-patient communication [33]. The SPs were given detailed role-play instructions by the instructor.
After the instructor demonstrated the procedure using the hybrid simulator, the instructor allowed the participants to practice on the simulator while providing direct, specific feedback on technical performance and communication. We allowed the participants to practice as many times as they desired, either on particular steps or on the entire procedure from start to finish. Per mastery learning practices [60], they iteratively received direct feedback and targeted practice on the steps that were not achieved until they could independently complete the entire procedure. Equal emphasis was placed on the self-contained, practical exercise of venipuncture on a part-task trainer and on doctor-patient communication.
Outcome assessments
The participants were assessed for their competency in performing venipuncture in two instances: immediately following the training session and in a delayed assessment. The immediate assessment took place following the conclusion of the venipuncture training sessions; the participants performed venipuncture using hybrid simulators at the clinical skills laboratory. The delayed assessment occurred in the clinical (i.e., ward) setting one year post-venipuncture training, where the participants were requested to perform venipuncture on actual patients. The outcomes measured in both instances were: 1) self-assessments of confidence and competency, 2) independent assessor assessment of procedural competency, and 3) SP/patient assessment of communication skills. Additionally, the number of procedures each student recalled performing in the prior year was collected using the self-administered questionnaire in phase II of the study that gathered data on self-confidence and self-perceived competency.
1) Assessment of self-confidence and self-competency
Post-intervention, the same pre-test questionnaire was administered again. Participating students rated their confidence and competency in performing venipuncture on real patients. The students rated their self-confidence on a 5-point Likert scale (1 = beginner, 5 = master) and their overall self-assessed competency to perform venipuncture on a 4-point Likert scale (1 = unable to perform, 2 = competent to perform with major assistance, 3 = competent to perform with minor assistance, 4 = competent to perform independently). In addition, they were asked about their perception of the teaching session in an open-ended question.
2) Assessment of trained skills by an independent assessor

Students' performance was assessed by an independent assessor blinded to the study design; using an independent and blinded assessor removes possible bias in assessment. The assessor was an experienced clinician actively involved in medical student education with vast experience in teaching and assessing clinical skills. The assessor received instruction on how to use the IPPI from the principal investigator of this project prior to the commencement of the study. This assessment was conducted using the Integrated Procedural Performance Instrument (IPPI) proposed by Kneebone et al. [61]. The IPPI is designed to assess procedural competencies where task trainers are combined with SPs to better approximate actual clinical situations. The IPPI consists of nine items: introduction/establishing rapport, explanation of the procedure, consent, preparation for the procedure, technical performance of the procedure, maintenance of asepsis, closure of the procedure, professionalism, and overall ability to perform the procedure (technical and professional skills). Performance is graded from below average to above average (Additional file 1). In this study, we divided the IPPI into three main subcategories, items that describe "technical aspects," items that mainly describe "communication skills," and items that describe the overall ability as "overall performance," and conducted a sub-analysis.
All participants were given a maximum of three attempts to perform venipuncture, after which the facilitator stopped the student from further attempts at the procedure.
3) Assessment of communication skills by SPs
The SPs assessed the students' performance with a modified and translated Communication Assessment Tool (CAT). The CAT, developed for patients to rate clinicians' interpersonal and communication skills, has shown evidence of validity in various contexts [62]. The translated and modified CAT was pre-tested and piloted prior to the commencement of the study. SPs were trained by the principal investigator to assess communication skills using the translated CAT.
Phase II

Delayed assessment of competency in the clinical setting
The cohort of pre-clerkship students who received SBPT on venipuncture was recruited back to the study one year after venipuncture training. This year was mainly dedicated to pre-clerkship learning and end-of-year assessments. By the time they were recruited back to the study, they had proceeded to the 3rd year of the medical course and had undergone one month of clerkships in medical or surgical wards at the Colombo North Teaching Hospital, Sri Lanka. At the time of this assessment, the participants had performed venipunctures on real patients, with a frequency ranging between 2-5 venipunctures per student during the month of training in their medical or surgical rotation.
1) Assessment of self-confidence and self-competency
Before performing venipuncture on real patients, the students were given the same questionnaire used previously to assess their level of self-confidence and perceived self-competency in performing venipuncture on an actual patient.
2) Assessment of trained skills by an independent assessor
All study participants performed venipuncture on real patients in the clinical setting. The participants were given a maximum of only two attempts at venipuncture, under the supervision of the principal investigator or a qualified, trained medical officer.
The same independent assessor, who remained blinded to the study design, assessed the students' performance. This assessment was conducted using the IPPI [61].
3) Assessment of communication skills by real patients
The patients rated the students' communication skills using the CAT [62]. The questionnaire was interviewer-administered; extended faculty staff members blinded to the study design were responsible for administering the questionnaire to the patients.
Statistical analysis
In this study, medical students served as their own controls for statistical analyses [57]. Comparisons between pre- and post-intervention variables were made using the Wilcoxon signed-rank test. Data are presented as means with standard deviations (SD) and medians with 25th and 75th centiles. Distributions of group characteristics (age and gender) are presented as percentages. Responses for each variable were tested for normality with the Shapiro-Wilk test. The Wilcoxon signed-rank test was used to compare the means of pre- and post-intervention variables, including the self-assessments and IPPI ratings, and the medians of the CAT ratings. G*Power was used to estimate that a sample of 26 would be sufficient to detect an effect size of Cohen's d = 0.45 (α = 0.05) with 80% power for the Wilcoxon signed-rank test. A p-value less than 0.05 was considered statistically significant. Raw data were processed using Microsoft Excel, and SPSS software version 22 (Armonk, NY) was used for statistical analysis.
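As a concrete illustration of the paired, non-parametric comparison described above, here is a minimal sketch using scipy rather than SPSS; the arrays are hypothetical pre/post Likert ratings, not study data:

    import numpy as np
    from scipy.stats import shapiro, wilcoxon

    pre = np.array([1, 2, 1, 1, 2, 1, 2, 1, 1, 2])   # hypothetical pre-SBPT self-ratings
    post = np.array([3, 3, 2, 3, 4, 3, 3, 2, 3, 4])  # hypothetical post-SBPT self-ratings

    # Normality check of the paired differences (motivates the non-parametric test)
    print(shapiro(post - pre))

    # Two-sided Wilcoxon signed-rank test; students serve as their own controls
    stat, p = wilcoxon(pre, post)
    print(f"W = {stat}, p = {p:.4f}")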
Group characteristics of the study participants
All 55 students recruited to phase I of the study were second-year medical students. The group comprised 35 (63.64%) female and 20 (36.36%) male participants. The mean age was 21.78 ± 0.98 years.
Pre-intervention
Most participants (n = 45; 81.82%) rated their self-confidence to perform venipuncture at the beginner level. None of the students rated their self-confidence at the level of 'master.' Most students (n = 39; 70.91%) felt they knew the steps of venipuncture but could not describe them.
Most students (n = 31; 56.36%) felt they could not perform the skill independently, whereas 41.82% felt they could perform venipuncture with major assistance.
Post-intervention
The majority of the students (n = 29; 52.73%) felt their self-confidence increased to level 3 from level 1 (beginner) following SBPT. Although none felt they had reached the level of master following the training, ten students (18.18%) felt their self-confidence increased to level 4. 40% felt they could perform with minor help, while 56.36% rated their self-competency as being able to perform venipuncture under supervision with major assistance. A statistically significant mean increase in self-confidence and self-competency was observed in the participants following SBPT (p < 0.05).
Self-assessment in the clinical setting
When assessed for self-confidence to perform venipuncture one year after venipuncture training in the skills laboratory, most (n = 38; 69.10%) of the participants rated their self-confidence at level 4. In addition, 20% rated their self-confidence as having reached the level of master in performing venipuncture. Most students (n = 44; 80%) felt they could perform the skill independently. Table 1 compares the mean scores of the self-ratings.
IPPI ratings

Post-intervention (simulated setting)
The assessor rated the overall performance of the students as competent, borderline, or incompetent. Most participants were rated borderline (n = 28; 50.91%), and 41.82% were rated competent. Interestingly, four students (7.27%) were rated incompetent to perform venipuncture following simulation-based training; these students were stopped from performing venipuncture after three attempts. They were given remedial training after the conclusion of the training and assessment of the rest of the group. Among these four students, three were rated borderline and one was rated competent when assessed by the same independent assessor after the completion of the remedial training session.
Clinical setting
When assessed one year later, the overall performance in venipuncture of most participants was rated competent (n = 40; 72.7%). Table 2 compares the IPPI ratings given to participants in the simulated setting following SBPT and in the delayed assessment conducted in the clinical setting.
A significant difference was observed between the two settings across all categories and subcategories of the IPPI, as shown in Table 3.
CAT ratings
Rated on a scale of 1-5 (poor-excellent) by SPs on doctor-patient communication, the median score for study participants was 3.0, corresponding to the "fair" response. When rated by patients in the clinical setting, the students were rated as good (median: 4.0; p < 0.01).
Medical students' reaction to simulation-based training
Most students (n = 50) indicated they were satisfied with the learning experience. Students felt that this learning environment motivated learning and that they felt prepared for the clerkships owing to this experience. Most students recommended this training for the rest of the medical students in the pre-clinical phase. They suggested that the training be conducted for other common procedures they would encounter in the clinical setting.
Discussion
Our work is the first to document the impact of SBPT on procedural competency among pre-clerkship medical undergraduates in South Asia. Our cohort study of pre-clerkship medical students undergoing SBPT for venipuncture demonstrated significant improvements in self-assessment and procedural competency, and participants reported enthusiastic and positive attitudes toward SBPT. Although we expected decreased scores for competency and self-assessments in the delayed assessment, we noted improved ratings when competency was assessed one year after training. We were not surprised by the baseline (pre-intervention) self-ratings of confidence and competency of the students before SBPT, since these students were in the pre-clerkship period and therefore had not been exposed to procedural training. Important to note are the ratings of the post-SBPT IPPI. In a BEME systematic review, Issenberg et al. (2005) showed simulator validity and feedback to be critical features of simulation-based training that lead to the "most effective learning" [1, 27, 31]. In our study, the validity of the SBPT was improved by incorporating SPs, which turned the learning exercise into a role play. Role-playing enhances the realism of skills training and aids in learning doctor-patient communication during the training sessions [33]. In addition to incorporating role play, we provided immediate constructive feedback to the students. Both these features may have contributed to the observed results in this study. Previous work evaluating the effectiveness of SBPT identified that the inclusion of several procedures within a single study limited the time and capacity for proper assessment [37]. In addition, incorporating video assessors has inherent logistical challenges in capturing each student's performance in a high-quality video that provides a detailed view of all necessary angles for an accurate procedural skills assessment [37, 63]. Thus, this study was planned to mitigate these issues by using real-time skills assessment of a single procedure, which enabled us to gather robust data.
Although the effects of pre-clerkship SBPT are well established in the West [45, 46], evidence for simulation-based education among pre-clinical medical undergraduates in Asia, where the curricula, resources, and educational opportunities are in stark contrast, is lacking. Of note, a study from East Asia showed a marked improvement in procedural competency with SBPT for clerkship students [64].
Our study demonstrated and confirmed satisfactory technical and communication skills gains among pre-clerkship medical students. The students in this study were enthusiastic and positive toward SBPT, which reflects the existing literature on learner satisfaction with simulation [45]. Pre-clerkship procedural training in Sri Lanka remains ad hoc, and work is currently underway to identify the essential procedural skills competencies required as exit qualifications from the undergraduate medical program. The findings of this study contribute to this endeavor to develop a pre-clerkship procedural training course for undergraduate medical curricula.
The second aspect we wanted to investigate was the durability of skills gained through simulation-based training. Opportunities for re-training and deliberate practice are virtually nonexistent for medical undergraduates in Sri Lanka due to the resource-limited nature of the local context. The few procedures trained during the pre-clinical phase are thus not revisited in the following years. The educators rely on the clerkships for students to learn and practice procedural skills, which start after a 21-month (mean) interval. Therefore, we wanted to investigate whether students would benefit from skills training well before the start of clerkships and how much skill retention we could observe one year after training, a phenomenon investigated in postgraduate medical education [65, 66]. The findings of this study have the potential to inform current educational practices and instructional design, with high relevance to the local context and to similar settings.
Consequently, we were highly surprised by the improved self-assessment and IPPI ratings reported during the delayed assessment. We had expected medical students to be unable to sustain procedural competency when assessed a year later due to a lack of ongoing experience, and, after a one-year gap, we had also expected diminished self-confidence and perceived competency levels. The ratings on communication skills by the SPs differed significantly from those of the patients, a finding we anticipated; it complies with the literature, where patients are reported to rate students' performance more benevolently than SPs [67].
Many studies have investigated skill retention in relation to cardiopulmonary resuscitation or advanced cardiac life support skills following simulation-based training [54, 68]. Studies on simulation-based training of postgraduate doctors in hemodialysis [65] and lumbar puncture [66] have shown skills to be retained one year after training. Notably, in undergraduate medical education, Lee and colleagues recruited ten medical students who had undergone a single simulation-based training on cardiovascular system examination one year earlier [69]. These students had not had further training after the initial training session. The cardiovascular examination skills of the students were evaluated through MCQ (Multiple Choice Questions) and OSCE (Objective Structured Clinical Examination) one year after training. The authors concluded that the students were able to retain the skills learned through simulation-based training for one year despite the lack of training in between. However, more recent studies have shown evidence of steep skill decay following SBPT [70], with recommendations for booster training at intervals to maintain procedural competency [65, 71].
Practicing invasive procedures without proper training poses an ethical issue [13]. It was deemed unethical to request students to perform venipuncture on actual patients with training limited to SBPT on venipuncture one year earlier; thus, we expected skill decay in accordance with previous research [70, 71]. Hence, to overcome the ethical issues arising from requesting students (who had only had SBPT on venipuncture one year earlier and were thus deemed not to have adequate exposure) to perform venipuncture on actual patients, we designed the study so that the students were given four weeks of clinical training during which they would be able to perform venipuncture on actual patients.
The improved ratings reported in the delayed assessment were highly intriguing, as this effect goes against the principle of deliberate practice described by McGaghie [28]. We speculate that the improved ratings received in the delayed assessment cannot be directly related to the effects of SBPT. Due to the design of this study, the participants' skill decay may have been masked by superimposed clinical training, albeit only one month, and low procedural volumes. The number of venipunctures our participants self-reported in the clerkship month prior to the second assessment was quite low (2-5 venipunctures per student); if this is representative of the procedures available to them, they would have had scarce opportunity to benefit from the booster effect of procedural volume on skill refinement [72].
The improved ratings we observed in the delayed assessment are unlikely to be an effect of the SBPT received a year earlier. Although practice opportunities were few, it is possible that the students were motivated to learn and perform better after being recruited to the second phase of the study: knowing they might have to perform for the study may have increased their efforts to learn. We foresaw the possibility of the Hawthorne effect [73] and minimized it by reducing the time between recruitment and assessment to a maximum of five days. We could also argue that the students were accustomed to the local context, where they had to learn and perform procedures with minimal training, which may allow us to generalize this surprising finding to the larger student body. Another possibility is the effect the raging COVID-19 pandemic had on medical students' learning. We conducted this study at the height of the pandemic, when non-COVID admissions were low and clinical teachers were heavily burdened by the increased workload, taxing the typical 'ward classes'. Students thus had more time than usual during these clerkships for hands-on learning, including procedural practice, which may have been reflected in the results of the second phase of this study. We, as researchers, wish to disseminate this unusual finding in the hope that these results may open avenues to discuss current educational practices and what works for different learner communities. Nevertheless, we are cognizant of the many confounding factors that hindered a robust evaluation of procedural skills retention, and a randomized controlled trial is under way to evaluate the same.
Our study findings also comply with the concept of situated learning theory [74]. In the SBPT on venipuncture, the students could simply insert the needle without manipulating the mannequin's skin. In the clinical setting, they encountered soft skin and veins that looked and were positioned differently. These differences in conditions and appearance required the students to assess the patient's veins by touching, checking, and choosing the most appropriate one. Although this differs from learning at the skills laboratory, the students were able to grasp new experiences and construct new knowledge [74].
We used the IPPI to evaluate students' performance in the simulated and clinical settings. The IPPI was developed by Kneebone et al. (2006), based on DOPS (Direct Observation of Procedural Skills), for use in a simulated setting for teaching and assessing clinical procedures where technical skill and professional behavior are given equal value. Although tools such as DOPS and the Observed Structured Assessment of Technical Skills (OSATS) have been validated for procedural assessment in the clinical setting [75,76], the use of such tools at the undergraduate level is limited [77]. Moreover, DOPS and IPPI have been used interchangeably in both simulated and clinical settings [63,76,78]. Some salient features of the IPPI were on par with our study component in the clinical setting (e.g., the patient providing feedback, no engagement between the assessor and the student, and the assessor being unknown to the student). Furthermore, given the pre-post design of our study, we opted to use the IPPI in both settings to facilitate comparisons and draw conclusions. Additionally, Kneebone recommended comparing IPPI with DOPS outcomes [61]. We aim to investigate the alignment between the IPPI and workplace-based DOPS in a cross-over study to extend the valuable work by Kneebone. This would advance our understanding of the relationship between clinical procedures in real and simulated settings.
Our study shed light on the impact of pre-clerkship procedural training through simulation. It also opened room for deliberation about what, why, and how procedural training worked for this largely overlooked study population. We further highlight the practical realities that must be overcome to extend this work and generate robust evidence on the retention of skills acquired through SBPT.
Limitations
Several limitations of our study should be mentioned. This study was carried out in a single institutional setting with a single cohort of students. Although the students served as their own controls, a case-control design may produce further insight into the effectiveness of procedural skill acquisition through this learning modality.
All 55 second-year medical students who took part in the study were volunteers. They might therefore have had a bias toward, or particular interest in, simulation-based training compared with the larger population of students. Hence, the students' positive attitudes toward SBPT reported in this study may not apply to the larger student population.
Our study assessed the effectiveness of venipuncture skills training among pre-clerkship medical students. The results could be applied to the larger domain of procedural skills, especially techniques that build on venipuncture (e.g., cannulation, blood cultures). The generalizability of our results to less related techniques needs to be evaluated by future studies.
Considering that the students had undergone a month of clerkships, the findings of the delayed test say nothing about the effect of the intervention in isolation. The findings of the delayed tests may not be an accurate proxy of skills retention, as many confounding factors, such as intervening clinical training, appear to have masked the expected skill decay. However, we report these unique and early-stage findings in an overlooked line of inquiry to inform future research. Although unexpected, the improved ratings in the delayed assessment require further investigation to understand what factors were at play in generating the findings of this study.
Although the recommendation is to use multiple assessors [79], we had to rely on a single assessor due to the lack of human resources for the study. However, we used the same assessor for both assessments to minimize bias. We did not control the procedural volume during the clerkship month between training and retesting, so the extent to which students had occasion to apply their training is unknown, and this may have affected the ratings received in this study.
Other than age and gender, we did not collect data on the number of attempts needed to perform the procedure accurately, time to completion, complication rates, or past examination performance, which could have been used to extrapolate the possible generalizability of the results.
Conclusions
The SBPT on venipuncture allowed medical students to experience clinical procedural skills during their early years of training. The findings of this study show that pre-clerkship procedural training facilitates the acquisition of clinical skills and improves students' confidence. Most students found SBPT to be a useful and valuable learning method.
Though intriguing and unexpected, the findings we observed in the second phase of this study question the notion of skills retention and the value and adequacy of skill exposure needed to achieve procedural competency. Exploring this potential source of unique findings may hold answers to the questions our study brought forth, with significant implications for current educational practices and for educationists who seem reluctant to change.
Table 1. Comparison of self-ratings.
Table 2. IPPI ratings of the participants in performing venipuncture. Data are presented as numbers (percentages). SS: simulated setting; CS: clinical setting.
Table 3. Comparison of IPPI ratings.
|
v3-fos-license
|
2024-07-07T15:13:25.887Z
|
2024-06-17T00:00:00.000
|
271017680
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.3390/jcm13133949",
"pdf_hash": "188de05e99c1b3d0ec229e12e6839ddd1f3ec617",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41906",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "c758d579fb1dfd0cd089e225ea68a342f694e206",
"year": 2024
}
|
pes2o/s2orc
|
Surgical Management of Complex Ankle Fractures in Patients with Diabetes: A National Retrospective Multicentre Study
Objectives: Patients with ankle fractures associated with diabetes experience more complications following standard open reduction–internal fixation (ORIF) than those without diabetes. Augmented fixation strategies, namely extended ORIF and hindfoot nails (HFNs), may offer better results and early weightbearing in this group. The aim of this study was to define the population of patients with diabetes undergoing primary fixation for ankle fractures. Secondarily, we aimed to assess the utilisation of standard and augmented strategies and the effect of these choices on surgical outcomes, including early post-operative weightbearing and surgical complications. Methods: A national multicentre retrospective cohort study was conducted between January and June 2019 in 56 centres (10 major trauma centres and 46 trauma units) in the United Kingdom; 1360 patients with specifically defined complex ankle fractures were enrolled. The patients' demographics, fixation choices and surgical and functional outcomes were recorded. Statistical analysis was performed to compare high-risk patients with and without diabetes. Results: There were 316 patients in the diabetes cohort, with a mean age of 63.9 years (vs. 49.3 years in the non-diabetes cohort) and a higher proportion with a frailty score > 4 (24% vs. 14%; p < 0.03); 7.5% had documented neuropathy. In the diabetes cohort, 79.7% underwent standard ORIF, 7.1% extended ORIF and 10.2% an HFN, compared to 87.7%, 3.0% and 10.3% in the non-diabetes cohort. Surgical wound complications after standard ORIF were higher in the diabetes cohort (15.1% vs. 8.7%; p < 0.02), but patients with diabetes who underwent augmented techniques showed little difference in surgical outcomes/complications compared to non-diabetes patients, even though early-weightbearing rates were greater than for standard ORIF. Conclusions: Ankle fractures in diabetes occur in older, frailer patients, whilst lower-than-expected neuropathy rates suggest a need for improved assessment. Augmented surgical techniques may allow earlier weightbearing without increasing complications, in keeping with modern guidelines in ankle fracture management.
Introduction
Patients with diabetes account for 13% of those undergoing acute ankle fracture fixation. This group of patients frequently shows evidence of complications of diabetes, such as peripheral neuropathy, nephropathy, peripheral arterial disease, and Charcot neuroarthropathy, which can make surgical decision-making challenging [1,2]. Non-operative management can result in catastrophic complications of immobility and pressure ulceration; thus, surgery is often indicated [3-5]. Unfortunately, even after ankle fracture surgery, patients with diabetes experience more complications than their non-diabetes counterparts, including impaired wound healing, malunion, non-union, soft tissue complications, and Charcot neuroarthropathy [6-9]. The infection rate in operatively treated diabetic ankle fractures can be up to 30% and is potentially higher with poorly controlled diabetes [10,11]. Furthermore, the period of post-surgical immobilisation is often extended from 6 weeks to 8-12 weeks following open reduction and internal fixation (ORIF), because of longstanding concerns regarding poor bone healing in diabetes and the dogma that "patients with diabetes take twice as long to heal" [12-14]. Some surgeons do not allow early weightbearing, to avoid a perceived increase in wound problems or pressure sores from the associated plaster cast [15,16]. Early weightbearing in patients with diabetes may be facilitated by two alternative (termed augmented) surgical approaches to standard ORIF. Firstly, there is "extended ORIF", which encompasses multiple plates or screws that cross from the fibula to the tibia (Figure 1) [17].
An alternative approach, which permanently stabilises the hindfoot, is the insertion of a hindfoot nail (HFN) or tibio-talar-calcaneal (TTC) nail [1,18]. In contrast to ORIF techniques, which utilise direct anatomic reduction and fixation with metal implants to impart absolute stability, HFN utilises indirect fracture reduction and may be used to either span the fracture or fuse the ankle joints [19].
The primary aim of this study was to compare the mode of fixation of patients undergoing primary fixation of complex ankle fractures, stratified by diabetes status. The secondary aims included the assessment of the utilisation of standard and augmented strategies and clinical outcomes, including post-operative weightbearing and surgical complications.
Study Design
A comparative analysis of patients with and without diabetes mellitus, identified as part of a national retrospective multicentre observational study, was conducted using a national collaborative approach in the United Kingdom [20]. This study was reported in line with the STROBE guidelines [21].
Setting
This study included cases from fifty-six centres (10 major trauma centres and 46 trauma units), with representation from all 4 United Kingdom nations. Data were retrospectively collected on all patients presenting between 1 January 2019 and 30 June 2019.
Participants
All patients aged 16 years or older who sustained complex ankle fractures involving the ankle joint (classified as an AO44/AO43 fracture where the majority of the fracture line was within one Muller square of the joint line), as seen in Figure 2, and who underwent single-stage primary definitive fixation during the study period were screened for inclusion. Patients were split into two cohorts based on whether or not they had a pre-existing diagnosis of diabetes mellitus: the diabetes cohort (DC) and the non-diabetes cohort (NDC).
Ankle fractures were defined as complex if one or more of the following were identified (see Supplementary Details S1): pre-existing or concurrent diagnoses of diabetes with or without neuropathy, rheumatoid arthritis, alcoholism, or cognitive impairment (including dementia). Fractures were also deemed complex if presenting as open fractures or associated with polytrauma (see Supplementary Details S2). Three different groups of surgical techniques were identified: standard ORIF using standard AO principles, extended ORIF and HFN. These were split into 2 surgical approaches: fixation or fusion. All patients aged under 16 years, and cases in which the fracture extended greater than one Muller square from the joint, were excluded. Patients undergoing staged fixation or definitive external or frame fixation to manage soft tissue injuries were also excluded.
Outcome Measures
The primary outcome of this study was the mode of operative fixation stratified by diabetes status. The secondary outcomes included weightbearing status and complication rate.
Data Sources
Data were retrospectively obtained from each centre on patient and fracture characteristics, fixation choice and patient outcomes. Patient data included age, sex, laterality of injury, American Society of Anaesthesiologists (ASA) grade and pre-operative factors (level of trauma unit receiving the patient, fracture classification, open or closed fracture, polytrauma). Comorbidities were recorded and cross-referenced with the patient medical records (diabetes mellitus, peripheral neuropathy, rheumatoid arthritis, alcoholism, cognitive impairment and smoking status), together with the abbreviated mental test score (AMTS) or prior mini mental state examination (MMSE), clinical frailty score (CFS) (see Supplementary Details S1), pre-operative mobility (e.g., unaided mobilisation, walks with one stick, two walking sticks or walking/Zimmer frame) and mental illness [22].
The operative factors were the method of definitive fixation: standard open reduction-internal fixation (ORIF), extended ORIF, augmented internal fixation, HFN and the use of external fixation/frames. The post-operative factors included immediate weightbearing status, divided into full weightbearing (FWB, i.e., all weight), partial weightbearing (PWB, i.e., where a patient is limited in the amount of weight that they can put down, e.g., toe touch) and non-weightbearing (NWB, i.e., no weight at all). Complications of surgery within 1 year were recorded, including wound breakdown, wound infection, deep vein thrombosis (DVT), pulmonary embolism (PE), failure of construct, further surgical procedures and the removal of metalwork. Patients were followed up until discharged or for a maximum of 18 months.
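To make the coding of these variables concrete, the sketch below shows one way the per-patient record described above might be represented. The field and category names are our own illustration, not the study's actual collection tool (which is given in Supplementary Details S2).

# Hypothetical encoding of the per-patient record described above.
# Field and category names are illustrative, not the study's actual tool.
from dataclasses import dataclass, field
from enum import Enum

class Fixation(Enum):
    STANDARD_ORIF = "standard ORIF"
    EXTENDED_ORIF = "extended ORIF"
    HFN_FIXATION = "hindfoot nail (fixation)"
    HFN_FUSION = "hindfoot nail (fusion)"

class Weightbearing(Enum):
    FWB = "full weightbearing"     # all weight
    PWB = "partial weightbearing"  # e.g. toe touch
    NWB = "non-weightbearing"      # no weight at all

@dataclass
class AnkleFractureCase:
    age: int
    sex: str
    asa_grade: int                  # ASA 1-4
    frailty_score: int              # Clinical Frailty Score (CFS)
    diabetes: bool
    peripheral_neuropathy: bool
    open_fracture: bool
    fixation: Fixation
    immediate_weightbearing: Weightbearing
    complications: list[str] = field(default_factory=list)

# Example record (values invented for illustration only):
case = AnkleFractureCase(age=67, sex="F", asa_grade=3, frailty_score=5,
                         diabetes=True, peripheral_neuropathy=False,
                         open_fracture=False,
                         fixation=Fixation.HFN_FIXATION,
                         immediate_weightbearing=Weightbearing.FWB)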
Bias
Steps were taken to reduce bias. The statistical analysis was conducted by blinded assessors, and a multi-centre approach was utilised to reduce the risk of selection bias. Standardised data collection forms with clearly defined inclusion/exclusion criteria and quality assurance were utilised to minimise variations in data collection.
Study Size
A study size was not precalculated in the design of this study. The study was designed to assess a large number of patients in order to provide a meaningful representation of the population under study.
Ethical Approval and Funding
The NHS Health Research Authority decision tool was used, and this project was deemed not to be classified as clinical research requiring formal ethical approval [23]. Each centre was required to submit confirmation of local audit office approval and name a substantive consultant supervisor. There was no funding to support this study, and the study lead (RA) has no conflicts of interest to declare.
Statistical Methods
Baseline variables are described as frequencies and percentages for categorical data and as means and SDs for continuous variables. Crude comparisons between participants with and without diabetes for the outcomes of interest were assessed using an independent-samples t-test for continuous variables with normal distributions, whereas categorical variables were compared using a chi-squared (χ2) test; χ2 analysis was also used to assess significance between treatment groups. Matching for baseline covariates such as age > 65 years, ASA > 3, frailty (CFS > 4) and ankle fracture type (AO43/44) allowed for a more detailed comparison of primary surgical fixation technique outcomes in patients with/without diabetes. Statistical significance was set at p < 0.05 for all analyses. Statistical analysis was performed using SPSS (version 26, IBM, Armonk, NY, USA).
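As a concrete illustration of the categorical comparisons described above, the short sketch below applies a chi-squared test, falling back to Fisher's exact test when a cell count is below 5 (the rule stated in the notes to Table 2). The 2x2 counts are illustrative placeholders, not the study data.

# Illustrative 2x2 comparison of wound complications by cohort; the
# counts below are placeholders, not the study's actual data.
from scipy.stats import chi2_contingency, fisher_exact

table = [[37, 207],    # diabetes cohort: complication / no complication
         [74, 777]]    # non-diabetes cohort

# Chi-squared test when all cells are >= 5, Fisher's exact test otherwise,
# mirroring the rule stated for Table 2.
if min(min(row) for row in table) >= 5:
    _, p, _, _ = chi2_contingency(table)
else:
    _, p = fisher_exact(table)

print(f"p = {p:.4f}; significant at p < 0.05: {p < 0.05}")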
Results
Fifty-six centres with representation from all four UK nations participated in this project, comprising 10 major trauma centres (MTCs) and 46 trauma units (TUs), which contributed 517 and 843 cases, respectively, during the study period. Overall, this study included 1360 fractures, with complete data available for 1222 fractures (a data completeness rate of 89.9%). The mean age was 53.9 years (SD ± 19 years) with a male/female ratio of 1:1.3. The median follow-up time for the reported outcomes was 7.8 months post-operatively (range 1.2-18 months).
In total, 316 patients with ankle fractures were reported to have a pre-existing diagnosis of diabetes. Ten of these patients had either definitive external fixation or staged reconstruction to manage soft tissue injuries and were excluded from further analysis, as single primary fixation was not undertaken. The baseline demographics and characteristics of each cohort can be seen in Table 1. The primary outcome of this study was the mode of operative fixation stratified by diabetes status, as seen in Figure 2 and Table 2. (Table 2 notes: DC, diabetes cohort; NDC, non-diabetes cohort. * Observed statistical difference (p < 0.05) between diabetes and non-diabetes groups for the given technique; ** observed difference indicative of a trend (p = 0.06-0.08). Analysis was undertaken using the chi-square test for cells with values > 5; if the value of a cell was < 5, Fisher's exact test was used.)
HFN
In total, 26 patients with diabetes underwent HFN, and early post-operative weightbearing was more common in this group (76.9%; n = 20/26) (Table 3) compared to all other groups (12.2%; n = 37/303; p < 0.002). Within this cohort, two groups existed due to different surgical techniques: fusion (n = 10) and fixation (n = 16). HFN fusion was observed to have a greater number of wound complications and more often required a further surgical procedure compared to HFN fixation in patients with diabetes. Overall, no significant differences in wound complication (infection or breakdown) rates or in the need for further surgery and the removal of metalwork were observed between the diabetes and non-diabetes cohorts undergoing HFN fixation (n = 10 vs. 62) or fusion (n = 16 vs. 23) (Table 2). Adjusting for age (> 65) and ASA (1/2 vs. 3/4), we compared techniques and functional and surgical outcomes between patients considered to be high risk (age > 65 and ASA 3/4) with and without diabetes. In total, n = 116/252 patients with an ASA of 3/4, an age > 65 and diabetes had standard ORIF fixation; they were more likely to be non-weightbearing in the immediate post-operative period (p < 0.03) and had a higher prevalence of wound breakdown (p < 0.016) compared to those without diabetes (n = 146/812). These differences in complication rates were not observed when comparing those with and without diabetes who underwent extended ORIF and/or either HFN technique, whilst more patients in these groups were partial or full weightbearing in the immediate post-operative period (p < 0.04).
Independent of surgery type and diabetes status, patients were more likely to be weightbearing if treated in a major trauma centre (MTC) as opposed to a trauma unit (17.6% vs. 3.8%) (Table 3). HFN was equally likely to be performed at an MTC or a trauma unit, with a greater tendency to allow either partial or full weightbearing than with extended fixation techniques (Table 3). There were no observable differences in complication rates, treatment times or outcomes between the two types of institution.
Discussion
In this study of complex ankle fractures treated with primary internal fixation, we observed that the diabetes cohort was frailer and older, with poorer pre-operative mobility, compared with non-diabetes patients. Those with diabetes were also more likely to have sustained an ankle fracture without fracture extension into the tibial shaft compared to those without diabetes, in keeping with low-velocity trauma. The incidence of peripheral neuropathy was higher in diabetic patients. Contrary to current guidelines, most patients were non-weightbearing post-surgery.
The Significance of Diabetes in the Surgical Management of Ankle Fractures
After surgery, the standard-ORIF group with diabetes exhibited higher rates of surgical complications when compared to those without diabetes, whereas the augmented-fixation groups did not experience such an increase. Furthermore, after standard ORIF, wound infection rates were higher in the cohort with diabetes (12.3% vs. 7.6%). Wukich et al. assessed 1000 patients undergoing elective surgery and found a similar surgical site infection rate of 13.2% in diabetes, compared to 2.8% in non-diabetes patients [24]. SooHoo et al. (2009) reported that the odds ratio of major amputation in diabetes patients following ankle ORIF was extremely high (3.86% compared to 0.16% in non-diabetes patients, resulting in an OR of 27.6) [25]. However, no amputations were recorded in the present study; this could be due to the median follow-up of around 7 months.
The Need for a Multi-Disciplinary Approach
This study has demonstrated that the diabetic population undergoing primary fixation is older and frailer and would thus benefit from multidisciplinary care, ranging from inpatient diabetic foot care to surgical treatment. As many patients still need long-term orthotics or bracing, appropriate ongoing glycaemic control and prevention of diabetic foot syndrome, multidisciplinary care has become more important [26]. Early MDT input is necessary, and acute vascular support may be needed if peripheral macro-angiopathy is suspected [6,27-29].
Quantifying the Extent of Diabetes Complications
Multidisciplinary involvement would also facilitate the assessment of diabetic complications in patients with diabetes presenting with an ankle fracture. It is important to know the extent of diabetic complications, especially peripheral neuropathy, as it is associated with surgical complications, longer in-hospital stays and increased costs of foot and ankle surgery compared to those without diabetes [6-9].
Low rates of peripheral neuropathy in diabetes were reported in the present study. This was surprising, as the prevalence of peripheral neuropathy has been reported to be as high as 51% in patients with diabetes [30]. It is therefore likely that this finding was under-reported. A recent audit indicated significant variation in practice in the clinical documentation of neurovascular status, despite current best practice guidelines in ankle fracture management advising examination for peripheral neuropathy [27,31,32]. No standardisation of processes or guidance on measuring it in acute fractures currently exists. The detection of diabetic peripheral neuropathy is challenging when a fracture is rapidly immobilised in a plaster to prevent further injury [33,34].
The detection of peripheral neuropathy may help to determine the mode of surgical fixation by prompting the use of augmented fixation techniques, which can promote early mobilisation and avoid the higher complication rates seen with standard ORIF [1]. Meanwhile, as the severity of neuropathy is directly related to poor glycaemic control, HbA1C may be a surrogate marker [35,36]. Level I evidence has shown surgical site infections to be independently associated with both peripheral neuropathy and HbA1C > 8%, whilst an HbA1C value greater than 6.5% in diabetic patients sustaining ankle fractures has been correlated with poorer radiological and clinical outcomes [37,38]. Diabetes is associated with slower fracture healing, which traditionally leads to prolonged immobilisation with restricted weightbearing [4,7,10]. Non-weightbearing increases the risk of pressure sores, pneumonia and venous thromboembolic events. The short-term surgical outcomes of extended-ORIF fixation strategies and HFN were similar in diabetic and non-diabetic patients. However, the need for further surgery was highest in the HFN-fusion group. Although the utilisation of augmented techniques may not always allow earlier weightbearing, it may confer the ability to stand and transfer in this frail and elderly cohort with diabetes, akin to the benefits of early mobilisation demonstrated in geriatric hip fracture treatment [35,39-42]. The comparison of surgical techniques here shows that the utilisation of extended-ORIF/HFN techniques resulted in no difference in complication rates between the high-risk (ASA 3/4) and low-risk (ASA 1/2) groups with or without diabetes. This suggests that these techniques are safe in all populations and reduce the risk of complications when compared to standard ORIF, allowing early mobilisation [41]. There is a need for further studies with detailed data to perform logistic regression and establish the strength of association of the risk factors with outcomes for each surgery.
Limitations
Notable limitations of a retrospective study include selection and reporting bias. Surgical decision making, particularly with regard to the choice of HFN vs. internal fixation, may have been determined by factors including clinical knowledge and experience and soft tissue status, which were not captured. Complications are likely to be under-reported due to variations in clinical documentation and follow-up patterns. Larger numbers are required to exclude the effects of confounding factors and provide statistical significance in assessing individual inclusion criteria and the effects of prolonged protected weightbearing with the different surgical techniques. The lack of functional outcome measures and biochemical markers (HbA1C) and the limited follow-up length limit the generalisability of our results and do not provide a long-term assessment of the treatment arms. Further studies are required to develop appropriate algorithms for patient selection and understand surgical choices, especially if joints are being immobilised. The key aim of this study was to understand the current state of practice in the UK with regard to these complex fractures in order to guide the design and development of future studies. It is important for readers to note that this study did not control for or report other variables that can affect clinical outcomes, such as the use of corticosteroids, antibiotic prophylaxis or the method of anaesthesia.
Conclusions
Ankle fractures in diabetes occur more often in older and frailer patients, with higher surgical complication rates with standard fixation techniques compared to similar patients without diabetes. A multidisciplinary approach similar to the treatment of hip fractures, incorporating orthogeriatricians and diabetic foot teams, should be adopted. Careful assessment of neuropathy and other known risk factors should guide surgical decision making regarding augmented fixation techniques, which may facilitate early weightbearing.
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/jcm13133949/s1. Supplementary Details S1: Details regarding the definition of 'complex ankle fracture'; Supplementary Details S2: Patient data collection tool and associated guidance.
Author Contributions: R.A., original draft and preparation of manuscript, design and methodology, analysis, revision of manuscript. C.W., original draft, data collection and analysis, preparation of manuscript. T.L.L., data collection and analysis, revision of manuscript. T.D.S., methodology, data collection and analysis, revision of manuscript. D.C., methodology, data collection and analysis. S.P.T., data collection and analysis, project management. M.E., revision and preparation of manuscript. M.M., review of manuscript. I.L.H.R., design and methodology, data collection and analysis, revision of manuscript. All authors have read and agreed to the published version of the manuscript.
Funding: There was no funding to support this study. The authors declare that no funds, grants, or other support were received during the preparation of this manuscript. The authors have no relevant financial or non-financial interests to disclose.
Institutional Review Board Statement: Ethical approval was not sought for the present study because the NHS Health Research Authority decision tool was used, and this project was deemed not to be classified as clinical research requiring formal ethical approval. Each centre was required to submit confirmation of local audit office approval and name a substantive consultant supervisor.
Informed Consent Statement: Not applicable.
4.4. Challenging Surgical Dogma: The Potential of HFN and Extended ORIF in Limiting Post-Operative Immobilisation/Non-Weightbearing in Diabetes
Table 1. An overall summary of the characteristics of patients with complex ankle fractures, comparing a group of patients with diabetes (n = 306) and a control group of patients without diabetes (n = 970).
Table 2. Comparison of fixation-technique-specific outcomes between the diabetes and non-diabetes groups.
|
v3-fos-license
|
2020-03-19T10:17:18.381Z
|
2020-03-18T00:00:00.000
|
230550215
|
{
"extfieldsofstudy": [
"Geology"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/jqs.3264",
"pdf_hash": "fe8215e11f17291739a15b3ec8a6acfcd68beab5",
"pdf_src": "Wiley",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41907",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "ec112484b485722dfc9fe7551101b4b199a630cb",
"year": 2021
}
|
pes2o/s2orc
|
Small shards and long distances — three cryptotephra layers from the Nahe palaeolake including the first discovery of Laacher See Tephra in Schleswig‐Holstein (Germany)
Investigations of Lateglacial to Early Holocene lake sediments from the Nahe palaeolake (northern Germany) provided a high‐resolution palynological record. To increase the temporal resolution of the record a targeted search for cryptotephra was carried out on the basis of pollen stratigraphy. Three cryptotephra horizons were detected and geochemically identified as G10ka series tephra (a Saksunarvatn Ash), Vedde Ash and Laacher See Tephra. Here we present the first geochemically confirmed finding of the ash from the Laacher See Eruption in Schleswig‐Holstein—extending the so far detected fallout fan of the eruption further to the north‐west. These finds enable direct stratigraphical correlations and underline the potential of the site for further investigations.
Introduction
Schleswig-Holstein, the northernmost German federal state, holds a key position in palaeo-environmental as well as archaeological research of the Lateglacial and Early Holocene. This is due to the fact that the region provided a north-facing corridor after the retreat of the glaciers. Despite numerous palaeo-environmental investigations (and literature cited therein), a correlation of the individual records is difficult, because age modelling is mainly missing or because of hiatuses (Usinger 1981). The analyses of sediment cores from the Nahe palaeolake (NAH; Dreibrodt et al. 2020; Krüger et al. 2020) fill a gap in Lateglacial/Early Holocene palaeo-environmental research in Schleswig-Holstein. For the first time, a complete Lateglacial to Early Holocene sequence is described without being affected by the Allerød-Younger Dryas hiatus that had been documented for numerous lake sediments in northern Germany and Denmark (Krüger and Damrath 2019; Bennike et al. 2004; Usinger 1981). Furthermore, a section of the sediment is annually laminated, allowing for a high resolution of the temporal scale (Dreibrodt et al. 2020). Therefore, an attempt was made to identify tephra layers as additional chronological horizons to supplement radiocarbon dating of macrofossils, as volcanic ash layers mainly represent single events. This would, moreover, provide the opportunity to directly correlate the record with important European key sites of Lateglacial-Early Holocene research. However, while recording and describing the NAH sediment sequence, no visible ash horizons could be detected.
The advancements in search techniques for non-visible volcanic ash beds, or cryptotephra, have widened the possibilities of searching for additional chronological markers in sediment archives (Blockley et al. 2005; Lowe and Hunt 2001; Turney 1998; Turney et al. 2004). In this way the detection of non-visible tephra horizons has been extended to further distal sites throughout Europe (Blockley et al. 2007; Bramham-Law et al. 2013; Haflidason et al. 2018; Lane et al. 2012b; Larsson and Wastegård 2018; Wastegård and Boygle 2012; Wastegård et al. 2000; Wulf et al. 2013). Palaeo-environmental studies provide the possibility of combining specific tephra horizons with pollen stratigraphy. Conversely, this implies that pollen stratigraphy can be used to locate the position of non-visible ash beds in sediment sequences. To successively increase the temporal resolution of the NAH sediment record a targeted search for well-dated cryptotephra horizons was performed based on their expected position in pollen stratigraphy. We aimed at searching for cryptotephra of the Vedde Ash (VA) and Laacher See Tephra (LST). Apparently, the Saksunarvatn Ash (SA) is the product of a series of eruptions in the Grímsvötn system and hence does not represent one fixed event date (Davies et al. 2012; Harning et al. 2018; Óladóttir et al. 2020). However, this study also attempts to identify the SA/G10ka series tephra at least as an event interval (Óladóttir et al. 2020).
The SA was erupted in the Grímsvötn volcanic system in the Eastern Volcanic Zone on Iceland. The ash has been described from numerous sites in Northern Europe, from Iceland, the British Isles, Norway, the Faroe Islands, Greenland and the North Atlantic (Birks et al. 1996; Björck et al. 1992; Grönvold et al. 1995; Harning et al. 2018; Mangerud et al. 1984; Timms et al. 2017). In Germany, the ash has been recorded at Potremser Moor in Mecklenburg Western Pomerania (Bramham-Law et al. 2013) and in Schleswig-Holstein at two locations 30 and 50 km north-east of the Nahe palaeolake, Lake Plußsee and Lake Muggesfeld (Merkt et al. 1993). Furthermore, the visual evaluation of thin sections from Lake Belau revealed brownish cryptotephra shards that have been assigned to the G10ka series tephra on the basis of their morphology (pers. com. W. Dörfler). The aims of this study were:
1. to detect cryptotephra horizons in NAH sediment cores based on their expected occurrence in pollen stratigraphy;
2. to geochemically fingerprint the tephra layers;
3. to gain additional chronological tie-points for the age-depth model of the NAH record;
4. to underline the potential of future research into NAH sediments regarding cryptotephra layer identification.
Study site
The NAH is located in Schleswig-Holstein about 30 km north of Hamburg (Germany). The basin of the former lake was part of a larger glacial lake system and is separated from further elongated incised lakes by two narrow sand ridges to the north-west and south-east (Smed 1998; Woldstedt 1935, 1954).
To the south-east, Lake Itzstedt is the water-bearing remnant of the lake system.
The size of the palaeolake surface was about 16 ha during the Lateglacial (Fig. 1). The terrestrialisation process was completed in the course of the late Holocene. Today, the river Rönne flows in the centre of the still existing depression and thereby follows the course of the former lake in a northwesterly direction before turning southwards and draining off into the river Alster. The area is today used as pasture and partly forested by alder, birch and willow. The coring location is situated in the formerly deepest part of the incised lake (53°48.711' N, 10°8.082' E).
At the coring location, approximately 1.6 m of fen peat overlie a sequence of 12.2 m of predominantly detrital gyttja. The shift from the Lateglacial to the Holocene is reflected in a gradual shift from clayish to calcareous and finally detrital gyttja. The mainly organic Lateglacial deposits of 1.6 m thickness contain a 50 cm sequence of very fine annual lamination (Dreibrodt et al. 2020; Krüger et al. 2020). The sediment cores reached glacial sand at 15.8 m. The chronology is based on AMS radiocarbon dates, pollen events, varve counting and additionally on the identified tephra layers (Dreibrodt et al. 2020).
Field methods and sampling
The coring campaign in the dry centre of the Nahe palaeolake took place in October 2017. A modified Livingston piston corer (Mingram et al. 2007), the so-called Usinger corer, was used to extract the sediment cores. Two overlapping sediment sequences with a diameter of 80 mm and 16 m in length were recovered. Each 1 m segment was cut longitudinally and stored as well as processed at the Institute for Pre- and Protohistoric Archaeology in Kiel, Germany. In order to connect the core sequences, a series of distinct layers and stratigraphic marker horizons were defined in the parallel cores. In this way, a composite core was constructed, providing a continuous record without gaps (Dörfler et al. 2012).
The interdisciplinary approach of the study requires that the results of different methods must be easily correlated across depths. Therefore, a grid of 5 mm step size was created spanning the lower 5 m of the sequence. Each sample was labelled according to consecutive numbers (953 potential samples in total, 11.10-15.86 m below the surface).
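As a cross-check on this grid, the short sketch below reproduces the stated figures: 5 mm steps across 11.10-15.86 m give 953 positions, and, assuming sample n is the 5 mm slice whose bottom lies at top + n x step, the implied depths match those reported below for NAH-460, NAH-466 and NAH-711.

# Cross-check of the 5 mm sampling grid described above. The bottom-depth
# convention is our inference; it reproduces the depths reported in the
# Results for NAH-460 (13.400 m), NAH-466 (13.430 m) and NAH-711 (14.655 m).
top_m, bottom_m, step_m = 11.10, 15.86, 0.005

n_positions = round((bottom_m - top_m) / step_m) + 1
print(n_positions)  # 953 potential sample positions

def bottom_depth(n: int) -> float:
    """Bottom depth (m below surface) of consecutive sample number n."""
    return top_m + n * step_m

for n in (460, 466, 711):
    print(f"NAH-{n}: bottom at {bottom_depth(n):.3f} m")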
Pollen analysis
Samples for pollen preparation were mostly taken every centimetre, but at least every fourth centimetre. Sample preparation was carried out according to standard techniques (Erdtman 1960; Faegri and Iversen 1989). Lycopodium spore tablets were added to enable the calculation of pollen concentrations (Stockmarr 1971). Pollen counting was performed at a total magnification of ×400 for routine counting and ×1000 for critical objects. A pollen sum of at least 550 TTP (total terrestrial pollen) per sample was achieved. Pollen identification followed mainly Beug (2004) as well as Moore et al. (1991). The reference collection at the Institute of Pre- and Protohistoric Archaeology in Kiel was further consulted. The results were visualised using the CountPol software (I. Feeser, Kiel University) as well as Inkscape (ver. 0.92.4). The general results of the pollen analysis of the complete section are presented by Krüger et al. (2020).
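For readers unfamiliar with the marker-grain method, the sketch below shows the standard Stockmarr (1971) calculation behind the Lycopodium spike. The counts and the spores-per-tablet figure are illustrative placeholders, as batch values vary.

# Standard marker-grain concentration estimate (Stockmarr 1971).
# Counts and the spores-per-tablet value are illustrative placeholders.
def pollen_concentration(pollen_counted: int,
                         marker_counted: int,
                         marker_added: int,
                         sample_size: float) -> float:
    """Pollen concentration per unit of sample (grains/cm^3 or grains/g,
    depending on how sample_size is measured)."""
    return pollen_counted * marker_added / (marker_counted * sample_size)

# e.g. 550 terrestrial pollen grains counted against 120 Lycopodium
# spores, one tablet of ~18,584 spores added to a 1 cm^3 subsample:
print(f"{pollen_concentration(550, 120, 18584, 1.0):,.0f} grains/cm^3")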
Tephra analysis and identification
On the basis of the preceding pollen analysis and a preliminary comparison of these results with investigations in northern central Europe, certain sequences were selected for a search for cryptotephra (Fig. 2). The selected depths were sampled in 2 cm steps (Lateglacial sequence) and 1 cm steps (Holocene sequence), resulting in sample weights between 5 and 11 g. The chemical preparation included treatment with HCl to dissolve carbonates, concentrated H2SO4 and HNO3 to remove the organic material from the samples, and KOH (10%) to eliminate diatom silicates. Moreover, a density separation using sodium polytungstate was applied to separate heavy mineral particles of more than 2.7 g/cm3 from the lighter particles (Turney 1998). The lighter fraction was mounted in glycerol-gelatine and subsequently analysed using a light microscope under bright field at ×250 magnification. Cross-polarisation was additionally used to check critical particles. The identified cryptotephra horizons were labelled according to sample numbers. As one cryptotephra sample spans 2 cm (equalling four samples on the composite core), the bottom sample numbers according to the composite core were utilised for the tephra samples.
To gain material for electron-microprobe analyses (EMPA) the mounted material was re-dissolved and embedded in synthetic resin. Subsequently polished thin sections were prepared from these slides.
Electron-microprobe analyses
The major-and minor-element geochemistry of the glass shards was determined with the JEOL JXA 8200 electron microprobe equipped with five wavelength dispersive spectrometers at GEOMAR (Kiel, Germany). The analytical conditions were 15 kV accelerating voltage, 6 nA current and 5 μm electron beam size. For calibration, a set of natural reference materials from the Smithsonian collection was used, the quality of the calibration was tracked by running the Lipari obsidian standard as a secondary glass standard. The laboratory was part of the INTAV (International Quaternary Association's focus group on tephrochronology) comparison of tephrochronology laboratories, which confirmed the high quality of the major-element data produced by the instrument set-up (Kuehn et al. 2011). For further details on the analytical procedure see Dörfler et al. (2012) and Zanon et al. (2019). Analyses with totals <95% were discarded. All data are plotted normalised to 100% volatile free. Average analyses are shown in Tables 1-3.
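A minimal sketch of the screening and normalisation steps just described: analyses with low oxide totals are discarded and the remainder renormalised to 100% on a volatile-free basis. The acceptance threshold was 95% here and 96 wt% for the NAH-711 population reported below; the example oxide values are illustrative, not measured data.

# Sketch of EMPA screening and volatile-free normalisation as described
# above; oxide values below are illustrative, not measured data.
OXIDES = ["SiO2", "TiO2", "Al2O3", "FeO", "MnO",
          "MgO", "CaO", "Na2O", "K2O", "P2O5"]

def normalise(analysis: dict[str, float],
              min_total: float = 95.0) -> dict[str, float] | None:
    """Return the analysis renormalised to 100 wt% (volatile free),
    or None if the raw total falls below the acceptance threshold."""
    total = sum(analysis.get(ox, 0.0) for ox in OXIDES)
    if total < min_total:
        return None  # poor analysis: discarded
    return {ox: 100.0 * val / total for ox, val in analysis.items()}

shard = {"SiO2": 68.9, "TiO2": 0.3, "Al2O3": 12.8, "FeO": 3.1,
         "MnO": 0.1, "MgO": 0.1, "CaO": 1.2, "Na2O": 5.0,
         "K2O": 3.5, "P2O5": 0.05}
print(normalise(shard))  # SiO2 renormalises to ~72.5 wt%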
Results and discussion
A total of 39 sediment samples were subjected to the search for cryptotephra. The targeted search resulted in three clear cryptotephra shard concentrations of more than 350 shards/g (or 150 shards/g at generally lower concentrations). The outcome of the targeted search is displayed in Fig. 2. Shard sizes range between 15 and 75 μm (Fig. 3). Samples with peak shard concentrations were analysed for the geochemical composition of the shard population with EMPA.
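The sketch below applies these thresholds to the shard counts reported in the subsections that follow (about 450, 770, 395 and 160 shards/g); treating the NAH-711 horizon under the lower 150 shards/g threshold reflects the generally lower concentrations in that part of the sequence, as stated above.

# Flag candidate tephra horizons from shards-per-gram counts, using the
# thresholds stated above. Counts are taken from the subsections below.
counts = {460: 450, 466: 770, 633: 395, 711: 160}
# NAH-711 sits in a sequence with generally lower concentrations, so the
# 150 shards/g threshold applies there; 350 shards/g elsewhere.
thresholds = {460: 350, 466: 350, 633: 350, 711: 150}

peaks = [n for n, c in counts.items() if c > thresholds[n]]
print(peaks)  # [460, 466, 633, 711]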
Identification, characterisation and correlation of tephra layers
NAH-466
The Saksunarvatn eruption(s) can be assigned to the phase of rapidly increasing Corylus (hazel) pollen values during the Boreal period in northern central Europe (Bramham-Law et al. 2013; Merkt et al. 1993). As this increase is strikingly and unequivocally reflected in the NAH record, a rough determination of the position of the ash could easily be made.
Here, the targeted search for cryptotephra revealed a two-part maximum of shards (Fig. 2) in samples NAH-460 (13.390-13.400 m) and NAH-466 (13.420-13.430 m). Both peaks (NAH-460 and 466) correlate to the rapid increase and first Corylus pollen maximum of the (biostratigraphical) Boreal period.
Shard counts of sample NAH-460 yielded about 450 shards/g, and NAH-466 about 770 shards/g. The shards in both peak occurrences are exclusively brown in colour (Fig. 3a-b). The morphology of the glass shards is blocky and angular with curved shards, similar to that described by Merkt et al. (1993). The size spectrum ranges from 15 to 40 μm. Single shards larger than 60 μm appeared rarely. Shards from sample NAH-466 were analysed for geochemical composition.
The chemical composition is consistent with the SA that erupted from the Grímsvötn volcanic system on Iceland (Andrews et al. 2002; Bramham-Law et al. 2013). Tephra of this age, with brownish glass shards and the same major- and minor-element composition, has been described at several terrestrial sites in Northern Europe, in marine cores in the North Atlantic and also in the GRIP ice core (Fig. 5a).
However, a recent review by Óladóttir et al. (2020) emphasised that the SA at different locations derives from various eruptive pulses of the Grímsvötn volcanic system. Tephras associated with the SA G10ka tephra series are of a very similar chemical composition (Fig. 5b-e).
Due to the size of the glass shards in the NAH tephra and the still-missing interlaboratory comparison tests for calibrating the analytical machines, we abstained from analysing trace-element compositions.
The recently published data on Saksunarvatn-type tephra from Havnardalsmyren on the Faroe Islands revealed up to five tephras from the Grímsvötn volcanic system, deposited within an interval of about 400 years. Major-element geochemistry shows that two older cryptotephras (Havn 3 and 4) can be separated, by lower MgO relative to CaO content, from a visible Saksunarvatn tephra and two younger ashes. The glass shards from NAH-466 fall into the field of the high-Mg Saksunarvatn tephra (Fig. 5).
The targeted search for SA revealed two separate peaks in shard concentrations that correlate to the phase of the rapid, main expansion of Corylus. Here, shard morphology has been utilised to distinguish between the two peaks. The lower peak contained brown platy and curvilinear shards, whereas the upper mainly comprised colourless shards with occasionally closed vesicles but mainly irregular morphologies. The lower and more distinct peak has been geochemically identified as a SA (Bramham-Law et al. 2013). However, the double peak observed in the NAH record does not contain separate peaks of morphologically different shards. The geochemical composition of the lower and more pronounced peak clearly reflects a known SA fingerprint. Here, one explanation could be two temporally close successive eruptions of the same volcanic system or two separate basaltic volcanic systems.
(i) There is more than one known eruption from the Grímsvötn volcanic system that is linked to the SA: Óladóttir et al. (2020) demonstrated that there were at least three, potentially even seven, eruptions from the Grímsvötn volcanic system.
One ash plume was distributed towards the north-west of the system and was recorded in the Greenland ice cores (Rasmussen et al. 2006). At least one other dispersal envelope was directed towards the south-east and has been identified at a variety of sites in continental Europe (Bramham-Law et al. 2013; Jones et al. 2018; Lohne et al. 2013; Merkt et al. 1993; Wulf et al. 2016). The SA recorded in the NAH is most likely related to this south-east dispersal fan.
In this respect, only single cryptotephra horizons associated with the SA have been recorded in annually laminated sequences in northern Germany (Dörfler et al. 2012; Jones et al. 2018; Merkt et al. 1993). One of those sequences derives even from Lake Poggensee (Zanon et al. 2019), a lake less than 30 km east of the Nahe palaeolake.
(ii) The palaeo-environmental record of the NAH revealed the presence of increasing amounts of undefined shells and shell fragments at the depth that corresponds to the SA. These can potentially be associated with small-scale lake-level fluctuations within the Nahe palaeolake. This assumption is in line with indicators of small-scale rearrangement. Therefore, it is probable that the two separate peaks can be explained by the redeposition of material or a secondary inwash of shards, respectively. Consequently, redeposition could be considered a determining factor for the observed distribution pattern of cryptotephra in the sediment sequence (NAH-460 and NAH-466).
The SA is intended to provide an additional chronological tie-point for the age-depth model of the NAH sequence. Therefore, the depth to which the tephra is to be assigned needs to be clarified. In this respect, the NAH record could be correlated to the pollen sequence from nearby Lake Poggensee (POG; M. Zanon pers. com.). Here, the SA is embedded as a single visible layer in annually laminated sediments.
The results of the palynological analysis revealed that the Corylus pollen curves of NAH and POG match closely (pers. com. M. Zanon). This is not surprising, as the distance between the sites is less than 30 km and the size as well as the catchment of the two lakes would be approximately equal. In the sediments of Lake Poggensee, the SA occupies exactly the pollen-stratigraphic position where the lower SA peak was detected in the NAH record. Considering this, the event horizon is assigned to the depth of the lower shard peak (NAH-466).
NAH-633
(Figure caption: classification after Le Bas et al. (1986). The geochemical composition of single glass shards from selected sites is given for comparison: for the Saksunarvatn Ash, the composition in nearby Hämelsee (Jones et al., 2018) and from Grønlia on the Fosen peninsula, Norway (Lind et al., 2013); for the rhyolitic part of the Vedde Ash, the composition in Hämelsee (Jones et al., 2018) and Scotland (Timms et al., 2017). Data on the Laacher See Tephra (LST) are taken from microprobe glass analyses of proximal LST ash (van den Bogaard and Schmincke, 1985; unpublished data) and LST in Hämelsee (Jones et al., 2018). All data plotted are normalised to a volatile-free base. Color figure can be viewed at wileyonlinelibrary.com.)
The pollen stratigraphic location of the VA is challenging. The fallout date of the VA has been placed in the mid-Younger Dryas period, and the ash has been determined to represent a distinct marker horizon that separates the early and the later phase of the Younger Dryas period (Bakke et al. 2009; Haflidason et al. 2018; Lane et al. 2012a, 2013; Mangerud et al. 1984). The VA has been associated with pollen stratigraphy from Scotland, Sweden (Björck and Wastegård 1999) and Russia (Wastegård et al. 2000). These, however, are too remote to be used for a pollen stratigraphic comparison with the NAH record in northern Germany.
In north and north-western Germany, the second half of the (biostratigraphical) Dryas 3 period (terminology following Krüger et al. 2020) is linked to the spread of Empetrum sp. (cf. E. nigrum) in different degrees, reflecting climatic alterations towards increased oceanity (Merkt and Müller 1999; Overbeck 1975). As an increase in Empetrum-type pollen has also been seen during the Dryas 3 period in the NAH record, a search for VA cryptotephra has been carried out in corresponding depths.
Here, the targeted search revealed one clear maximum concentration of shards in sample NAH-633. The depth of the sample position corresponds pollen-stratigraphically directly to a rapid increase in Empetrum-type pollen values as observed midway through the Dryas 3 period (Fig. 2; Krüger et al., 2020).
Shard counts yielded about 395 shards/g. The shards are exclusively colourless (Fig. 3c-d). Morphologically, they appear platy to highly vesicular, and a few have tubular properties. The size spectrum ranges from 20 to 50 μm.
The tephra has a homogeneous geochemical composition. In the total-alkali-silica (TAS) classification diagram it falls into the field of rhyolite composition, with 71.3 ± 0.3 wt% SiO2, 5.2 ± 0.2 wt% Na2O and 3.6 ± 0.1 wt% K2O (Fig. 4). The full geochemical analyses are given in Table 2.
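To illustrate where the quoted mean composition sits in TAS space, the short sketch below plots total alkalis against silica. It only places the point; the field boundaries of Le Bas et al. (1986) are not reproduced here.

# Place the NAH-633 mean composition quoted above in TAS coordinates
# (SiO2 vs Na2O + K2O); the Le Bas et al. (1986) field boundaries are
# not redrawn.
import matplotlib.pyplot as plt

sio2, na2o, k2o = 71.3, 5.2, 3.6      # wt%, normalised volatile-free
alkalis = na2o + k2o                   # TAS y-axis value: 8.8 wt%

fig, ax = plt.subplots()
ax.scatter([sio2], [alkalis], color="tab:red", label="NAH-633 mean")
ax.set_xlabel("SiO$_2$ (wt%)")
ax.set_ylabel("Na$_2$O + K$_2$O (wt%)")
ax.set_xlim(35, 80)
ax.set_ylim(0, 16)
ax.set_title("TAS coordinates of NAH-633 glass")
ax.legend()
plt.show()  # the point falls in the rhyolite field of the TAS diagram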
The general characteristics of the geochemical composition (Fig. 6) are consistent with published data for the Vedde Ash.
NAH-711
The Laacher See Eruption occurred after the palynologically defined termination of the Gerzensee Oscillation and around 200 years before the Dryas 3 period became fully established (Litt et al. 2003; Litt and Stebich 1999; Merkt and Müller 1999; von Grafenstein et al. 1994). In a number of pollen diagrams from north-eastern Germany, a significant decrease of Pinus pollen values in the last third of the Allerød period is seen around the LST horizon (Jahns 2000; Theuerkauf 2003). This trend, albeit not strikingly pronounced, is observable in the pollen concentration values of the NAH record.
Here, the targeted search revealed one clear maximum concentration of glass shards at the 14.635-14.655 m core depth (NAH-711).
With respect to pollen stratigraphy, the sample depth corresponds to the last third of the Allerød period (Fig. 2). The high resolution of the pollen record allowed a clear distinction between the termination of the (palynologically defined) Gerzensee Oscillation and the timing of cryptotephra deposition (Dreibrodt et al. 2020;Krüger et al. 2020). It correlates to decreasing Pinus pollen values as well as increasing concentration values of pollen from grasses and herbaceous plants towards the end of the Allerød period.
Glass shard counts yielded about 160 shards/g. The tephra horizon consists of colourless vesicle-rich pumiceous shards with spherical vesicles, and pipe-like elongated bubbles (Fig. 3e-f). The external shape of these shards is determined by densely packed, open (burst) vesicle cavities. Single brownish shards occur. The size spectrum ranges from 20 to 75 μm.
Geochemical analysis was carried out on 31 shards (analyses with major-element weight percentage totals <96 wt% were discarded). The tephra has a phonolitic composition with 60.2 ± 0.8 wt% SiO2, 7.8-4.3 wt% Na2O, 8.4-6.6 wt% K2O and 2.6-1.5 wt% CaO. The full geochemical analyses are given in Table 3.
The geochemical fingerprinting of the NAH-711 glass shards confirms an identification with tephra from the Laacher See Eruption (Fig. 4). Published LST glass data (Riede et al. 2011; Turney et al. 2006; van den Bogaard and Schmincke 1984, 1985; Wulf et al. 2013) are in line with this interpretation. The chemistry suggests fallout from eruption phase MLST C1 (Fig. 7). This is also supported by the morphology of the NAH-711 glass shards, which are highly vesicular and colourless, as described for shards from the Plinian phases of the LST eruption. Vesicle-rich pumiceous clasts with pipe-like elongated vesicles are typical of LLST, MLST B and MLST C1 deposits; pumiceous clasts with spherical bubbles are described throughout the eruption sequence, but especially from the LLST to the base of the ULST. Glass shards from the phreatomagmatic phases of the eruption (MLST A, MLST C2 and ULST) are mostly angular and blocky with few vesicles (Jones et al. 2018; van den Bogaard and Schmincke 1985).
Targeted cryptotephra search based on palynology
It has been shown before that palynological records can be useful tools for locating cryptotephra layers (Dörfler et al. 2012). Preceding pollen analyses have the strength of narrowing down the extent of the sediment sequence that needs to be subsampled for tephra analysis.
In the present study, three cryptotephra layers out of three suspected were detected. The respective search sequences were comparatively narrow (7-16 cm) thanks to the high resolution of the pollen record, and for all three cryptotephra layers the targeted search was immediately successful. Consequently, it must be asked whether the detection of tephra in such narrow sequences of very homogeneous sediments (here, the sediments deposited during the Dryas 3 and Boreal periods) simply means that cryptotephra is present throughout the sediment as a result of different depositional processes or turbation.
Nevertheless, the distribution of shards per sample and depth for the VA and LST makes large-scale rearrangement very unlikely. In both cases a clear maximum of shards is recorded, considered to reflect the timing of the volcanic eruption and ash fallout. The presence of shards below the main peak can be explained by minor bioturbation (Anderson et al. 1984). As suggested by Davies et al. (2012), shard concentrations can decrease gradually above the concentration peak, indicating the mobilisation of shards in the catchment. In this respect, the NAH cryptotephra record mainly contains the expected minor redeposition of shards, resulting in a common tail-off pattern.
To exclude the possibility that shards are generally present in every sample, specific sequences of transitional sediments were analysed, as were sequences in which no tephra layer would be expected on the basis of pollen stratigraphy and the current state of knowledge of tephra-producing events. A sequence of 24 cm was therefore selected, spanning the transition from laminated to gradually more homogeneous sediments. According to pollen stratigraphy, this sequence was deposited during the last third of the Allerød period and the transition from the Allerød to the Dryas 3 period. In the samples from the bottom of this sequence, the LST shows a clear peak. All samples above this maximum contained at most two shards, and most contained no cryptotephra shards at all (cf. grey shaded areas in Fig. 2).
Consequently, two conclusions can be drawn. Firstly, only minor and expected rearrangements of shards are observed, in line with previous results from other approaches (Dreibrodt et al. 2020; Krüger et al. 2020); large-scale rearrangements can therefore be excluded.
Secondly, it becomes apparent that pollen stratigraphy is a very valuable instrument as a basis for a targeted search for cryptotephra horizons. This, of course, requires a high-resolution palynological record as well as a profound knowledge of the pollen stratigraphic position of the cryptotephra layers in comparable regional diagrams.
Tephrochronological discussion
The detected cryptotephra layers could successfully be correlated with known volcanic eruptions of the Lateglacial and Early Holocene by comparing their geochemical compositions. NAH-466 correlates to a SA/G10ka series tephra, NAH-633 to the VA and NAH-711 to the LST. Hence, we present the first geochemically confirmed find of LST in Schleswig-Holstein, located outside the known dispersal envelope of visible LST (Fig. 8) (Riede et al. 2011; van den Bogaard and Schmincke 1984).
These results provide three independent age estimates for the age-depth model of the NAH record. As only a segment of the sediment sequence is laminated, we refrain from estimating our own ages for the eruptions. It is therefore crucial to discuss which available chronological tie-points should be used for the individual events (Table 4).
For the LST, reference is made to Bronk Ramsey et al. (2015), who compiled and improved age estimates for Late Quaternary European tephra horizons. Their age estimate for the LST is based on tree-ring data from Friedrich et al. (1999) as well as age estimates from Holzmaar, Soppensee and Rotsee. The resulting age of 12 937 ± 23 cal BP (μ ± σ; IntCal13) is in good agreement with the estimate by Brauer et al. (1999) and the dating by van den Bogaard (1995).
The record from Lake Holzmaar (HZM; Zolitschka 1998; Zolitschka et al. 1995) is laminated up to recent times, providing the opportunity to correlate further sequences with ease. One of these is the Meerfelder Maar (MFM) sequence, which is correlated to the HZM record using the Ulmener Maar tephra as a chronological anchor (Brauer et al. 1999). The MFM record in turn provides varves throughout the Younger Dryas and the mid-Allerød period, resulting in a very accurate estimate of 12 880 ± 40 varve years BP for the LST.
Based on 118 AMS 14C dates from a sequence at Lake Kråkenes, Lohne et al. (2013, IntCal09; 2014, IntCal13) provided the most accurate dating of the VA to date. Their age estimate of 12 066 ± 42 cal BP is in line with further estimates from lake sediments in Europe (Birks et al., 1996; Wastegård et al., 1998; Matthews et al., 2011) but has a considerably smaller uncertainty. The combined age model by Bronk Ramsey et al. (2015) for the VA is based on data from Lake Kråkenes, Abernethy, Soppensee, Rotsee and Bled. The resulting estimate of 12 023 ± 43 cal BP (μ ± σ; IntCal13) is in good agreement with the GICC05 date by Rasmussen et al. (2006).
In their analysis of the synchronicity of high-precision 14C ages and the Greenland Ice Core Chronology, Lohne et al. (2013) further provided an age estimate for the SA of 10 210 ± 35 cal BP (μ ± σ; IntCal09). Based on the review by Óladóttir et al. (2020), it cannot be clarified with certainty to which of the G10ka series tephras the shards of the NAH record correlate. Nevertheless, it has been shown that the pollen stratigraphic position of the lower shard peak in the NAH record correlates to the position of the SA layer in the pollen stratigraphy of Lake Poggensee (Zanon et al. 2019). At nearby Lake Poggensee, as well as at Lake Woserin (Zanon et al. 2019; I. Feeser, pers. com.), only one ash layer has been identified in a laminated sequence, and both have been assigned to the SA (Zanon et al. 2019). As their individual dates fall well within the age estimate given by Lohne et al. (2013), this age has been utilised for the NAH age-depth model.
Resulting tephrostratigraphical framework, regional implications and future work

The identification of the volcanic eruptions allows for a direct correlation between the NAH sediment sequence and European key sites for palaeo-environmental research such as Meerfelder Maar (Brauer et al. 1999; Litt and Stebich 1999), Lake Hämelsee (Jones et al. 2018; Müller 1999), Endinger Bruch (De Klerk 2002; Lane et al. 2012b), Lake Kråkenes (Lohne et al. 2013; Mangerud et al. 1984), Lake Tiefer See (Wulf et al. 2016) and Lake Soppensee (Lane et al. 2011; Lotter 2001). This highlights the relevance of the NAH location in correlating important key sites in northern and central Europe.
In combination with finds of LST at Lake Hämelsee (Jones et al. 2018) and at the Körslättamossen fen (Larsson and Wastegård 2018), the results of the present study demonstrate that the dispersal envelope of the Laacher See Eruption can be shifted further to the north-west.
In this respect it might even be possible to detect non-visible ash beds of the LST in Denmark (apart from Bornholm, where it has already been identified; Turney et al. 2006). This would imply that biozonal-type localities such as Bølling (Krüger and Damrath 2019; Iversen 1942) or especially Allerød (Hartz and Milthers 1901) could finally be correlated to modern palaeoenvironmental investigations by event stratigraphy (Fig. 8).
The targeted search for cryptotephra comprised 18% of the total analysed NAH Lateglacial to Early Holocene sequence. Considering the extensive sections not yet searched for cryptotephra, the sediment cores may well contain further non-visible ash beds. The sequence already analysed palynologically covers, both spatially and temporally, the eruptions and (predominant) fallout zones of, e.g. the Borrobol-type tephras, the Askja-S tephra and the Hässeldalen tephra (Davies et al. 2003; Pyne-O'Donnell et al. 2008; Turney et al. 1997; Wastegård et al. 2018; Wulf et al. 2016). Hence, we emphasise the considerable potential of future investigations.
Conclusion
Three cryptotephra layers have been discovered and geochemically confirmed as G10ka series tephra (SA), VA and LST in sediments from the NAH. Consequently, we present the first finding of LST in Schleswig-Holstein, located outside the ash plume reconstructed on the basis of visible ash layers. Combined with published LST findings from Lake Hämelsee (Jones et al. 2018) and the Körslättamossen fen (Larsson and Wastegård 2018), this reveals that the dispersal fan reached further to the north-west than previously assumed (Litt et al. 2003; Riede et al. 2011; Schmincke et al. 1999; Theuerkauf 2003; van den Bogaard and Schmincke 1985). Hence, the detection of LST in Jutland or on the Danish Islands (adding to the finds on Bornholm) comes within reach, carrying the potential to correlate important Lateglacial-type localities with recent palaeo-environmental investigations.
Furthermore, these results add three independent ages to the age-depth model for the NAH sequence (Dreibrodt et al. 2020), thereby emphasising the considerable potential for further investigations, both at the site itself and in further tephrochronological studies in northern Europe.
|
v3-fos-license
|
2023-03-26T15:19:43.240Z
|
2023-03-24T00:00:00.000
|
257755493
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2076-2615/13/7/1149/pdf?version=1679791595",
"pdf_hash": "0a29bb713184f6202c7053abdf662074645993a0",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41908",
"s2fieldsofstudy": [
"Biology",
"Agricultural And Food Sciences"
],
"sha1": "92c3651f7a57e509af61955c7c9ac99ac580d065",
"year": 2023
}
|
pes2o/s2orc
|
Identification of Genomic Signatures in Bullmastiff Dogs Using Composite Selection Signals Analysis of 23 Purebred Clades
Simple Summary: Purebred dogs form distinct genetic subpopulations, and more than 400 breeds are recognized by breed clubs worldwide. Their gene pool is limited by the number of dogs used to develop the breed and by how the dogs have been bred. The total makeup of DNA in a breed determines the characteristics that we identify as typical of the breed. However, the limited genetic variation within a breed can also contribute to health issues arising from inherited faulty genes or from complex interactions of many genes. Many studies have been completed in recent years, so genetic information is now available for thousands of dogs. Together, this makes dog populations informative subjects for analysis. In this study, we applied a relatively new method, which combines three ways of measuring variation in the DNA of groups of animals (Fst, ∆DAF, and XP-EHH) into an index, to the Bullmastiff breed. The method allows us to compare the genomic differences and similarities between groups of individuals from many breeds. We show that there are distinct regions of DNA that are specific to the modern Bullmastiff breed. By focusing on these DNA regions, we can understand some of the characteristics that define the breed and use the information to help us understand how some diseases may be more common in this breed.

Abstract: Dog breeds represent canine sub-populations with distinctive phenotypic features and limited genetic diversity. We have established a resource to study breed-specific genetic diversity. Utilising genetic resources within our laboratory biobank, public domain genotype data and the phylogenetic framework of 23 breed clades, the primary objective of this study was to identify genomic regions that differentiate the Bullmastiff breed. Through application of a composite index analysis (CSS), genomic signatures were identified in Bullmastiffs when compared to the formative breeds, Mastiffs and Bulldogs, and to 22 other breed groups. Significant regions were identified on 15 chromosomes, with the most differentiated regions found on CFA1, CFA9, and CFA18. These regions may reflect genetic drift following establishment of the breed or the effects of selective breeding during development of the modern Bullmastiff. This was supported by analysis of genes from the identified genomic regions, including 458 genes from the multi-clade analysis, which revealed enriched pathways that may be related to characteristic traits and the distinct morphology of the breed. The study demonstrates the utility of the CSS method in breed-specific genome analysis and advances our understanding of genetic diversity in Bullmastiff dogs.
Introduction
The domestic dog (Canis familiaris) is considered the first domesticated animal arising from interactions of their wild ancestors and humans. Over many centuries, dogs have been selected for an extensive range of phenotypes, including a wide range of body size, specific morphological traits (leg length, hair length, curl, texture, thickness, and tail and skull shape), coat color, and behavioural characteristics (herding, guarding, agility, speed, and companionship) [1,2]. There are now more than 400 breeds recognised by the different breed clubs, such as the Australian National Kennel Council (ANKC), American Kennel Club (AKC), United Kennel Club (UKC), The Kennel Club (KC) in the United Kingdom, or Fédération Cynologique Internationale (FCI). These clubs have established registration requirements and have developed breed standard guidelines [3,4]. Dogs from each breed are relatively genetically homogeneous, characterized by a high frequency of shared alleles and haplotypes with long range linkage disequilibrium. Because of their common ancestry, there is also a significant degree of shared haplotypes between breeds [5].
One group of breeds that has attracted some attention is the Mastiff-like dogs. Within this group are several breeds that share certain morphological features, including a characteristic skull shape described as brachycephalic, but they also exhibit a range of body sizes and susceptibilities to some diseases. One example of this group is the Bullmastiff. This breed was derived by crossing Bulldogs (40%) with Mastiffs (60%) in Britain in the mid-19th century. The breed was popular with gamekeepers and was developed as a working dog to apprehend or ward off poachers [6]. A breeding program introduced by The Kennel Club in 1924 established the founders of the modern Bullmastiff purebred dogs. As with other breeds, the founder effect and subsequent selective breeding practices created distinctive genomic signatures. These genomic regions influence phenotype and may contribute to overall breed health. The Bullmastiff is characterised by its heavy musculature and a large square head supported by a muscular neck. It is considered a member of the working dog group in America and Britain, but in Australia, although originally imported from Europe, it is classified as a utility dog.
We have established a Bullmastiff resource to study genetic diversity of the breed in Australia [6,7]. Utilising the genetic resources of the Bullmastiff biobank, public domain genotype data, and the phylogenetic framework of the 23 clades described by Parker et al. [5], the primary objective of this study was to identify and analyse distinctive genomic signatures in Australian Bullmastiffs. Detection of these signatures may be carried out by comparing genome scans of different breeds; the advantage of this approach is that it does not require detailed phenotyping. A number of statistical methods can be employed to detect such variation, including a multi-statistical index method referred to as composite selection signals (CSS) analysis, which was originally developed for application in farm animals and which we recently showed to be a sensitive test for use in dogs [8].
Data Preparation
An in-house CanineHD BeadChip (Illumina Inc., San Diego, CA, USA) genotyping dataset was merged with datasets downloaded from previously published studies of dogs from across the world; the resulting pooled dataset is summarised in Table 1. The in-house data (n = 747) were from four breeds: Bullmastiff, German Shepherd Dog (GSD), Kelpie, and Border Collie [6,7,9-13]. Public domain data were available from the GEO database and previous studies, contributing an additional 7317 samples representing a broad range of breeds (n = 250) [5,14-19]. Each of the datasets was generated independently, and they needed to be unified to address potential incompatibilities between genotypes arising from different sources, such as reference genome assemblies, labelling of the X, Y, and MT chromosomes, and the type of SNP identifier used (GenBank, Broad Institute, or other database accession code, i.e. Chr_Pos or Chr.Pos). The standard maps of CanineHD BeadChip SNP markers (170K CanFam2, 170K CanFam3, and 220K CanFam3) were sourced from Illumina and applied to ensure that the different datasets were uniform. SNPs that differed between genome assemblies (CanFam2 vs. CanFam3) were excluded. The genotypes assembled from the different studies also employed different versions of the CanineHD BeadChip, with different strand orientations (TOP vs. BOTTOM) [25]; this was corrected using the --flip function in PLINK 1.9 [26] prior to merging.
Five studies had only raw SNP outputs available (idat file format) generated by the Illumina CanineHD BeadChip (Illumina Inc., San Diego, CA, USA); these were downloaded from seven GEO database files. The raw data were processed with GenomeStudio 2.0 software (Illumina Inc., San Diego, CA, USA), and each dataset was fitted to the standard canine cluster file retrieved from Illumina, giving chromosome, position, and genotype for each SNP. The map/ped files containing the SNP information used in PLINK were obtained from the PLINK Input Report Plug-in v2.1.4 in GenomeStudio 2.0.
The combined genotype dataset was assigned chromosomes and genomic positions according to the CanFam 3.1 assembly. Further pruning was conducted in PLINK 1.9 to exclude samples with over 10% missing data and SNPs with a minor allele frequency below 0.01, giving a final dataset of 151,901 SNPs across 38 autosomes in 8005 dogs representing 250 breeds. The high-quality SNP dataset was subset using PLINK v1.9 [26] with the parameters --geno 0.1, --maf 0.01, --not-chr 0 39 40 41 42, and --keep with the sample ID list. The resulting data (Supplementary Table S1) were then used for further analysis.
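To make these thresholds concrete, the following is a minimal sketch in Python of equivalent sample- and SNP-level filtering on a genotype matrix (our own illustration of the quality-control logic only; in practice the filtering was done with the PLINK options quoted above):

import numpy as np

def qc_filter(geno, max_sample_missing=0.10, min_maf=0.01):
    """Filter a genotype matrix (samples x SNPs; 0/1/2 allele copies, nan = missing).

    Mirrors the pruning described in the text: drop samples with >10%
    missing calls, then drop SNPs with minor allele frequency < 0.01.
    """
    sample_miss = np.isnan(geno).mean(axis=1)        # per-sample missingness
    geno = geno[sample_miss <= max_sample_missing]
    freq = np.nanmean(geno, axis=0) / 2.0            # counted-allele frequency
    maf = np.minimum(freq, 1.0 - freq)               # minor allele frequency
    keep_snps = maf >= min_maf
    return geno[:, keep_snps], keep_snps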
Phasing and Haplotype
The SNP array yields unphased genotypes. To generate the haplotype phase from the unphased genotype data, the HMM-based sampling approach implemented in the software Beagle 5.0 [27,28] (https://faculty.washington.edu/browning/beagle/beagle.html, accessed on 30 October 2020) was employed.
Prior to phasing, each panel was split by chromosome and converted to VCF format using PLINK v1.9. The VCF files were processed using the default settings (burnin = 3, iterations = 12, phased-states = 280, sliding windows = 40, overlap = 2, and no err parameter). The haplotype information obtained was applied in XP-EHH calculations.
Composite Selection Signals (CSS) Analysis
The composite selection signals (CSS) approach was used in this analysis. CSS combines different test statistics to generate rank-based empirical values. In this CSS analysis, three constituent statistical tests (Fst, ∆DAF, and XP-EHH) were used; these tests are described in the following sections. The CSS analysis comprises several steps. First, the SNPs are ranked by the estimates from each individual test, and the ranks are scaled to fractional ranks (between 0 and 1). Secondly, the fractional ranks are transformed into z-scores with the inverse normal cumulative distribution function (CDF); the z-scores are assumed to follow a single normal distribution. Then, the mean of the z-scores from the individual tests is obtained for each SNP. This mean z-score follows a normal distribution with mean 0 and variance 1/m, where m is the total number of test statistics, and is used directly to generate p values. The CSS value is defined as the logarithmically (−log10) transformed p value. To capture significant genomic regions and account for linked SNPs, the raw CSS scores were smoothed using the mean of the CSS scores within a 0.5 Mb window on either side of each SNP. The top 0.005 fraction (0.5%) of smoothed CSS scores was considered significant. The average CSS value of a region is the mean of the smoothed CSS values of all significant SNPs in the region [8].
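As an illustration of these steps, here is a minimal sketch in Python for a single chromosome, assuming the three constituent statistics have already been computed per SNP. Function and variable names are our own, and the one-sided upper-tail p value is one plausible reading of the procedure, not code from the original study.

import numpy as np
from scipy.stats import norm, rankdata

def css_scores(stats, positions, window=500_000):
    """Combine per-SNP selection statistics into composite selection signals.

    stats: list of 1-D arrays (e.g. [fst, ddaf, xpehh]), one value per SNP,
    in the same SNP order as `positions` (NumPy array of base-pair positions,
    one chromosome). Returns raw and window-smoothed CSS values (-log10 p).
    """
    n = len(positions)
    m = len(stats)
    z = np.zeros((m, n))
    for i, s in enumerate(stats):
        frac = rankdata(s) / (n + 1)   # fractional ranks strictly inside (0, 1)
        z[i] = norm.ppf(frac)          # inverse normal CDF -> z-scores
    zbar = z.mean(axis=0)              # mean z-score per SNP
    p = norm.sf(zbar * np.sqrt(m))     # mean of m iid N(0,1) values ~ N(0, 1/m)
    css = -np.log10(p)
    # smooth each SNP with the mean CSS of all SNPs within +/-0.5 Mb
    smoothed = np.array([css[np.abs(positions - pos) <= window].mean()
                         for pos in positions])
    return css, smoothed

On real data the smoothing would be run per chromosome, and the 0.5% significance cutoff, e.g. np.quantile(smoothed, 0.995), taken over the genome-wide set of smoothed scores.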
Cross-Population Extended Haplotype Homozygosity (XP-EHH)
A haplotype-based method was included as a component of the CSS analysis: cross-population extended haplotype homozygosity (XP-EHH) scores were computed using the SELSCAN package [29]. The required files were generated with the option --trunc-ok set to true, so that loci whose EHH decay reached the end of the chromosome were retained rather than discarded as under the default setting of false. The XP-EHH analyses were performed as pairwise comparisons of Bullmastiff against each individual breed or multiple-breed cluster as the reference population.
Fixation Index (F st ) Analysis
The fixation index (Fst) was first introduced 70 years ago [30] and subsequently developed as a measure of genetic differentiation. The most commonly used Fst statistic, presented by Weir and Cockerham [31], was applied to estimate the deviation of allele frequencies between populations. A high Fst value (on the 0-1 scale) at a specific locus indicates a high level of reproductive isolation (fixation) or strong positive selection in one of the populations. The statistic has been shown to be suitable for detecting selection signatures in SNP array data. Fst scores for each SNP were computed in pairwise comparisons between the Bullmastiff target group and each reference group.
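The Weir and Cockerham estimator involves sample-size and heterozygosity corrections that are lengthy to spell out; as a simpler stand-in that shows the shape of the per-SNP computation, the sketch below uses Hudson's Fst estimator (as formulated by Bhatia et al., 2013) instead. It is an illustration only, not the estimator applied in this study.

import numpy as np

def hudson_fst(p1, p2, n1, n2):
    """Per-SNP Fst between two populations (Hudson estimator).

    p1, p2: arrays of allele frequencies in populations 1 and 2.
    n1, n2: haploid sample sizes (numbers of chromosomes sampled).
    Values are undefined (nan/inf) where both populations are fixed.
    """
    num = (p1 - p2) ** 2 - p1 * (1 - p1) / (n1 - 1) - p2 * (1 - p2) / (n2 - 1)
    den = p1 * (1 - p2) + p2 * (1 - p1)
    return num / den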
Derived Allele Frequency (DAF)
Multiple studies have found evidence that most modern breeds primarily share European ancestry [32,33]. Hence, 59 European Gray Wolves from four population clusters, genotyped on the same CanineHD BeadChip (Illumina, Inc., San Diego, CA, USA), were used as the ancestral reference in the assessment of derived allele frequency [15,34]. The ancestral genotype data were extracted from the merged dataset with the PLINK --keep function, and the common ancestral variants were defined as monomorphic SNPs. These alleles were validated against representatives of the ancient breed group of Alaskan Huskies (n = 10) [35].
The derived allele frequency difference was computed as ∆DAF = DAF_target − DAF_reference, with the major alleles (common variants) in wolves assigned as the ancestral alleles, following the approach used previously [8,15,36], and validated against data from 10 Alaskan Huskies as in a prior study [35]. The distribution of the derived allele frequency differences was estimated, and the ∆DAF values were transformed to z-scores (mean 0, variance 1).
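As a small sketch of this computation (our own illustration with hypothetical array names), assuming the derived-allele frequencies have already been polarised against the wolf major allele:

import numpy as np

def delta_daf_z(daf_target, daf_ref):
    """Per-SNP derived allele frequency difference, standardised to z-scores.

    daf_target, daf_ref: arrays of derived-allele frequencies in the target
    (Bullmastiff) and reference populations, where the derived allele is the
    one opposite to the major (ancestral) allele observed in wolves.
    """
    ddaf = daf_target - daf_ref
    # transform to z-scores (mean 0, variance 1) as described in the text
    return (ddaf - ddaf.mean()) / ddaf.std()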
In Silico Functional Analysis of Candidate Genes
Annotated genes within significant genomic regions were identified and used for analyses. These genes were collated and annotated using RefGene (Ensembl) downloaded from the UCSC genome browser for CanFam 3.1 (http://genome.ucsc.edu/, accessed on 9 April 2021). The gene list was then used for functional enrichment analysis. The Database for Annotation, Visualisation and Integrated Discovery (DAVID) v6.8 (http://david.abcc.ncifcrf.gov/, accessed on 13 September 2021) [37,38] was used for functional classification, gene ontology and pathway analysis, and for understanding high-level functions and contributions to biological systems from a large-scale molecular dataset, via the Kyoto Encyclopedia of Genes and Genomes (KEGG, https://www.kegg.jp/, accessed on 13 September 2021) and Gene Ontology (GO, http://geneontology.org/, accessed on 13 September 2021) knowledge bases. A Benjamini-adjusted p-value ≤ 0.05 and FDR ≤ 0.05 were used as thresholds for declaring statistical significance.
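The Benjamini adjustment reported by DAVID corresponds, to our understanding, to the Benjamini-Hochberg step-up correction for multiple testing. As a minimal sketch of how such adjusted p-values can be computed (our own illustration, not DAVID's code):

import numpy as np

def benjamini_hochberg(pvals):
    """Return Benjamini-Hochberg adjusted p-values (q-values)."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)                        # indices of ascending p-values
    scaled = p[order] * n / np.arange(1, n + 1)  # p_(i) * n / i
    # running minimum from the largest rank down enforces monotonicity
    q_sorted = np.minimum.accumulate(scaled[::-1])[::-1]
    q = np.empty(n)
    q[order] = np.clip(q_sorted, 0.0, 1.0)
    return q

A term is then declared significant when its adjusted value is at most 0.05, matching the threshold used above.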
Results
A total of 26 distinctive peaks were found in the two breed-pair comparisons (Bullmastiff vs. Mastiff and Bullmastiff vs. Bulldog) scanned using the 1 Mb sliding-window CSS analysis. These were found on chromosomes CFA1, CFA3, CFA7, CFA8, CFA9, CFA11, CFA18, CFA20, CFA23, CFA25, CFA26, CFA32, CFA35, and CFA37, with CSS values ranging from 2.10 to 2.81 (Table 2). Of these, 14 regions were detected in the Bullmastiff vs. Mastiff analysis and 12 regions in the Bullmastiff vs. Bulldog comparison. The highest signal (CSS value 2.81) was identified for a region on chromosome 18 (CFA18:29.32-31.85 Mb) in the pairwise comparison of Bullmastiff and Mastiff. This region spanned 113 SNPs and 46 genes (five snoRNAs, 21 lncRNAs, and 20 protein coding). The region with the highest peak (CSS value 2.57) in the Bullmastiff vs. Bulldog analysis was located at CFA8:56.55-60.35 Mb and included 231 SNPs and 34 genes (four snRNAs, one pseudogene, 13 lncRNAs, and 16 protein coding). Neither of these top regions was found in both comparisons, suggesting that they were derived from the original founder animals. Four regions were found in both comparisons, on CFA7, CFA9, CFA18, and CFA20; these regions are likely to have been selected during breeding of the modern Australian Bullmastiff. The distribution of CSS values for both comparisons is shown in Figure 1.

Further analysis was conducted on the genes within the identified regions. A total of 2681 significant SNPs were identified in the comparison of Bullmastiff with the two formative breeds (Mastiff and Bulldog). These SNPs mapped to 487 genes in the pairwise comparison of Bullmastiff and Mastiff and 415 genes in the Bullmastiff/Bulldog comparison (Supplementary Table S2). Gene-set and pathway analysis using the DAVID software tools classified the genes into a total of 120 and 163 GO terms (biological process, molecular function, and cellular component) and ten and six KEGG pathways, respectively (Supplementary Table S2). The genes identified from the two datasets were also analysed using ClueGo software (ver. 2.5.0) for enrichment and network analysis. Genes were classified into 42 enrichment groups, shown in Supplementary Table S3, and the resulting network is shown in Supplementary Figure S1. The analyses highlight metabolic and growth-related gene pathways and networks captured within the significant regions. One notable example is the muscle hypertrophy classification, a pathway that may underlie the highly developed musculature of Bullmastiffs.

A genome-wide genotype comparison of Bullmastiffs with dogs from each of the clades defined by Parker et al. [5] revealed consistent distinguishing signatures for the Bullmastiff breed. First, an analysis of 141,302 SNPs from Bullmastiffs and the breeds defined in the previous study as the European Mastiff clade (minus the Bullmastiff data) was used for the CSS calculations. A total of 689 significant SNPs were located within nine significant regions (Table 3). The genes within these regions were retrieved for in silico functional analysis (Supplementary Table S4). When both comparisons were collated, there were 490 genes (29 snRNAs, eight pseudogenes, 358 protein coding, seven miRNAs, and 88 lncRNAs) within the significant regions and those immediately flanking. A gene-set and pathway analysis of these genes using DAVID captured a total of 144 GO terms (biological process, molecular function, and cellular component) and four KEGG pathways (Supplementary Table S5).
Further analysis of the genes was conducted using ClueGo software (ver. 2.5.0) to build an integrated network based on enriched functional classifications; the network is shown in Supplementary Figure S2. The genes were classified into 27 groups, with the most significant terms of each group shown in Supplementary Table S6. Interestingly, pathways related to inflammation and cancer are featured; Bullmastiffs are known to have an increased prevalence of cancer and are susceptible to inflammatory conditions.

Dog breeds have previously been grouped into clades based on genomic analysis [5,42]. Genotypes from Bullmastiffs were compared to representative samples of breeds from each of these clades to investigate the most prominent genomic signatures. Pairwise genome-wide scans comparing Bullmastiffs with dogs from the 22 clades represented in the combined genotype dataset (described in Materials and Methods) were analysed. The details of each pairwise analysis are shown as Manhattan plots in the Supplementary Files (Figures S4-S25). To simplify these complex analyses, a summary of the significant genomic regions is presented as a phenogram in Figure 2. Considering all comparisons, distinct peaks identifying consistently significant regions in Bullmastiffs were found on chromosomes CFA1, CFA3, CFA5, CFA7, CFA8, CFA9, CFA13, CFA18, CFA20, CFA23, CFA25, CFA26, CFA30, CFA32, and CFA37. Particularly noteworthy were regions on CFA9 and CFA18, where the Bullmastiff genotypes were distinguished from the majority of other breeds.
Genes (n = 458) found within these regions and their associated annotations are provided in the Supplementary Data (Table S6). Gene-set and pathway analyses identified a total of 275 GO terms (biological process, molecular function, and cellular component) and 16 KEGG pathways for genes within the identified regions and those immediately flanking, from the pairwise comparisons across all clades (Supplementary Table S6). The genes were subjected to enrichment analysis using ClueGo software (ver. 2.5.0), and an enrichment network was constructed; the network is shown in Supplementary Figure S3. The genes were classified into 41 groups, with the significant annotations for each group listed in Supplementary Table S7. The largest groups of genes were classified as being involved in metabolic processes and immune cell function, especially cell migration pathways involving chemokines. One other interesting group contained several micro RNAs (miRs) that have been implicated in cancer.
Discussion
Modern dog breeds were created by humans through crossing and selection according to breeding schemes and standards. Guidelines have been developed around specific traits for each breed, including desired behavioural traits, morphological characteristics, or the ability to learn and perform different tasks [42,43]. As a result of these origins and selection processes, modern purebred dogs have limited genetic heterogeneity and distinct genomic signatures that underlie their characteristic traits. In this study, the CSS test was used to identify genomic signatures in multiple-breed comparisons with a focus on the Bullmastiff breed. CSS combines three commonly used test statistics into complementary signals so that regions harbouring common signals can be identified with high sensitivity [8,44-47].
Bullmastiffs were originally created by crossing dogs from the Mastiff and Bulldog breeds to produce a breed intermediate in size and temperament. The Bulldog can be identified by its large head and wedge-shaped body, as well as its short, folded ears. Other morphological features of Bulldogs include a stocky build with deep furrows of the skin and face, a short or corkscrew tail, short thick legs with equally broad paws, and a moderate temperament. By comparison, Mastiffs have a more placid temperament and are among the largest and heaviest of dog breeds, weighing up to 100 kg [48]. The strong founder effect and relatively recent origins of the breed mean that Bullmastiff dogs share relatively long haplotypes with Mastiffs and Bulldogs, and as expected, they are classified within the same clade [5,6]. However, selective breeding and genetic drift since formation of the breed have left distinct signatures in the Bullmastiff genome. Indeed, through the CSS analyses, notable genomic regions were identified in Bullmastiffs when compared to Mastiffs and Bulldogs. Although the analysis is designed to detect distinguishing genomic regions rather than specific functional variants, these regions are likely to contain some variants that have contributed to breed development or characteristic traits. Examples of genes of interest within significant regions include COL5A1 (collagen alpha-1(V) chain) and ADAMTSL2 (ADAMTS-like protein 2) on CFA9 (48.92-51.87 Mb). Collagen is a key structural protein of connective tissue, including skin, and supports the connective tissue matrix to maintain shape, strength, and the ability to resist deformation [49]. Genetic variants in the COL5A1 gene are associated with loose skin and, in extreme cases, with a clinical presentation of Ehlers-Danlos syndrome (EDS) in dogs [49]. ADAMTSL2 is a secreted extracellular matrix protein; variants in this gene have been associated with the hereditary disorder Musladin-Lueke Syndrome (MLS) in Beagles, characterised by muscle and skin fibrosis leading to stiff skin and joint contractures [50]. In humans and mice, elevated levels of ADAMTSL2 are seen in cardiomyopathies [51], and Bullmastiffs are known to have an elevated risk of cardiomyopathy.
The genome-wide scan for the pairwise comparison of Bullmastiff and Mastiff identified a region on CFA1 (62.56-64.02 Mb) that includes the NKAIN2 (sodium/potassium-transporting ATPase subunit beta-1-interacting protein 2) gene. This gene plays a role in hair hypopigmentation, craniofacial and limb formation, eye development, and macrocephaly [52]. Similarly, a prominent region on CFA11 (13.25-14.57 Mb) contains the PRDM8 (PR domain zinc finger protein 8) gene, which has been linked to coat length in dogs [43,53]. This region also contains the ZNF608 (zinc finger protein 608) gene, which is associated with body mass in dogs [24]. A significant region on CFA25 (27.99-29.17 Mb) was adjacent to the MSRA (peptide-methionine (S)-S-oxide reductase) gene. This gene is involved in methionine metabolism and the repair of oxidative damage to proteins. It has been associated with adiposity and fat distribution in humans [54] and, through whole-genome selection scans and GWAS, with fat deposition and hair growth in other species [55]. Bullmastiffs are generally lean and muscular, but they are prone to obesity if not carefully managed.
A region identified on CFA32 (3.99-5.7 Mb) overlapped growth-related genes that may be associated with morphological traits. BMP3 (bone morphogenetic protein 3) affects bone growth and development and is associated with skull morphology. The FGF5 (Fibroblast growth factor 5) gene and ANTXR2 (anthrax toxin receptor-2) are associated with coat length in dogs. The PRKG2 (cGMP-dependent protein kinase) and RASGEF1B (RasGEF domain family member 1B) genes have been identified as positional candidate genes for growth restriction, aggression, self-injurious behaviours, and mental retardation in affected German Shepherd dogs [43].
Not surprisingly, the results of the pairwise comparison of Bullmastiff with a combined reference group consisting of all other breeds in the European Mastiff clade showed substantial overlap with the comparisons to the Bulldog and Mastiff breeds. Some regions were consistently present in Bullmastiffs, but there were no outstanding gene candidates that could be associated with breed-specific morphological variation, which may require comparison of extreme phenotypes [24]. Instead, similar sets of GO terms and KEGG pathways were identified in this dataset, suggesting that the formation of the Bullmastiff from the Bulldog x Mastiff cross affected the same body systems that had been subject to selection at the onset of original breed formation. However, some functional categories from the gene-set analysis were of interest, relating to the adipocytokine signaling pathway (cfa04920) and the PPAR signaling pathway (cfa03320). Pathway-based annotation indicates that the adipocytokine and PPAR signaling pathways are significantly correlated with BMI and fat mass, suggesting that these pathways may play a role in the weight and body mass characteristics of the breed [56-58].
The region identified on CFA32, mentioned previously and containing BMP3 and other genes, was highlighted in many of the pairwise comparisons of Bullmastiffs with dogs from the 22 non-Mastiff clades. A number of enriched gene pathways and GO categories were also common to many comparisons, e.g. those associated with neurological, nervous and immune system development, metabolism, and organ growth. This points to early developmental pathways that may have broad phenotypic effects. As recently highlighted, variation in many of these fundamental pathways may have arisen prior to modern breed formation, under selection pressures on ancestral populations [59]. A good example is the neural crest pathways. A role for the neural crest in dog domestication has gained support from researchers over recent years [52]. Neural crest cells (NCCs) are multipotent, transient, embryonic stem cells initially located at the crest (or dorsal border) of the neural tube. Regulation of neural crest development requires a network of genes expressed early in embryogenesis that coordinate a multi-stage process, resulting in migration of NCCs to various sites in the developing embryo, where they differentiate into a diverse array of cell types [52]. Several of the pathways associated with genomic regions identified in this study are involved in this network, supporting the view that variants in key genes that activate the neural crest and define the migration gates for NCCs contribute to diversity in domestic dog breeds.
Conclusions
This study demonstrates the utility of applying the composite index method, CSS, to identify genomic signatures in purebred dogs. The results define regions containing variants that are found at higher frequency in the Bullmastiff breed when compared to other dog breeds. Analysis of annotated genes and related pathways found within these regions contributes to understanding diversity in the breed and may underpin further studies of breed health and disease.

Supplementary Materials: Figure S1. KEGG pathway and GO term analyses of genes in the selected regions in the Bullmastiff when compared with the Bulldog and Mastiff populations. Figure S2. KEGG pathway and GO term analysis of genes for the Bullmastiff compared to other breeds within the European Mastiff clade. Figure S3. Overview of KEGG pathway and GO term analysis of candidate genes found from the Bullmastiff pairwise comparisons with multiple clades. Figures S4-S25. Genomic signals detected in Bullmastiff dogs using the CSS method, compared to each reference group in turn: Asian Spitz, Asian Toy, Tibetan Terrier, Nordic Spitz, Schnauzer, Small Spitz, Toy Spitz, Hungarian, Poodle, American Toy, American Terrier, Pinscher, Terrier, New World, Mediterranean, Scent Hound, Spaniel, Retriever, Pointer Setter, UK Rural, Alpine, and European Mastiff groups, respectively.
|
v3-fos-license
|
2020-05-07T09:02:46.435Z
|
2020-01-01T00:00:00.000
|
219027308
|
{
"extfieldsofstudy": [
"Environmental Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2020/25/e3sconf_caes2020_02033.pdf",
"pdf_hash": "183231269c726affe669a2665681314e118a9d9b",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41909",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural and Food Sciences"
],
"sha1": "595bdf6e8d58bddf88c736b376c35887a073fbbf",
"year": 2020
}
|
pes2o/s2orc
|
Research progress in activation of phosphorus containing substances and remediation of heavy metal pollution in soil
The phenomenon of soil phosphorus deficiency in China is serious and limits agricultural production. Low molecular weight organic acids and phosphorus-solubilizing microorganisms are widely distributed in soil and can be used as activators to increase the available phosphorus content of the soil. With the rapid development of industry and agriculture in China, heavy metal pollution in the environment is becoming more and more serious. China is rich in phosphate rock resources, but the ore grade is low and the utilization efficiency is not high. Using phosphate rock to treat heavy metal pollution has therefore attracted the attention of environmental scholars. This paper reviews the main composition and applications of phosphate rock in China, the activation of phosphate rock powder, and the remediation effects and mechanisms of phosphate rock powder and activated phosphate rock powder on heavy metals in soil, providing a theoretical basis for the scientific utilization of low-grade phosphate rock and the treatment of heavy metal pollution.
Introduction
Phosphorus plays an important role in the growth and development of animals and plants [1]. Phosphate rock, which is widely distributed, is the only commercially significant natural source of phosphorus. According to statistics, global phosphorus reserves had reached 65 billion tons by 2011, of which China's phosphate rock reserves account for more than 4 billion tons. Phosphate rock is the general term for phosphate minerals that can be exploited economically, and it is an important non-renewable resource. There are many kinds of phosphate minerals in nature, but few of them are of exploitable value [2]. It has been reported that about 85% of the world's mined phosphate rock is used for phosphate fertilizer production [3]. With the rapid increase in population and the rapid development of agriculture, the demand for phosphate rock is increasing. Therefore, the exploitation and utilization of low-grade phosphate rock resources are receiving more and more attention, and methods for utilizing phosphate rock are also improving, including physical [4], chemical [5] and biological [6] methods. Among them, the most effective is the chemical method, mainly the application of organic acids as activators of phosphate rock powder.
At present, with the rapid development of industry and agriculture, soil heavy metal pollution is becoming more and more serious. In the search for alternatives to traditional heavy metal treatment methods, attention has gradually turned to non-metallic minerals. These minerals usually have large reserves and wide distribution, and some have good environmental compatibility [7], making them ideal heavy metal remediation materials. Studies have shown that apatite has a special crystal-chemical structure that can adsorb and fix many heavy metal cations [2]. In recent years, the removal of heavy metals from aqueous solutions and soils by phosphorus-containing substances has achieved some success [8,9]. Therefore, exploring methods and ways of using phosphate ore to control heavy metal pollution in the environment can improve the economic and social benefits of phosphate ore application and provide new ideas and methods for heavy metal pollution remediation.
Phosphate rock composition and its application
Phosphorus minerals are widely distributed in nature, but few phosphate minerals can be developed and utilized. According to their genesis, phosphate rocks can be divided into apatite and phosphorite; representative phosphate minerals are shown in Table 1. The raw materials for the production of phosphate fertilizer and phosphorus compounds are mostly phosphate rock, which is widely used in agriculture, the chemical industry, medicine, food and other industrial sectors. According to statistics, 80-90% of the world's phosphate rock is used to produce phosphate fertilizer, about 3% to produce feed additives, about 4% to produce detergent, and a small amount is used in the chemical, light, national defence and other industries. With the rapid development of agriculture and the gradual increase in population, the demand for phosphate rock in the world and in China is increasing (Figure 1). Since 2000, world consumption of phosphate rock has been on the rise; in 2006, total consumption reached 46 million tons.
Activation of phosphate rock powder
For low-grade phosphate rock, whether it is used as phosphate fertilizer in agriculture or as a remediation material for heavy metal pollution in the environment, activation is needed to increase the content of available phosphorus. Traditional activation methods fall into three categories: physical, chemical and biological. In recent years, many methods have been developed to activate phosphate ore powder, such as dissolution of the powder by phosphate-solubilizing microorganisms, or modification with organic activators or surface-active minerals to release available phosphorus.
Phosphate-solubilizing microorganisms dissolve phosphate ore powder
Sackett et al. [12] first found in 1908 that some insoluble phosphate rock powder can be used by plants after being applied to the soil. Bacteria, fungi and actinomycetes in nature all play an important role in the utilization of soil phosphorus. In fact, there are large numbers of microorganisms in the soil and crop rhizosphere that can dissolve phosphate ore powder; these are collectively called phosphate-dissolving bacteria. When phosphate rock powder inoculated with phosphorus-dissolving bacteria is applied to the soil, the phosphorus available for crop uptake increases significantly, and crop biomass and yield increase. Wu Xiaoyan et al. [13] found that mixed rhizosphere bacteria had a better ability to decompose low-grade phosphate rock. The decomposition of inorganic phosphorus by microorganisms proceeds mainly through the secretion of organic acids and the complexation of heavy metal ions, or by lowering the soil pH, thereby promoting the release of soluble phosphorus [14].
Release of phosphate ore powder by activators
Modification with activators can promote the release of available phosphorus from phosphate rock powder. When certain complex organic materials are added to phosphate rock powder, organic acids are produced that promote the release of phosphorus. For example, Badr et al. [15] found that the soluble phosphorus content of phosphate rock powder modified with bagasse, furfural residue and other organic materials increased. Surface-active minerals can likewise promote phosphorus release from low-grade phosphate rock powder after activation modification: it has been reported that zeolite and bentonite promote such release, with bentonite (montmorillonite) reacting fully with the phosphate rock powder and binding its Ca2+, thereby activating the powder and increasing the content of available phosphorus [16].
Phosphorite powder activated by low molecular weight organic acids
Low molecular weight organic acids are carboxylic compounds with molecular weights below 500, such as acetic acid, oxalic acid, malic acid, citric acid and tartaric acid, and are widely distributed in soil. They carry abundant free carboxyl and hydroxyl groups, have high activity and water solubility, and exhibit chelation and coordination behaviour, so they can effectively promote the release of available phosphorus from insoluble phosphates. Liu Yonghong et al. [17] used formic acid, acetic acid, oxalic acid and tartaric acid to activate phosphate rock powder in indoor culture. The results showed that the activation of phosphorus in the rock powder increased with increasing organic acid concentration, and that the particle size of the powder affected the activation effect: the smaller the particle, the better the activation. The activation effect also increased with the liquid-to-solid ratio of activator to phosphate rock powder. Liu Tingting et al. [18] showed that the activation effect of oxalic acid on low-grade phosphorite powder was best at an oxalic acid concentration of 40 mmol/L. Gong Songgui et al. [19] studied the effect of oxalic acid, citric acid, tartaric acid and malic acid on the activation of inorganic phosphorus in red soil in indoor simulation tests. The results showed that, at the same concentration, the ability of organic acids to activate soil phosphorus followed the order citric acid > tartaric acid > malic acid; under the same acidity, aluminium-bound phosphorus (Al-P) was activated the most, iron-bound phosphorus (Fe-P) and calcium-bound phosphorus (Ca-P) less, and occluded phosphorus (O-P) the least. Other studies have also shown [20] that oxalic acid, citric acid, malic acid and similar acids can promote the dissolution of insoluble phosphates and increase the phosphorus content in soil solution severalfold. Wang Guanghua et al. [21] found that the type and concentration of organic acid determine the activation of phosphate ore powder, with the activation effect varying with organic acid concentration.
Organic acids play an important role in a series of material cycles, such as soil mineral weathering, nutrient transformation and soil biological activity. They usually promote the release of phosphorus through dissolution, chelation and related mechanisms, and the extent of release depends on the type and concentration of the acid. The mechanisms by which organic acids promote phosphorus release can be summarised as:

Ca10(PO4)6F2 + 12H^+ → 10Ca^2+ + 6H2PO4^- + 2F^-

CaX2·3Ca3(PO4)2 + organic acid → PO4^3- + Ca-organic acid complex (X = OH or F)

Al(Fe)·(H2O)3(OH)2H2PO4 + organic acid → PO4^3- + Al(Fe)-organic acid complex

Remediation effect and mechanism of phosphate rock powder and activated phosphate rock powder on heavy metals

As early as 1981, Suzuki et al. [22] showed that synthetic hydroxyapatite can effectively remove Pb2+ from water, and phosphorus-containing materials have since been widely used as heavy metal remediation agents. Phosphate remediation of heavy metals in soil works mainly by changing the chemical forms of the heavy metals in the soil system, reducing their biological activity and availability and thus their toxicity. Hu Jinhuai et al. [23] studied the remediation effect of phosphate rock powder of different particle sizes and dosages on heavy metals in soil; the smaller the particle size and the larger the dosage, the better the removal of heavy metals. Yin Fei et al. [24] showed that phosphate rock powder can significantly reduce the bioavailability of Pb, Cd, Cu, Zn and As in soil. Zhang Lijie et al. [25] found that phosphate rock powder can reduce the contents of Cu, Zn, Pb and Cd in polluted soil. Duan Ran et al. [26] found that with increasing amounts of oxalic acid and biochar, soil pH increased gradually and the bioavailability of Cd and Ni decreased. Xu Xuehui [27] found that applying oxalic acid-activated phosphate rock powder to soil can effectively reduce exchangeable Cd in the soil, with a corresponding decrease in Cd accumulation in plants. Jiang Guanjie et al. [28] studied the passivation effect of oxalic acid-activated phosphate rock powder on lead in latosol and found that the mass fraction of exchangeable Pb decreased significantly with increasing application of the powder.
The reaction mechanisms between phosphates and heavy metals are relatively complex and are determined mainly by the soil properties, the phosphate components and anions, and the heavy metal species. The main mechanisms include co-precipitation of dissolved phosphate with heavy metal ions, surface complexation and adsorption by the phosphate, and dissolution of hydroxyapatite followed by co-precipitation or ion exchange on the surface of the phosphate ore.
5 Summary
The problem of soil heavy metal pollution has attracted increasing attention, and solving it has become an urgent need of ecological environment construction in China and worldwide. Developing high-efficiency remediation technologies for heavy metal contaminated soil, and promoting their practical application, is of great importance for implementing soil environmental control plans, safeguarding the red line of cultivated land, and improving the quality and safety of agricultural products. China's phosphate rock resources are widely distributed; most are used for agricultural phosphate fertiliser manufacturing and only a small share in industry, national defence and the chemical industry, yet the majority are low-grade ores whose phosphorus content is not fully utilised. Given the rapid development of agriculture and the large-scale application of soil remediation in China, applying low-grade phosphate rock effectively to agricultural production and soil remediation engineering has great practical significance: it would help improve agricultural production and support the healthy recovery of heavy metal contaminated soil.
The design of the permeable brick permeability coefficient test method
Abstract: In accordance with the relevant standard, this paper presents a test device that fully satisfies the requirements for measuring the permeability coefficient of permeable bricks. The improved device organically combines the vacuum control system, the electric control system and the weighing tank, matching the test operation process more closely. It is strong and durable, easy to maintain, and little affected by human error; the device is systematic and automatically controlled, simple for a single operator to use, and sample preparation is simpler and safer. Water exchange is easy, the measuring instruments are intuitive and visual, and data collection and calculation are fast. The test data have high precision and small error, the efficiency is high, the demands on the operator are low, and the scope of application is wide.
Introduction
The utility model, in accordance with the national standard GB/T 25993-2010 (clauses 6.5 and 7.4 and Appendix C), upgrades each stage of the test work flow and integrates the stages into one device, optimising both function and appearance. The equipment for testing the permeability of permeable pavement bricks and permeable pavement slabs integrates the whole test operation process, and the overall structure is more practical and attractive. Its characteristics are embodied in four parts: the sample preparation section [1], the water system, the test device, and the circuit control system. The sample preparation section includes a water drill, the sample preparation area, a sink, and a protective dust removal device. The water system includes the airless (vacuum de-aerated) water device, the vacuum box for sample preparation, the insulated water source tank, the water supply system, and the water treatment system. The test unit includes the main frame, the overflow tank, the porous cylinder, the cylinder seal, the buffer tank, and the weighing water tank. The circuit control system covers temperature detection, vacuum control, water supply, and display and operation. In the overall schematic of the permeability coefficient measurement device, the sample is clamped in the water drill head, the water drill is installed in the sink, the sink is surrounded by the protective dust removal device, and the sink drain is connected to the precipitation tank of the water treatment system.
Fig. 2. Ceramic drill sample preparation
Vacuum de-aerated water device
This device produces the de-aerated water used in the test. Tap water passes through a filter and then through a valve into the vacuum tank, on which a mechanical vacuum gauge and a vacuum sensor are installed. The vacuum tank is connected via pipelines to two branches, one to the vacuum pump and one to the water pump. Between the vacuum tank and the vacuum pump are, in turn, a suction bottle, a steam trap and a valve; the water pump is followed by a pressure regulating valve and a valve, connected to the insulated water source tank [2].
Vacuum box for sample preparation
This box is used to evacuate and de-gas the samples to be measured. A water inlet valve is fitted to the box, which carries a mechanical vacuum gauge and a vacuum sensor; the box is connected by pipe to a steam separator, a valve and the vacuum pump. The water drained from the box flows to the precipitation tank of the water treatment system.
Water supply
Test water is pumped from the insulated water source tank to the test device above the porous cylinder.
Water treatment system
This system mainly treats the water used by the vacuum box for sample preparation and by the sample preparation section.

Fig. 1. Permeability coefficient test apparatus: 1. Water supply system 2. Overflow mouth 3. Overflow tank 4. Supports 5. Sample 6. Measuring cylinder 7. Water level difference 8. Transparent cylinder
Test device
The test device consists of the main frame, the overflow tank, the porous cylinder, the cylinder seal, the buffer tank and the weighing water tank (see the test apparatus analysis diagram).
Circuit control system
The circuit control system covers temperature detection, vacuum control, water supply, and display and operation.
The principle of analysis
The equipment adopts touch-screen control and outputs pressure, liquid level difference, flow rate and related data, achieving closed-loop control. The program automatically records equivalent stress, displacement and weight in real time, which is convenient for later research and improves the accuracy of the rupture modulus and fracture strength data of the ceramic bricks. After the sample material has cured, the sample is placed in the vacuum box and evacuated to −90 kPa ± 1 kPa, a vacuum that is maintained for 30 min. While maintaining the vacuum, enough water is added to cover the sample so that the water level is 10 cm above it; pumping is then stopped and the vacuum state is kept for 20 min. The sample is then removed, loaded into the permeability coefficient test device, and sealed to the porous cylinder connection. The water valve into the overflow tank [4] is opened so that the de-aerated water flows into the container; once water flows out of the overflow hole of the overflow tank, the inflow is adjusted to keep the porous cylinder at a fixed water level (150 mm). After the water at the overflow mouth of the overflow tank and at the overflow hole of the porous cylinder has stabilised, the water from the outlet is collected in the weighing tank for five minutes. The measurement is repeated three times and the results averaged [5].
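To make the calculation step concrete, the short sketch below turns the collected water volume into a permeability coefficient using the constant-head Darcy relation k = Q·L/(A·H·t). It is a minimal sketch in C under stated assumptions: the function and variable names, the sample dimensions and the collected volumes are illustrative placeholders, not values or formulas quoted from GB/T 25993-2010.

#include <stdio.h>

/* Constant-head permeability coefficient (Darcy's law).
 *   Q - volume of water collected by the weighing tank (m^3)
 *   L - thickness of the brick sample (m)
 *   A - cross-sectional area of the sample (m^2)
 *   H - water-level difference across the sample (m)
 *   t - collection time (s)
 * Returns k in m/s. */
double permeability_coefficient(double Q, double L, double A,
                                double H, double t)
{
    return (Q * L) / (A * H * t);
}

int main(void)
{
    /* Example: three 5-minute collections (t = 300 s) with a 150 mm
     * head, averaged as the test procedure prescribes. The collected
     * volumes and sample geometry below are illustrative only. */
    double runs[3] = {2.10e-3, 2.05e-3, 2.08e-3};  /* collected volumes, m^3 */
    double k_sum = 0.0;
    for (int i = 0; i < 3; ++i)
        k_sum += permeability_coefficient(runs[i], 0.06, 0.0177, 0.15, 300.0);
    printf("mean permeability coefficient: %.3e m/s\n", k_sum / 3.0);
    return 0;
}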
Conclusion
No previous equipment has covered the permeable brick permeability coefficient test as a whole. This test equipment covers the entire process from sample preparation, through evacuation to the vacuum state and water supply, to water weighing, as shown in Figure 1. Its development offers a fundamental solution to the problem: a single device achieving comprehensive detection of the permeability coefficient over the whole process, which completely eliminates the faults and discontinuities that arise between the separate steps when the experiment is carried out on separate devices.
Parallel-Computing Two-Way Grid-Nested Storm Surge Model with a Moving Boundary Scheme and Case Study of the 2013 Super Typhoon Haiyan
This study presents a numerical tool for calculating storm surges across offshore, nearshore, and coastal regions using the finite-difference method, a two-way grid-nesting function in time and space, and a moving boundary scheme, without any numerical filter adopted. Validation against the solitary wave runup on a circular island showed excellent agreement between the model results and the measurements for the free surface elevations and runup heights. After the benchmark validation, the 2013 Super Typhoon Haiyan event was selected to showcase the storm surge calculations with coastal inundation and flood depths in Tacloban. Catastrophic storm surges of about 8 m, and widespread storm-induced inundation due to Super Typhoon Haiyan, were found at the Tacloban Airport, corresponding to the findings of the field survey. In addition, anti-clockwise storm-induced currents were explored inside Cancabato Bay. Moreover, the effect of the nonlinear advection terms with fixed and moving shorelines, and the parallel efficiency, were investigated. By presenting a storm surge model for calculating storm surges, inundation areas, and flood depths with model validation and a case study, this study hopes to provide a convenient and efficient numerical tool for forecasting and disaster assessment under potentially severe tropical storms in a changing climate.
Introduction
Storm surges due to a tropical cyclone cause catastrophic impacts on coastal communities [1]. As climate change results in the occurrence of more severe typhoons [2,3], a better understanding of storm surges is required. However, storm surges involve multiple physical factors, which complicates hydrodynamical modeling. First, storm surges interacting with tides and wind waves may intensify coastal surge heights [4][5][6][7][8]. Second, when a tropical cyclone approaches the coast, surface winds amplify the amplitude of storm surges in shallow waters [7,8]. Lastly, the Coriolis force in a particular environment can generate non-negligible forerunner surges ahead of the main surges [9]. Hence, to counter the coastal storm surge impact with these multiple physical factors, an accurate numerical tool for evaluating the threat of storm surges is in high demand.
From a numerical point of view, storm surge models can be divided into two groups: (1) structured-grid models; and (2) unstructured-grid models. Structured-grid storm surge models are easily implemented and were developed for operational purposes at an early stage.
An example of a structured-grid model is SLOSH (Sea, Lake, and Overland Surges from Hurricanes [10]). However, these kinds of models may not handle the change of the wavelength of storm surges perfectly because of the limitations of a grid size [11], ignoring the advection/horizontal eddy diffusion terms when simulating coastal storm surges [10], or simulating storm surges without inundation areas and flood depths [10,11]. Thus, the grid-nesting function becomes a good option for a structured-grid storm surge model to consider better physics and descriptions for calculating coastal storm surges. Using the grid-nesting function, a structured-grid storm surge model can simulate storm surges from the ocean to the coasts with appropriate grid sizes for both deep and shallow waters. For example, some of these nested-grid models are NCTSM (Nested Coupled Tide-Surge Model [11]) and SuWAT (Surge, WAve, and Tide [12]). Besides using the grid-nesting function, structured models adopting a curvilinear grid improve the simulation of coastal storm surges around a complicated coastline, such as CH3D-SSMS (Curvilinear Hydrodynamic in 3D Storm Surge Modeling System [13]) and ROMS (Regional Ocean Modeling System [14]). Here, it is noted that a curvilinear-grid storm surge model sometimes adopts a relatively small domain; thus, boundary conditions need to be provided by a basin/regional-scale model [13]. In addition to structured-grid models, unstructured models have recently become more prevalent in simulating storm surges. Most unstructured-grid storm surge models are extended from ocean current models, allowing for the simulation of storm surges in only one computational grid with a more extensive range [7,8] or resolving the three-dimensional structure of storm-induced currents [15]. These models are, for example, SCHISM (Semi-implicit Cross-scale Hydroscience Integrated System Model [16]), ADCIRC (ADvanced CIRCulation [7,8]), and FVCOM (Finite Volume Community Ocean Model [15]). Although an unstructured-grid storm surge model allows the computational grid to change gradually from coarse to fine, it needs to shoulder the convergence and stability issues of the grid system [17], which usually relies on professional grid-maker software. This implies that the grid size of an unstructured grid is difficult to modify after the grid has been built, especially for grids near the coastline [17]. Hence, comparing the pros and cons of the structured-grid and unstructured-grid models, a structured-grid model with the grid-nesting function should be a user-friendly choice for users and developers to study storm surges from deep to shallow waters.
From a practical point of view, a storm surge model should be able to evaluate future storm surge hazards [18] or predict storm surges in a deterministic/ensemble manner before the landfall of a tropical cyclone [19,20]. First, since climate change has caused sea levels to rise and these water levels will become a severe issue in the near future [2,3], hazard maps for storm-induced inundations and floods are vital to coastal communities [21][22][23]. However, calculating storm-induced inundations and floods requires an accurate moving boundary scheme (i.e., moving shoreline scheme) with a stable nonlinear advection (convection) solver in a storm surge model, which usually raises numerical issues during the computations [24] and is usually ignored in operational forecasting [10]. Thus, storm surge calculations with a moving boundary scheme tracing flood depths and inundation due to a tropical cyclone are challenging. Second, the prediction of coastal storm surge heights and the warning of potential storm surges before a tropical cyclone's landfall are essential for global and local operational organizations; such predictive simulations include the Storm Surge Maximum of the Maximum (MOM) [11] and the Maximum Envelope of High Water (MEOW) [11,25]. Hence, an operational storm surge model shall simulate storm surges from the basin/regional scale to the coastal scale. In addition, the model shall be efficient enough to conduct ensemble simulations (such as MOM and MEOW) and update warning messages appropriately. Moreover, predicting storm-induced inundation areas and flood depths is more important than solely calculating coastal storm surge heights [10,11]. In summary, a storm surge model should have the ability to (1) evaluate potential storm surge hazard maps by simulating coastal storm surges with inundations and flood depths and (2) predict storm surges from offshore to nearshore with flood depths and inundation areas at sufficient operational efficiency.
After reviewing the structured/unstructured storm surge model development and the practical uses of storm surge predictions, a well-developed storm surge model should satisfy accuracy, stability, efficiency, flexibility, and convenience when performing storm surge computations from offshore to nearshore. Thus, this paper aims to present a numerical tool for calculating storm surges that satisfies the aforementioned requirements. The storm surge model developed in this study is extended from the well-known tsunami model COMCOT (COrnell Multi-grid Coupled Tsunami Model) [26,27] and allows the two-way grid-nesting function to increase the spatial resolution of nearshore regions. In addition, the moving boundary scheme with the nonlinear advection-term solver is expected to handle the climbing of coastal storm surges. Moreover, the performance and efficiency are enhanced by the parallel-computing technique. Here, we note that the linear equation model presented in Tsai et al. (2020) [28] is also based on COMCOT. However, that model ignores the nonlinear advection terms, horizontal eddy diffusions, moving boundary scheme, and grid-nesting function. Thus, the model presented in this study can be considered an extension of the 2020 model from both physical and numerical points of view. Furthermore, some other studies have also conducted storm surge modeling based on COMCOT [29,30]. Yet, they did not involve any parallel-computing function in the simulations.
This paper is organized as follows. In Section 2, we will first present the storm surge model used in this study from the governing equations, discretization, grid-nesting/moving boundary scheme, to the parallel-computing technique. Next, in Section 3, this study will demonstrate a benchmark problem for validating the moving boundary scheme and the advection-term solver. Afterward, in Section 4, we will show the 2013 Super Typhoon Haiyan event with storm surges, storm-induced currents, and inundations around Leyte Gulf and San Pedro Bay. Finally, in Section 5, we will conclude this study and illustrate some work for the future.
Methodology
This study carries out storm surge computation using a depth-integrated equation model with the Coriolis effect, bottom friction, and horizontal eddy diffusion [11,12]. The storm surge model is extended from the well-developed COMCOT tsunami model to achieve better simulation accuracy. The reasons for extending COMCOT from simulating tsunamis to storm surges are as follows: (1) COMCOT has been validated by several benchmark problems [27,31] and real tsunami events, such as the 2011 Japan Tohoku Tsunami [32] and the 2019 Indonesia Palu Tsunami [33]; thus, the accuracy of the numerical algorithm, such as the upwind advection solver, has been proven. (2) Both tsunamis and storm surges are in the regime of long waves; the governing equations can easily accommodate the change from tsunami to storm surge computation by adding some physical terms (e.g., wind shear stress terms) with slight modifications. (3) COMCOT has a two-way grid-nesting function in time and space, allowing seamless storm surge simulations from offshore to nearshore with flexible grid sizes/time steps and more accurate model results. (4) The parallel-computing function has been added to COMCOT for fast calculations; hence, this parallel-computing function can easily be included in storm surge calculations [32]. For the reasons mentioned above, the storm surge model developed in this study is nicknamed COMCOT-SURGE (COrnell Multi-grid COupled Tsunami-Storm SURGE). The governing equations, discretization, two-way grid-nesting, moving boundary scheme, and parallel-computing function of COMCOT-SURGE are illustrated in the following subsections.
Governing Equation of the Storm Surge Model
The mass and momentum equations of COMCOT-SURGE are presented as follows:

∂η/∂t + ∂P/∂x + ∂Q/∂y = 0  (1)

∂P/∂t + ∂(P²/D)/∂x + ∂(PQ/D)/∂y + gD ∂η/∂x − fQ = −(D/ρ) ∂Pa/∂x + (τsx − τbx)/ρ + Ah(∂²P/∂x² + ∂²P/∂y²)  (2)

∂Q/∂t + ∂(PQ/D)/∂x + ∂(Q²/D)/∂y + gD ∂η/∂y + fP = −(D/ρ) ∂Pa/∂y + (τsy − τby)/ρ + Ah(∂²Q/∂x² + ∂²Q/∂y²)  (3)

where η is the free surface elevation (unit: m), t is the time (unit: s), (x, y) are the spatial coordinates (unit: m), (P, Q) are the volume-flux components (i.e., depth-integrated volume fluxes per unit length) in the x- and y-directions (unit: m²/s), D is the total water depth (D = h + η; unit: m), h is the still water depth (unit: m), g is the gravitational acceleration (= 9.81 m/s²), f is the Coriolis parameter (f = 2Ω sin φ, with φ the latitude; unit: 1/s), Ω is the Earth angular velocity (= 7.2921 × 10⁻⁵ rad/s), Pa is the sea-level air pressure (unit: N/m²), (τsx, τsy) are the wind shear stresses (unit: N/m²), (τbx, τby) are the bottom frictional shear stresses (unit: N/m²), ρ is the water density (unit: kg/m³), and Ah is the horizontal eddy diffusion coefficient (unit: m²/s). Here we note that the advection terms of Equations (2) and (3) can be ignored where the nonlinear effect becomes insignificant in deep waters. Thus, the linear momentum equations are shown below:

∂P/∂t + gD ∂η/∂x − fQ = −(D/ρ) ∂Pa/∂x + (τsx − τbx)/ρ + Ah(∂²P/∂x² + ∂²P/∂y²)  (4)

∂Q/∂t + gD ∂η/∂y + fP = −(D/ρ) ∂Pa/∂y + (τsy − τby)/ρ + Ah(∂²Q/∂x² + ∂²Q/∂y²)  (5)

Manning's formula, from the conception of open-channel flow, is used to model the bottom friction:

τbx/ρ = (g n²/D^(7/3)) P √(P² + Q²),  τby/ρ = (g n²/D^(7/3)) Q √(P² + Q²)  (6)

where n is Manning's roughness coefficient (unit: s/m^(1/3)). The quadratic law is used to model the wind shear stresses on the water surface:

(τsx, τsy) = ρa Cd √(W10x² + W10y²) (W10x, W10y)  (7)

in which Cd is the wind-drag coefficient between the water surface and the air, ρa is the air density, W10 is the 10-m wind speed (unit: m/s), and (W10x, W10y) are the components of the 10-m wind speed in the x- and y-directions. The wind-drag coefficient proposed by Wu (1982) [34] is adopted, with the lower bound of WAMDI (1988) [35] imposed and an upper cap on the coefficient [12,15]:

Cd = (0.8 + 0.065 W10) × 10⁻³,  Cd ≥ 1.2875 × 10⁻³  (8)

Here we note that Cd is a dimensionless coefficient.
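As a concrete reading of Equations (7) and (8), the sketch below evaluates the wind-drag coefficient and the wind shear stresses. It is a minimal sketch in C; the function names are ours, and the upper cap value of 3.5 × 10⁻³ is an illustrative assumption, since the cap used by the model is specified only by reference [12,15].

#include <math.h>

/* Wind-drag coefficient following Wu (1982), with the WAMDI (1988)
 * lower bound. The 3.5e-3 cap below is an illustrative assumption. */
double wind_drag_coefficient(double w10)   /* w10: 10-m wind speed, m/s */
{
    double cd = (0.8 + 0.065 * w10) * 1.0e-3;  /* Wu (1982)           */
    if (cd < 1.2875e-3) cd = 1.2875e-3;        /* WAMDI lower bound   */
    if (cd > 3.5e-3)    cd = 3.5e-3;           /* assumed upper cap   */
    return cd;
}

/* Wind shear stress components by the quadratic law of Equation (7):
 * (tau_sx, tau_sy) = rho_a * Cd * |W10| * (W10x, W10y). */
void wind_stress(double w10x, double w10y, double rho_air,
                 double *tau_sx, double *tau_sy)
{
    double w10 = sqrt(w10x * w10x + w10y * w10y);
    double cd  = wind_drag_coefficient(w10);
    *tau_sx = rho_air * cd * w10 * w10x;
    *tau_sy = rho_air * cd * w10 * w10y;
}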
Discretization
COMCOT-SURGE adopts the finite-difference method to discretize the mass and momentum equations. Due to the leap-frog scheme, the mass and momentum equations are solved at separate time steps; a similar procedure has been used to discretize the governing equations of other storm surge models (e.g., Kim et al., 2015 [12]). The discretized mass equation is

η_{i,j}^{n+1/2} = η_{i,j}^{n−1/2} − (Δt/Δx)(P_{i+1/2,j}^{n} − P_{i−1/2,j}^{n}) − (Δt/Δy)(Q_{i,j+1/2}^{n} − Q_{i,j−1/2}^{n})

where Δt is the time step (unit: s) and (Δx, Δy) are the grid sizes in the x- and y-directions (unit: m). The momentum equations are discretized using the upwind scheme for the advection terms, an implicit form for the bottom friction terms, and the central difference form for the horizontal eddy diffusions; the coefficients of the upwind advective terms switch according to the sign of the local volume fluxes. The total water depths at the cell centers are evaluated by

D_{i,j}^{n+1/2} = h_{i,j} + η_{i,j}^{n+1/2}

Furthermore, the total water depths at discharge points (i.e., the locations of the volume-flux components) are calculated as the averages of the two neighbouring cell centers, e.g.,

D_{i+1/2,j}^{n+1/2} = 0.5 (D_{i,j}^{n+1/2} + D_{i+1,j}^{n+1/2})

The total water depths of discharge points at t = (n + 1/2)Δt are not calculated in the discretized mass equation; thus, they are evaluated from both time and space averages:

D_{i,j+1/2}^{n} = 0.25 (D_{i,j}^{n−1/2} + D_{i,j+1}^{n−1/2} + D_{i,j}^{n+1/2} + D_{i,j+1}^{n+1/2})

In a similar manner, the volume-flux components Q_{i+1/2,j} and P_{i,j+1/2} required by the cross advection terms are taken as four-point averages of the neighbouring values. We note that extremely large bottom frictions will occur when the total water depth is close to zero. Thus, a minimum total water depth threshold is required in the simulations, which is 10⁻⁵ m in this study. Following a similar procedure, the total water depth threshold is also adopted in calculating the upwind-discretized advection terms.
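The leap-frog mass update above maps directly onto a double loop over the grid. The following is a minimal sketch in C on a staggered grid; the array layout, index convention and function name are assumptions, and boundary conditions and grid nesting are omitted.

/* One leap-frog update of the discretized mass equation:
 * eta^{n+1/2}_{i,j} = eta^{n-1/2}_{i,j}
 *   - dt/dx * (P^n_{i+1/2,j} - P^n_{i-1/2,j})
 *   - dt/dy * (Q^n_{i,j+1/2} - Q^n_{i,j-1/2}).
 * Assumed staggering: P[j][i] sits on the right face and Q[j][i]
 * on the top face of cell (i,j); edge cells are handled elsewhere. */
void update_free_surface(int nx, int ny, double dt, double dx, double dy,
                         double **eta, double **P, double **Q)
{
    for (int j = 1; j < ny; ++j)
        for (int i = 1; i < nx; ++i)
            eta[j][i] -= dt / dx * (P[j][i] - P[j][i - 1])
                       + dt / dy * (Q[j][i] - Q[j - 1][i]);
}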
Grid Nesting in Time and Space
COMCOT-SURGE adopts the two-way grid-nesting scheme to increase the resolutions in time and space for calculating storm surges. When adopting a smaller grid size, a corresponding time step following the CFL (Courant-Friedrichs-Lewy) condition is required to maintain the numerical stability in time and space:

Cr = √(g h_max) Δt / Δs

in which Cr is the Courant number, h_max is the maximum still water depth (unit: m), and Δs is the diagonal distance of two neighbouring grid cells (Δs = √2 Δx in a square grid; unit: m). As discussed in Liu et al. (1998) [26], the Courant number of the depth-integrated model using the leap-frog finite-difference method in the linear regime is approximately 0.70. For more practical wave modelling, the Courant number is suggested as 0.65 for the linear model and 0.35 for the nonlinear model (see Wang and Power, 2011 [36]). It is noted that COMCOT-SURGE always needs to satisfy this condition when using the explicit leap-frog finite-difference method. Figure 1 shows an example of the grid-nesting function for the spatial domain with a grid-size ratio of 1:3 between a coarse and a fine grid at the upper left and lower right corners. As shown in Figure 1, the cells of the inner grid (i.e., the fine grid) fully occupy the cells of the outer grid (i.e., the coarse grid). The illustration here only explores the upper left and lower right corners between the coarse and fine grids, but the rest of the overlapped areas of the nested-grid domain behave in the same manner. For waves propagating in a nested-grid domain, a grid-size ratio from 3 to 5 is recommended to obtain a smooth transition across the connected boundaries between multiple grids [36]. It is also noted that adopting a larger grid-size ratio requires a smaller time step in the inner grid, restricted by the CFL condition. The grid-nesting function adopted in this study allows the outer and inner grids to use different time steps, but only with a time-step ratio of 2. Figure 2 illustrates the procedure of calculating the mass and momentum equations in the outer and inner grids with different time steps. After the volume-flux components have been computed at t = nΔt, the free surface elevations in the outer grid are evaluated by the mass equation (see Step 1 of Figure 2). After interpolating the volume-flux components from the outer grid to the inner grid (see Step 2 of Figure 2), the free surface elevations of the finer grid at t = (n + 1/4)Δt are solved by the mass equation (see Step 3 of Figure 2), and the volume-flux components at t = (n + 1/2)Δt are evaluated by the momentum equations (see Step 4 of Figure 2). To solve the free surface elevations of the finer grid at t = (n + 3/4)Δt, the volume-flux components along the connected boundaries at t = (n + 1/2)Δt are required. To obtain the volume-flux components along the connected edges at t = (n + 1/2)Δt, they are interpolated in space and averaged in time (see Step 5 of Figure 2) from the volume-flux components of the outer grid at t = nΔt and t = (n + 1)Δt. It is noted here that the volume-flux components of the outer grid at t = (n + 1)Δt have not been computed at that time; hence, they are "predicted" by solving the momentum equations with the information of the free surface elevations at t = (n + 1/2)Δt. With the volume-flux boundary conditions at t = (n + 1/2)Δt, the free surface elevations of the inner grid at t = (n + 3/4)Δt can be smoothly solved (see Step 6 of Figure 2).
Subsequently, the volume-flux components of the inner grid at t = (n + 1)Δt are evaluated by the discretized momentum equations (see Step 7 of Figure 2). If the two-way grid nesting is activated, the time-averaged free surface elevations of the finer grid at t = (n + 1/4)Δt and t = (n + 3/4)Δt are extrapolated back to the outer grid (see Step 8 of Figure 2). At the final step (Step 9 of Figure 2), the free surface elevations of the outer grid at t = (n + 1)Δt are solved.
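In practice, the CFL condition fixes the largest admissible time step for a given grid. A small helper in the spirit of the condition above might look as follows; the function name and interface are assumptions.

#include <math.h>

/* Largest stable time step from the CFL condition
 * Cr = sqrt(g*h_max) * dt / ds <= cr_allow, with ds = sqrt(2)*dx
 * for a square grid. cr_allow is 0.65 for the linear model and
 * 0.35 for the nonlinear model (Wang and Power, 2011). */
double max_stable_dt(double dx, double h_max, double cr_allow)
{
    const double g = 9.81;                 /* gravitational acceleration */
    double ds = sqrt(2.0) * dx;            /* diagonal grid distance     */
    return cr_allow * ds / sqrt(g * h_max);
}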
Moving Boundary Scheme
The moving boundary scheme adopted in this study allows the model not only to trace the moving shoreline with the free surface elevations, but also to calculate the volume-flux components with the nonlinear shallow water equation model. Figure 3 shows a one-dimensional illustration of the adopted moving boundary scheme. As shown in Figure 3a, the shoreline stops between the grid cells i and i+1 because the free surface elevation at the grid cell i is not greater than the land elevation at i+1/2; thus, the volume-flux component at i+1/2 will not be calculated. As shown in Figure 3b, in the other scenario, the free surface elevation at cell i is greater than the land elevation at i+1/2, so a flood depth exists and the volume-flux component at i+1/2 is calculated. Afterward, the free surface elevation at i+1 will be evaluated, as a non-zero volume-flux component at i+1/2 exists. In this particular illustration, the flood depth, Df, is 0.5 times the illustrated land-elevation step. We note here that the time label of this one-dimensional illustration is omitted. The moving boundary scheme focuses on the calculation of the volume-flux components with the movement of the shoreline. The nonlinear shallow water equations are adopted for calculating the volume-flux components P and Q, with the inland flood depth Df (unit: m) replacing the total water depth, D, when the flow crosses from dry cells to wet cells. If the volume-flux components exist between two wet cells, the momentum calculations return to the standard procedures.
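The wet/dry decision of Figure 3 reduces to a simple comparison per cell face. The one-dimensional sketch below illustrates it in C; the names and storage convention are assumptions, not the model's actual code.

/* One-dimensional sketch of the moving boundary decision of Figure 3.
 * eta_wet is the free surface at the wet cell i and zface the land
 * elevation at i+1/2. The volume flux at the face is computed only
 * when water can cross it, using the inland flood depth Df in place
 * of the total water depth D. */
double face_flux_depth(double eta_wet, double zface, int *active)
{
    if (eta_wet <= zface) {     /* Figure 3a: shoreline stops here  */
        *active = 0;            /* flux at i+1/2 is not calculated  */
        return 0.0;
    }
    *active = 1;                /* Figure 3b: flooding occurs       */
    return eta_wet - zface;     /* flood depth Df replaces D        */
}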
OpenMP Parallel Computing
COMCOT-SURGE supports the OpenMP (Open Multi-Processing) parallel-computing technique on a workstation, cluster, or personal computer, which has been used in iCOMCOT (i.e., the cloud computing platform of COMCOT; see Lin et al., 2015 [32]). Using OpenMP, the parallelized version of COMCOT is about ten times faster than the serial version [32]. Thus, this OpenMP computing technique is extended from COMCOT to COMCOT-SURGE for calculating storm surges. The OpenMP parallel-computing technique is applied to the do-loops when solving the mass and momentum equations and the forcing terms (i.e., sea-level pressure gradient terms, wind shear stress terms, Coriolis terms, and horizontal eddy diffusion terms) during storm surge calculations.
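As an illustration of the do-loop parallelism described above, the sketch below parallelizes the mass-equation update with an OpenMP directive. It is a minimal sketch in C, not the model's actual source; the flattened array layout is an assumption.

#include <omp.h>

/* OpenMP-parallel version of the mass-equation update; each (i,j)
 * update is independent, so the nested loop can be split across
 * threads. Arrays are flattened to 1-D of size nx*ny. */
void update_free_surface_omp(int nx, int ny, double dt, double dx,
                             double dy, double *eta,
                             const double *P, const double *Q)
{
    #pragma omp parallel for collapse(2) schedule(static)
    for (int j = 1; j < ny; ++j)
        for (int i = 1; i < nx; ++i)
            eta[j * nx + i] -= dt / dx * (P[j * nx + i] - P[j * nx + i - 1])
                             + dt / dy * (Q[j * nx + i] - Q[(j - 1) * nx + i]);
}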
Introduction
The experiment of the solitary wave runup on a circular island, designed to study the unexpected tsunami impacts on the lee side of a small island [27,37], provides a good data set to validate the model. The experimental data for the solitary wave runup on a circular island have been widely used to validate hydrodynamical models such as shallow-water equation models [38] and Boussinesq-type models [39]. Thus, this study adopts the benchmark problem to examine: (1) the evolution of the free surface elevations near the coastline during the runup and rundown; (2) the model performance and numerical stability when tracing the moving shoreline with higher wave nonlinearity. Figure 4 shows the computational domain from top and side views. As illustrated in Figure 4a, the center of the circular island is located at x = 15 m and y = 13 m in a wave basin with dimensions of 30 m in width and 25 m in length. The slope from the base of the island to the top is 1:4 (see Figure 4b). The top and base radii are 1.1 m and 3.6 m, respectively (see Figure 4b). The still water depth is 0.32 m, and the height from the still water surface to the island's top is 0.305 m. Four wave gauges recording the free surface elevations are available for the model validation, and the locations of the wave gauges can be found in Table 1. The incident solitary wave, generated by the wavemaker, propagates along the +y direction from the bottom boundary of the wave basin (the incident wave direction is indicated in Figure 4a). The formula of the incident solitary wave [40] is

η(y, t) = A sech²[K(y − y₀ − Ct)]  (9)

and

K = √(3A/(4h³)),  C = √(g(h + A))  (10)

where A is the incident solitary wave height (unit: m), K is the effective wavenumber (unit: 1/m), and C is the long-wave celerity (unit: m/s).
Computational Setting
Two-layer nested computational domains (Grids 01 and 02) are adopted for the solitary wave runup on the circular island, with the grid-size ratio within the recommended range [36]. Grid 02 accepts the volume-flux components of Grid 01 as its boundary conditions, and it is noted that the two-way grid-nesting function is activated in time and space between Grids 01 and 02. The water density used in this benchmark simulation is 1000.0 kg/m³ (i.e., the reference density of pure water). Briggs et al. (1995) [37] presented experiments for three different wave nonlinearities (A/h = 0.045, 0.091, and 0.181), but this study will only showcase the medium case. The reasons are: (1) the weakest wave-nonlinearity case (i.e., A/h = 0.045) may not be able to highlight the nonlinear effect during runup and rundown; (2) the largest wave-nonlinearity case (i.e., A/h = 0.181) has been found to have significant wave breaking around the circular island, which is not the main point of this study. Thus, this paper sheds light on the medium case (i.e., A/h = 0.091).
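For reference, the incident solitary wave profile of Equations (9) and (10) can be evaluated as below; a minimal sketch in C, with the coordinate measured along the propagation direction and the offset y0 an assumed parameter.

#include <math.h>

/* Incident solitary wave profile:
 * eta(y,t) = A * sech^2( K * (y - y0 - C*t) ),
 * with K = sqrt(3A / (4h^3)) and C = sqrt(g*(h + A)).
 * For the medium benchmark case, A/h = 0.091 with h = 0.32 m. */
double solitary_wave(double y, double t, double y0, double A, double h)
{
    const double g = 9.81;
    double K = sqrt(3.0 * A / (4.0 * h * h * h)); /* effective wavenumber */
    double C = sqrt(g * (h + A));                 /* long-wave celerity   */
    double s = 1.0 / cosh(K * (y - y0 - C * t));  /* sech                 */
    return A * s * s;
}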
Computed Free Surface Elevations
The computed free surface elevations on the frontal and lee sides of the circular island are presented in Figures 5 and 6, respectively. At t = 8.5 and 9.0 s, the amplitude of the solitary wave becomes higher through the shoaling effect (see Figure 5a,b). At t = 9.5 and 10.0 s, the amplified solitary wave inundates the frontal side of the circular island and generates a significant runup (see Figure 5c,d). In addition, the trapped waves propagate along the coastline, and cylindrical wave patterns occur in front of the circular island after the wave runup is generated on the frontal side of the island (see Figure 5). At t = 11.0, 11.5, 12.5, 13.5, and 14.0 s, the trapped waves propagate along the coastline (see Figure 6a-c), collide again (see Figure 6d), and generate another backward runup behind the island (see Figure 6e). It is noted that the inundated area on the frontal side is wider than that on the lee side. At t = 14.5, 15.0, and 16.0 s, the trapped waves propagate along the coastline again (see Figure 6f-h).
Time History of Free Surface Elevations
The measurements of free surface elevations at wave gauges on the frontal, lateral, and lee sides of the circular island (G6, G9, G16, and G22) from the laboratory experiment of Briggs et al. (1995) are used to validate the model results. The measured water-level data, sampled at a time interval of 0.04 s, can be downloaded from the NOAA Center for Tsunami Research (website: https://nctr.pmel.noaa.gov/benchmark/Laboratory/Laboratory_ConicalIsland/index.html, accessed on 10 January 2022). Figure 7 presents the comparisons between model results and measurements. In general, the computed free surface elevations agree well with the measured water levels in terms of the wave heights, arrival times, and wave shapes of the leading waves at G6, G9, G16, and G22 (see Figure 7). The correlation coefficients between the model results and measurements at G6, G9, G16, and G22 are 93.23%, 92.97%, 95.10%, and 92.87%, respectively. After the leading waves, the wave depressions are underpredicted by the numerical model relative to the measurements, which is consistent with the findings of other depth-integrated model results [27,39]; this phenomenon has been discussed by Lynett et al. (2002) [39]. In addition, Titov and Synolakis (1998) [38] pointed out that wave breaking occurs on the lee side of the circular island; however, the breaking is confined to the lee side and does not seem to seriously affect the model prediction, as the gauge comparison at G22 shows (see Figure 7).
Figure 7. Comparisons between the computed free surface elevations (red lines) and measured water levels (black dots) (A/h = 0.091) at wave gauges G6, G9, G16, and G22, respectively. The wave gauge locations can be found in Table 1.
Runup Height and Inundation Area
The runup heights between the numerical results and measurements are projected onto the coordinates of the circular island in Figure 8. Here, the runup height is calculated as the maximum land elevation that waves arrive at from the original shoreline (z = 0 in the model simulations). As shown in Figure 8, the model results agree well with the measured runup heights on the frontal side of the island. However, on the lee side of the circular island where the wave breaking is found, the maximum runup heights predicted by the model are slightly lower than the measurements by about 0.5-1.0% (see Figure 8). Despite this, the leading order phenomena (i.e., significant runups on the lee side of the circular island) are well computed by the model. Thus, this implies that the maximum runup and inundation ranges are mainly contributed by the horizontal velocities rather than the vertical velocity changes due to the wave breaking in this particular runup case [38]. Hence, the model used here, without considering wave breaking, perfectly matches the measurements.
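Operationally, the runup height defined above can be extracted from the field of maximum computed free surface elevations. The sketch below shows one way to do it in C; the array names and the sign convention (land elevation positive above the still water line, z = 0) are assumptions.

/* Runup height: the maximum land elevation reached by water during
 * the simulation. maxeta[j][i] holds the maximum computed free
 * surface elevation and z[j][i] the land elevation; a land cell is
 * counted as wetted when maxeta exceeds z there. */
double runup_height(int nx, int ny, double **maxeta, double **z)
{
    double runup = 0.0;
    for (int j = 0; j < ny; ++j)
        for (int i = 0; i < nx; ++i)
            if (z[j][i] > 0.0 && maxeta[j][i] > z[j][i] && z[j][i] > runup)
                runup = z[j][i];   /* highest wetted land elevation */
    return runup;
}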
Introduction of 2013 Typhoon Haiyan
Typhoon Haiyan, also known by its local name Yolanda, was the strongest storm of 2013; it struck the Philippines with catastrophic storm surges, winds, and waves, and caused more than 6,300 casualties [41]. The Typhoon Haiyan event has three unique features: (1) a record-breaking wind speed of more than 310 km/h [42]; (2) a fast forward motion of the storm of up to 41.0 km/h [43]; (3) notable induced storm surges and floods in Leyte Gulf and San Pedro Bay [44][45][46][47]. These unique features make the 2013 Typhoon Haiyan a good case study for highlighting the model performance in predicting coastal storm surges and inundation areas.
Computational Setting
Three-layer nested-grid computational domains are adopted in this study to perform the storm surge simulation of the 2013 Typhoon Haiyan (see Figure 9). Table 2 tabulates each nested grid's computational domain, grid size, and corresponding time step. Using the grid-nesting function, these three layers can simulate storm surge motions in the offshore, nearshore, and coastal regions with appropriate grid sizes. The grid-size ratio between the outer grid and the inner grid is 3. In addition, we note here that all the settings of the computational grids satisfy the CFL condition. The grid numbers of Domains D01, D02, and D03 are 69,276, 42,799, and 19,936, respectively; the total grid number is 132,011. The bathymetry data used in this study are from GEBCO (The General Bathymetric Chart of the Oceans) [48], which has a resolution of 15 arc-seconds in the latest GEBCO 2021 grid. The input 10-m winds and sea-level pressure fields are from the 1980 Holland Wind Model (Holland, 1980) [49] with the best-track data of JMA (Japan Meteorological Agency). The methodology for generating storm winds and surface air pressure fields, as well as the validation of these input meteorological fields against observations, is elaborated in the predecessor of this study [28]. The numerical gauge stations for the time-series outputs are listed in Table 3. The switches for the numerical calculations are configured to accurately simulate storm surges in the offshore, nearshore, and coastal regions. This study assumes that a storm surge propagating from the Western Pacific Ocean is in the linear regime; thus, the linear momentum equations are adopted to solve the offshore-scale storm surge motion. Afterward, when the storm surge propagates to the nearshore and coastal regions, the nonlinear effect becomes more important; hence, the nonlinear momentum equations are adopted. Moreover, the inundation areas and flood depths shall be investigated near the Tacloban DZR (Daniel Z. Romualdez) Airport; thus, the moving boundary scheme (i.e., moving shoreline scheme) is turned on to trace the shoreline and inland floods. In summary, Table 4 tabulates the switches for the advection terms, the forcing terms (sea-level pressures, wind shear stresses, Coriolis force, bottom frictions, horizontal eddy diffusions), and the moving boundary scheme for Domains D01, D02, and D03. In addition, some coefficients are required during the computations: the horizontal eddy diffusion coefficient is 100.0 m²/s [50]; the water density is the reference density of seawater (= 1025 kg/m³) [51]; the Manning's coefficient is 0.025 s/m^(1/3) [12]. Table 4. Switches for advection term, forcing terms, and moving boundary scheme. O indicates the switch is turned on; X indicates the switch is turned off.
Storm Surges and Storm-Induced Currents
The Haiyan-induced storm surges propagate from the Western Pacific Ocean to Leyte Gulf, and then to San Pedro Bay within the computational domains of interest. Figure 10 shows snapshots of the computed storm surges in Domain D01. When Super Typhoon Haiyan generates offshore winds, negative storm surges occur accordingly in Leyte Gulf (see Figure 10a). When Typhoon Haiyan makes landfall, higher storm surges are generated near the coastline of Leyte Island and San Pedro Bay (see Figure 10b,c). With southeastern storm winds in San Pedro Bay, storm surges of more than 7 m occur (see Figure 10d), causing dramatic impacts and damage to coastal communities in Tacloban (see, e.g., Mori et al., 2014 [52]; Soria et al., 2016 [46]). Figure 11 shows coastal storm surges and floods near Cancabato Bay and the Tacloban DZR Airport, presented by the computed free surface elevations. The storm surges induced by Haiyan penetrate from Cancabato Bay to the coasts at 23:00 UTC on 7 November 2013 (see Figure 11a). Afterward, Haiyan-induced storm surges come from the east side of the Tacloban DZR Airport (see Figure 11b). After about 30 min to 1 h, larger coastal storm surges from San Pedro Bay enter Cancabato Bay and the east side of the Tacloban DZR Airport; thus, more inundation areas are found in the simulations (see Figure 11c,d). We note here that the elevation data used in the GEBCO 2021 grid may not represent accurate coastal shapes and infrastructure such as sea walls. In addition, the nearshore bathymetry is sensitive for hydrodynamical computations [53]. Hence, this particular case is used to showcase the model performance and the ability of COMCOT-SURGE in the inundation calculation. Figure 12 presents snapshots of the storm-induced current velocity fields. As shown in Figure 12a,b, an anti-clockwise vortex occurs inside Cancabato Bay. As found in the simulations, the maximum storm-induced current speed is about 3.5 m/s near the north tip of the Tacloban DZR Airport (see Figure 12b). Afterward, the storm-induced flows enter the Tacloban DZR Airport from both the east and north sides at 00:00 UTC on 8 November 2013 (Figure 12c). The storm-induced currents change to southward directions at 00:00 UTC on 8 November 2013, while larger inundation areas are found in the simulation results (Figure 12d).
Maximum Storm Surges and Flood Depths
After exploring the storm surges and storm-induced currents around Cancabato Bay and the Tacloban Airport, this subsection further investigates the maximum storm surges and flood depths, which are essential to coastal communities. Figure 13a shows the maximum storm surges around the coasts of Leyte Island. As shown in Figure 13a, the maximum computed surge heights are amplified from 3 m to 8 m in San Pedro Bay and along the coasts of Leyte Island. This amplification is attributed to Typhoon Haiyan's southeastern winds driving the surge from Leyte Gulf into San Pedro Bay, as discussed for Figure 10. In addition, more detailed discussions of the storm surges in relation to the storm winds can be found in Tsai et al. (2020) [28]. Furthermore, the amplification of the maximum storm surges from Leyte Gulf to San Pedro Bay can also be found in the discussions of Mori et al. (2014) [52], Kim et al. (2015) [12], and Soria et al. (2016) [46]. Figure 13b presents the maximum computed inland flood depths, which are the difference between the maximum computed free surface elevation and the land elevation. As shown in Figure 13b, the largest flood depths, of about 6 m, occur at the Tacloban DZR Airport. Additionally, the coastal regions around Cancabato Bay suffered significant floods of about 5 m. Although the land elevation from the GEBCO 2021 grid does not resolve detailed infrastructure such as sea walls, the simulation results still expose dramatic flood depths around Cancabato Bay and the Tacloban DZR Airport, corresponding to the field survey [46][47][48][49]. Basically, the storm-induced inundation areas agree with the extents measured by Tajima et al. (2014) [44] (see Figure 13b). Table 5 tabulates the flood depths from the measurements and the model predictions of COMCOT-SURGE. Our storm surge model shows a good match with the flood depth at P2 but lower predictions at P1 and P3. On the one hand, we need to mention that a detailed digital elevation model (DEM) is not involved in our model; thus, the accuracy of the inundation prediction can be improved once a DEM or more accurate bathymetry is considered [53]. On the other hand, the measured data may have some uncertainties, which may affect the validation of the inundation areas and flood depths. For example, P1, P2, and P3 have the reliabilities C, B, and A in the field survey, respectively (A: clear mark with small error; B: unclear mark but small error; C: unclear mark with large error [44]). Hence, the data (i.e., DEM, bathymetry, and field survey) are essential when discussing the storm-induced inundations.

Figure 13. Maximum computed storm surges (a) and inland flood depths (b); the measurement locations are listed in Table 5. The yellow rectangles with the dashed black line mark the inundation areas of Tajima et al. (2014) [44].
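The flood-depth definition used in Figure 13b, the maximum computed free surface elevation minus the land elevation on land cells, can be computed as in the following sketch; names and conventions are assumptions.

/* Maximum inland flood depth: the difference between the maximum
 * computed free surface elevation maxeta[j][i] and the land
 * elevation z[j][i], evaluated only on land cells (z > 0). */
void flood_depth(int nx, int ny, double **maxeta, double **z,
                 double **depth)
{
    for (int j = 0; j < ny; ++j)
        for (int i = 0; i < nx; ++i)
            depth[j][i] = (z[j][i] > 0.0 && maxeta[j][i] > z[j][i])
                        ? maxeta[j][i] - z[j][i]
                        : 0.0;   /* dry or offshore cell */
}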
Time Series of Storm Surges
The time series of computed storm surges at the specified numerical gauge stations (see Table 3) are illustrated in Figure 14. As shown in Figure 14, negative storm surges occur from 18:00 UTC on 7 November 2013, and the water levels increase dramatically after 22:00 UTC on 7 November 2013 (see Basey, Tacloban, Palo, Tanauan, and Dulag). The maximum storm surge height, of about 7.5 m, is found at Tacloban (see Figure 14). This phenomenon is attributed to (1) the strong southeastern winds from Leyte Gulf to San Pedro Bay and (2) the changes in the wind directions due to the typhoon's landfall. As discussed by Soria et al. (2016) [46], the rise of the water level from trough to crest within 1 to 2 h corresponds to the tsunami-like waves described by local residents, which is also discussed in Tsai et al. (2020) [28]. It is noted here that the maximum storm surges in the time-history data decay from north to south (see the Basey, Palo, Tanauan, and Dulag stations of Figure 14). In addition, after the largest amplitude, the following multiple crest-to-trough heights also diminish from north to south (see the Basey, Palo, and Tanauan stations of Figure 14). The gauge locations can be found in Table 3.
Numerical Experiments
Section 4 has explored the 2013 Haiyan storm surges and storm-induced flood/inundation areas using the three-layer nested-grid domain with the moving boundary scheme. This section further discusses the effects of the nonlinear advection term and of a fixed versus a moving shoreline, which are essential in simulating coastal storm surges, disaster assessments, and storm surge forecasting. In addition, the model efficiency boosted by OpenMP will also be investigated in this section.
Linear/Nonlinear Equations with a Fixed or Moving Shoreline
When storm surges propagate into shallow waters, the surge amplitude is increased by the wind shear stresses, wave radiation stresses, or bathymetry effects. Moreover, storm surges penetrate from offshore to nearshore and inundate coastal communities. At these stages, the nonlinear effect may play an important role in simulating coastal storm surges; however, it is usually ignored in storm surge simulations or forecasting [10]. Additionally, the moving boundary (i.e., moving shoreline) with increased nonlinearity becomes challenging in storm surge simulations. Thus, this subsection explores the nonlinear effect by conducting numerical experiments with fixed and moving shorelines. Figure 15 explores the maximum storm surges of the nonlinear equation model with the moving/fixed shorelines and of the linear equation model with the fixed shoreline. The inland storm surges (i.e., free surface elevations of storm-induced floods) in Figure 15a are masked to allow comparison between the simulations. As shown in Figure 15, the maximum storm surges using the moving shoreline inside Cancabato Bay are about 0.5 m lower than in the fixed-shoreline case. This corresponds to the argument of Kowalik and Murty (1993) [54] that water-level predictions with a fixed shoreline are higher than those with a moving shoreline in a numerical model. However, the difference between the nonlinear and linear equation models under the fixed shoreline is relatively small, indicating a 0.1-0.2 m difference inside Cancabato Bay (see Figure 15b,c). Figure 16 further investigates the storm-induced current fields of the nonlinear and linear equation models using the fixed shoreline. At 00:00 UTC on 8 November 2013, the nonlinear model shows an eddy inside Cancabato Bay, with its center located off the west coast of the Tacloban Airport (see Figure 16a). This eddy is also found in the model results using the nonlinear equation model with the moving boundary scheme (see Figure 12c). However, at 00:00 UTC on 8 November 2013, the simulation of the linear model with the fixed shoreline only shows anti-clockwise currents inside Cancabato Bay, and no eddy occurs (see Figure 16c). At 00:30 UTC on 8 November 2013, the storm-induced currents propagate along the east coasts of the Tacloban Airport in both the nonlinear and linear models with the fixed shoreline (see Figure 16b,d). However, the results using the moving boundary scheme show different flow patterns: since the computed storm surges have inundated the Tacloban Airport, the flows pass over the inland regions southeastward (see Figure 12d).
Parallel-Computing Efficiency
This section explores the model efficiency on a workstation with an AMD Ryzen 9 3900X CPU (12 cores, i.e., 24 threads) and 64 GB of RAM under the Linux-based CentOS 8 operating system. The clock time of each computation is calculated as the elapsed time between the first and the last output files. Here, we note again that the OpenMP algorithm is applied to the do-loop calculations of the discretized mass and momentum equations and the forcing terms, but the input and output procedures are not accelerated in the model. Figure 17 shows the clock time corresponding to the number of threads used. As shown in Figure 17, the efficiency is dramatically enhanced when increasing from 2 to 12 threads, but stops improving beyond 12 threads (i.e., 6 cores). This implies that the OpenMP technique helps to enhance the storm surge calculation from the serial version to the parallel version; however, beyond 12 threads in this particular case, the threads are overused. As shown in the running test on the workstation, the clock time decreases along a roughly exponential curve (see Figure 17). Table 6 tabulates the corresponding clock times.
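For reference, the speedup and parallel efficiency in such a scaling test follow from S(p) = T(1)/T(p) and E(p) = S(p)/p. The sketch below computes both from measured clock times; the values in the arrays are placeholders, not the measurements of Figure 17 or Table 6.

#include <stdio.h>

/* Parallel speedup S(p) = T(1)/T(p) and efficiency E(p) = S(p)/p
 * from clock times measured at different thread counts. */
int main(void)
{
    int    threads[] = {1, 2, 4, 8, 12, 24};
    double clock_s[] = {3600.0, 1900.0, 1050.0, 640.0, 480.0, 470.0};
    for (int k = 0; k < 6; ++k) {
        double S = clock_s[0] / clock_s[k];
        printf("threads=%2d  speedup=%5.2f  efficiency=%5.2f\n",
               threads[k], S, S / threads[k]);
    }
    return 0;
}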
Conclusions
This study has developed a numerical tool, nicknamed COMCOT-SURGE, with a two-way grid-nesting function in time and space and a moving boundary scheme for tracing the shoreline, calculating coastal storm surges and inundation areas without any numerical filter adopted. In the validation of the model performance and numerical stability against the solitary wave runup on a circular island, the time series of the free surface elevations and the runup heights matched the measurements very well. In addition, the free surface evolution of the incident solitary wave has been explored in a three-dimensional presentation with the runup, rundown, and trapped wave propagation around the circular island. After the benchmark validation, the extreme storm surge event of the 2013 Super Typhoon Haiyan was selected to showcase the storm surge computations of the developed model. Three-layer nested-grid domains were adopted to perform the storm surge simulations from offshore and nearshore to coastal regions. Extreme storm surges of about 8 m, with storm surge-induced inundation, were found at the Tacloban Airport and Cancabato Bay. Additionally, an anti-clockwise eddy in the storm-induced current fields was found inside Cancabato Bay.
The numerical experiments showed that the linear/nonlinear momentum equation models with the fixed shoreline predict higher storm surge amplitudes than the nonlinear momentum equation model with the moving shoreline around Cancabato Bay. However, the linear momentum equation model cannot resolve the eddy generated in Cancabato Bay. Furthermore, the parallel efficiency of the model increased dramatically when using multiple threads compared with a single thread; however, the boost in efficiency followed an exponential curve, implying that overusing threads may not benefit the computations. By presenting a finite-difference storm surge model with the two-way grid-nesting function and a moving boundary scheme solving the nonlinear momentum equations, this study hopes to provide a convenient, useful, and flexible numerical tool for future research in evaluating storm surges for operational forecasting or disaster assessment.
Future Work
The future work is suggested in two parts: (1) open-sourcing the model to the research community and (2) coupling the model with a spectral wave model. First, although finite-difference models have been developed for a few decades, not many have been shared with the research community as open-source code. Thus, an official website or GitHub/GitLab page for COMCOT-SURGE will be developed to release the source code. Second, the wave-enhanced radiation stresses may play an important role in amplifying coastal storm surges [6-8,13]; thus, the coupling between COMCOT-SURGE and a spectral wave model such as SWAN (Simulating WAves Nearshore) shall be explored to improve prediction accuracy. Data availability: the benchmark laboratory data are available at https://nctr.pmel.noaa.gov/benchmark/Laboratory/Laboratory_ConicalIsland/index.html (accessed on 10 January 2022), and the GEBCO 2021 grid can be found at https://www.gebco.net/data_and_products/gridded_bathymetry_data/gebco_2021/ (accessed on 10 January 2022).
Ethnic inequalities in older adults bowel cancer awareness: findings from a community survey conducted in an ethnically diverse region in England
Background: To date, research exploring the public's awareness of bowel cancer has taken place with predominantly white populations. To enhance our understanding of how bowel cancer awareness varies between ethnic groups, and to inform the development of targeted interventions, we conducted a questionnaire study across three ethnically diverse regions in Greater London, England.

Methods: Data were collected using an adapted version of the bowel cancer awareness measure. Eligible adults were individuals, aged 60+ years, who were eligible for screening. Participants were recruited and surveyed, verbally, by staff working at 40 community pharmacies in Northwest London, the Harrow Somali Association, and St. Mark's Bowel Cancer Screening Centre. Associations between risk factor, symptom and screening awareness scores and ethnicity were assessed using multivariate regression.

Results: 1013 adults, aged 60+ years, completed the questionnaire; half were from a Black, Asian or Minority Ethnic group background (n = 507; 50.0%). Participants recognised a mean average of 4.27 of 9 symptoms and 3.99 of 10 risk factors. Symptom awareness was significantly lower among all ethnic minority groups (all p's < 0.05), while risk factor awareness was lower for Afro-Caribbean and Somali adults specifically (both p's < 0.05). One in three adults (n = 722; 29.7%) did not know there is a Bowel Cancer Screening Programme. Bowel screening awareness was particularly low among Afro-Caribbean and Somali adults (both p's < 0.05).

Conclusion: Awareness of bowel cancer symptoms, risk factors and screening varies by ethnicity. Interventions should be targeted towards specific groups for whom awareness of screening and risk factors is low.

Supplementary Information: The online version contains supplementary material available at 10.1186/s12889-021-10536-y.
Background
Colorectal cancer (CRC, also referred to as 'bowel cancer') is the fourth most common cancer and the second leading cause of cancer-related death in the United Kingdom [1]. When diagnosed early, prognosis for survival is improved, with 95% of patients diagnosed at stage I surviving five or more years, compared with 10% of patients diagnosed at stage IV [2]. Unfortunately, due to the asymptomatic nature of the disease in the early stages, the majority of CRCs are diagnosed late [3], when the prognosis for survival is generally poor.
Population screening can help diagnose CRC early, by detecting cases before symptoms develop [4]. On this basis, many countries, including the United Kingdom, offer national screening programmes for CRC [5]. Despite the wide availability of screening, however, the majority of CRCs are still diagnosed in people reporting with symptoms [6]. This is thought to be, at least in part, because of the low, socially graded, awareness and uptake of screening, which is significantly lower among people living within more socioeconomically deprived and ethnically diverse areas, as well as individuals of non-White ethnicity, specifically [7,8].
The reasons for non-uptake of CRC Screening have been investigated in-depth and are reported to relate to an interplay of factors, including: the unpleasantness of obtaining stool samples, the invasive nature of follow up tests, and emotional barriers, such as fearful and fatalistic beliefs about the potential consequences of a CRC diagnosis [9,10]. Many of these barriers have also been found to impede timely presentation for symptoms suggestive of CRC, which is further compounded by a lack of public awareness of these symptoms as warning signs for cancer [11][12][13][14].
Like screening participation, there are social inequalities in CRC risk factor and symptom awareness [15]. A previous study by Power and colleagues (2011) found that 'non-white' and 'lower socioeconomic group' adults, living in England, had lower risk factor, symptom and screening awareness compared to their 'white' and 'higher socioeconomic group' counterparts [15]. Unfortunately, due to small numbers in each ethnic subgroup, the authors were unable to investigate ethnic inequalities in greater detail.
Since the aforementioned study was published, several cancer awareness campaigns have been conducted in England [16], including Cancer Research UK's 'Be Clear on Cancer' campaign, which aimed to raise awareness of the signs and symptoms of bowel cancer, specifically [17]. An evaluation of that campaign found that, while the number of patients referred onto the 2-week-wait pathway (for bowel cancer-related symptoms) was higher three months following the campaign, compared with three months before, the proportion of 'non-White' patients referred onto the pathway was lower following the campaign, suggesting a possible widening of ethnic inequalities [17]. Again, due to small numbers in each ethnic sub-group, and a lack of long-term follow-up data, the authors were unable to investigate ethnic inequalities in greater detail, and the long-term impact of the campaign on awareness and inequalities is not known [16].
Identifying the specific ethnic groups for which risk factor, symptom and screening awareness is particularly low, as well as specific symptoms / risk factors for which these groups lack awareness, has important implications for the development of effective interventions that can be targeted to reduce inequalities in awareness. The aim of this study, therefore, was to extend our understanding of the association between ethnicity and CRC awareness, among screening eligible adults aged 60+ years, by conducting questionnaires with an ethnically diverse sample.
Study design and setting
To assess people's awareness of the risk factors, symptoms and availability of screening for CRC, we conducted a cross-sectional survey with individuals living in the ethnically diverse London Boroughs of Brent, Harrow and Hillingdon.
Participants
Participants were adults, aged 60+ years, who were eligible for bowel cancer screening during the study period (April-June, 2019).
Participant recruitment
Given that language can be a barrier for some Black, Asian and Minority Ethnic (BAME) adults (e.g. first-generation immigrants), and that they are likely to have better access to a community pharmacy (CP) than to a primary care practice (in England, 89% of the population can walk to a CP within 20 min, rising to 98% in urban areas and 99% in areas of high deprivation: the 'positive pharmacy care law') [18], we decided to recruit participants to the study through CPs, which, in addition to being more accessible than primary care practices, tend to employ people from the local community who can speak the multiple languages reflective of the local population [18].
Recruitment and training of CP staff and other community providers
All healthcare staff, working at a CP that was part of the Middlesex Pharmaceutical Committees (MPCs), during the study period, were invited to partake in the research. Eligible staff first received an email from the head of the MPCs, inviting them to participate in the study. Those who responded indicating they were interested in helping with the study were then invited to attend one of three training sessions (one training session per borough). The training was delivered face-to-face, by the St. Mark's Bowel Cancer Screening Centre Health Promotion Team in March 2019. The training sessions provided attendees with information on bowel cancer screening, symptoms and risk factors (providing them with the correct answers to the questions, should the survey generate discussion with the patient), as well as the aim of the study and how to deliver the questionnaire to customers. Each CP was tasked with surveying 30 participants (as a guideline: one per day, throughout the month of April), verbally, in return for £160 (CPs that collected < 30 questionnaires received £5.33 for each questionnaire collected); all of the surveys were printed in English (Appendix 1).
An additional 85 members of the public were later surveyed over the phone by a member of the Harrow Somali Association (May 2019), who received the same training as CP staff. A further 15 questionnaires were administered in person by a member of the St Mark's Bowel Cancer Screening Centre Health Promotion Team, during a Black African and Caribbean community group meeting (June 2019). These additional questionnaires were administered to help collect data from individuals who do not typically visit a CP.
Data collection and data entry
Completed surveys were returned to the Health Promotion Team at St Mark's Hospital (via freepost), where the data were entered into the study database on SPSS by the Health Improvement Principal (AP). A proportion (10%) were checked and validated by another member of the team (SC).
Measures
Awareness of the symptoms, risk factors and available screening for bowel cancer was assessed using a modified version of the Bowel Cancer Awareness Measure (BCAM): a publicly available and scientifically validated questionnaire produced by Cancer Research UK [15]. The adapted measure included a range of questions: 9 on awareness of warning signs, 10 on risk factors and one on awareness of the UK bowel screening programme (not previously included; Appendix 1). Respondents were asked to indicate whether each of the warning signs could be a symptom of CRC, with response options: 'Yes', 'No' and 'Don't know'. A correct response ('Yes') was given a score of '1', while an incorrect response ('No' / 'Don't know') was given a score of '0', generating a scale ranging from 0 (no correct responses) to 9 (all correct responses). For the list of risk factors, respondents were given a 5-point Likert scale, ranging from 'Strongly agree' to 'Strongly disagree'. Responses were dichotomised as 'correct' (1; 'Strongly agree' / 'Agree') or 'incorrect' (0; 'Neither agree nor disagree' / 'Disagree' / 'Strongly disagree'), creating a scale ranging from 0 (no correct responses) to 10 (all correct responses). A single question was used to assess screening awareness: 'Is there a bowel screening programme?', with response options 'Yes' / 'No' / 'Don't know'. Responses were dichotomised as either 'correct' (Yes) or 'incorrect' (No / Don't know). All scoring was performed by the researchers (RK and CvW).
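As a concrete illustration of this scoring scheme, the sketch below dichotomises a handful of responses in Python; the column names and example data are hypothetical, since the study entered and scored its data in SPSS.

```python
import pandas as pd

# Minimal sketch of the BCAM scoring described above (hypothetical columns).
# Symptom items ('Yes'/'No'/"Don't know") score 1 only for 'Yes';
# risk-factor items on the 5-point Likert scale score 1 for
# 'Strongly agree' or 'Agree'.
df = pd.DataFrame({
    "symptom_blood_in_stool": ["Yes", "Don't know", "Yes"],
    "risk_low_fibre_diet": ["Agree", "Disagree", "Strongly agree"],
})

symptom_cols = [c for c in df.columns if c.startswith("symptom_")]
risk_cols = [c for c in df.columns if c.startswith("risk_")]

df["symptom_score"] = df[symptom_cols].eq("Yes").sum(axis=1)                    # 0-9 in the study
df["risk_score"] = df[risk_cols].isin(["Strongly agree", "Agree"]).sum(axis=1)  # 0-10 in the study
print(df[["symptom_score", "risk_score"]])
```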
In addition to CRC awareness, several demographic variables were measured, including: gender (response options: Male, Female, Prefer not to say), age (response options: 60-65, 66-69, 70-75, 76+, prefer not to say [all participants were asked to confirm they were 60 years or older, before commencing the questionnaire]), ethnicity (response options: White British, White Irish, Any other white background, Black Caribbean, Black African, Indian, Pakistani, Bangladeshi, Chinese, Any other Asian background, White and Black Caribbean, White and Black African, White Asian, Any other mixed background, Other, Prefer not to say) and main language (English, Sylheti, Urdu, Cantonese, Punjabi, Other, Gujurati, Somali, Arabic, Prefer not to say). The wording for the demographic questions, and their response options, were extracted verbatim from the original B-CAM [15]; however, individuals called by the Harrow Somali Association, or who attended the Black African and Caribbean community group meeting, were asked to additionally specify their ethnicity, if they stated 'other'.
For the purpose of the analysis (described below), ethnicity was coded as 'White British / Irish' (White British + White Irish), South Asian (Indian + Bangladeshi + Pakistani), Any other Asian ethnicity (Chinese + Any Other Asian Background), Afro-Caribbean (Black African + Black Caribbean), Somali (Somali), Arab (Arab) and Mixed / Other (Any Other Background + Any Other Mixed Background + Any Other White Background + White Asian + White and Black African + White and Black Caribbean), while main language was coded as English ('English') and 'Any other language' (Sylheti + Urdu + Punjabi + Gujurati + Cantonese + Somali + Arabic + Other).
Pilot testing
The development of the B-CAM is well documented and has previously been reported, in detail, by Power and colleagues [15]. Items were originally reviewed by CRC experts (N = 16), who considered the interpretability, clarity and accuracy of questions. Draft versions of the measure were then assessed in cognitive interviews (n = 17) with members of the public using the 'think-aloud' method [19] to assess respondents' comprehension of the questions (i.e. whether specific words and phrases used in the question are understood as intended by researchers). Several adjustments were then made to the questions, based on the interviews (e.g. 'straining feeling' was changed to 'a feeling that your bowel does not completely empty after using the lavatory').
Analysis
Descriptive statistics were used to report the frequency and proportion of participants who correctly identified each risk factor and symptom, both overall and by demographic subgroup (e.g. men, women, etc.), as well as the demographic characteristics of the sample. The mean number of risk factors and symptoms identified by participants was also reported using descriptive statistics; again, both overall and by demographic subgroup. Associations between mean risk factor and symptom awareness scores and demographic variables were assessed using linear regression (separate models were produced for risk factor awareness and symptom awareness). Similarly, associations between screening awareness and demographic variables were assessed using logistic regression (the corresponding method for binary outcomes). Logistic regression was also used to assess associations between awareness of individual risk factors and warning signs and demographic variables. Associations were considered 'statistically significant' if the p value was < 0.05. All analyses were performed using SPSS statistics (Ver 25.0).
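The study ran these models in SPSS; purely as an illustration of the modelling approach, the sketch below fits analogous linear and logistic regressions with statsmodels on a small synthetic dataset (all variable names and data here are assumptions, not the survey data).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the survey data (hypothetical values).
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "symptom_score": rng.integers(0, 10, n),    # 0-9 symptom awareness score
    "screening_aware": rng.integers(0, 2, n),   # 1 = knows a programme exists
    "ethnicity": rng.choice(["White British/Irish", "South Asian", "Somali"], n),
    "gender": rng.choice(["Male", "Female"], n),
    "age_group": rng.choice(["60-65", "66-69", "70-75", "76+"], n),
    "language": rng.choice(["English", "Other"], n),
})

# Linear regression for the symptom (and, analogously, risk factor) scores.
ols_fit = smf.ols(
    "symptom_score ~ C(ethnicity) + C(gender) + C(age_group) + C(language)",
    data=df,
).fit()
print(ols_fit.params)

# Logistic regression for the binary screening-awareness outcome.
logit_fit = smf.logit(
    "screening_aware ~ C(ethnicity) + C(gender) + C(age_group) + C(language)",
    data=df,
).fit(disp=0)
print(np.exp(logit_fit.params))  # adjusted odds ratios
```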
Missing data
The number of cases with missing data (including those who responded 'prefer not to say') for each variable was reported using descriptive statistics. Cases with missing data for the outcome variable, or one or more of the covariates (i.e. gender, age, ethnicity, language), were excluded from the analyses. The total number of cases included in each analysis is reported in the tables.
Collinearity
Collinearity between the predictor variables (i.e. gender, age, ethnicity and main language) was assessed using Pearson's correlation. As all correlations between predictor variables were < 0.7 (Appendix 2), there was no evidence for collinearity between predictor variables. To be sure, we additionally calculated the variance inflation factors for the predictor variables, all of which were < 10 (Appendix 2).
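To illustrate the variance-inflation-factor check (continuing the synthetic dataset `df` from the previous sketch), one possible computation with statsmodels is:

```python
import pandas as pd
from patsy import dmatrix
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Dummy-code the categorical predictors, then compute a VIF per column
# (skipping the intercept). Values < 10 suggest no problematic collinearity.
X = dmatrix(
    "C(ethnicity) + C(gender) + C(age_group) + C(language)",
    data=df, return_type="dataframe",
)
vifs = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])],
    index=X.columns[1:],
)
print(vifs.round(2))
```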
Ethical approval
The study was performed as part of the routine service improvement strategy employed by St Mark's Bowel Cancer Screening Centre. Completion of the Health Research Authority Decision Tool indicated that NHS Research Ethics Committee Review was not required. All data were anonymous and participants provided informed consent through completion and return of the questionnaire (the purpose of the research was explained prior to data collection). The study was carried out in accordance with Good Clinical Practice guidelines and the principles set forth in the Declaration of Helsinki.
Recruitment of CP staff
Staff from 206 CPs in Northwest London were invited to participate in the study. In total, staff from 40 CPs (19.4%) agreed to take part. Each CP collected an average of 23 questionnaires.
Overall symptom awareness
Participants were able to correctly recognise a mean of 4.27 (out of 9) warning signs. After adjusting for co-variates (i.e. gender, age, language), awareness was statistically significantly lower for every ethnic minority group (except 'mixed / other': p = 0.627) compared with White British / Irish (all p's < 0.05; Table 2). People whose main language was not English correctly identified fewer symptoms on average than people whose first language was English (3.34 and 4.89 symptoms, respectively; p < 0.0001), independent of co-variates (i.e. gender, age, ethnicity). There were no statistically significant differences based on age or gender (both p's > 0.05).
Awareness of individual symptoms
The individual symptoms for which there was the greatest awareness in the sample were: 'Blood in stool' (n = 734; 72.5%), 'Bleeding from back passage' (n = 666; 65.7%) and 'Weight loss' (n = 539; 53.2%; Table 3); those with the lowest awareness were 'bowel not empty', 'tiredness / anaemia' and 'pain in back passage'. There was little evidence of associations for main language: people whose main language was 'Any other language' were less likely, compared with people whose main language was 'English', to recognise 'blood in stool' (60.4% vs. 79.7%; aOR: 0.57, 95% CI: 0.34, 0.95; p = 0.031), only.
With regards to gender, only 'change in bowel habits' (46.6% vs. 54.6%; aOR: 1.35, 95% CI: 1.02, 1.78; p = 0.036) was associated, with women being statistically significantly more likely than men to know it was a symptom of bowel cancer (Table 3). There was no evidence of an association between any of the warning signs and age.
Overall risk factor awareness
On average, participants were able to correctly identify 3.99 out of 10 risk factors. After adjusting for co-variates (i.e. gender, age, language), risk factor awareness was statistically significantly lower among Afro-Caribbean (p = 0.043) and Somali (p < 0.001) participants, compared with White British participants (Table 2). Risk factor awareness did not vary by any other characteristic (Table 2).
Awareness of individual risk factors
Overall, awareness of individual risk factors was low, with less than 50% of the population being able to correctly recognise any one risk factor. The individual risk factors for which there was the greatest awareness were: 'low fibre diet' (n = 502; 49.6%), 'existing bowel condition' (n = 498; 49.2%) and 'red or processed meat daily' (n = 481; 47.5%; Table 4). The individual risk factors for which there was the lowest awareness were: 'physical activity weekly' (n = 326; 32.2%), 'alcohol daily' (n = 285; 28.1%) and 'having diabetes' (n = 193; 19.1%).
Screening awareness
Despite nearly all participants being registered with a GP (n = 988; 97.5%) and within the eligible age range for bowel cancer screening, only 71.3% (n = 722) of participants were aware that a national screening programme for bowel cancer exists. After adjusting for co-variates (i.e. gender, age, language), Afro-Caribbean and Somali adults were statistically significantly less likely to know there is a bowel cancer screening programme compared with White British / Irish participants (awareness was 63%, 14% and 81%, respectively; aOR: 0.38, 95% CI: 0.21, 0.70; p = 0.002 and aOR: 0.05, 95% CI: 0.02, 0.12; p < 0.001, respectively). There were no statistically significant differences based on gender or main language (all p's > 0.05; Table 2). Participants over the age of 76 years, however, were statistically significantly less likely to know there is a bowel cancer screening programme than adults aged 60-65 years (awareness was 61% and 69%, respectively; aOR: 0.54, 95% CI: 0.34, 0.85; p = 0.009). Conversely, participants aged 70-75 were statistically significantly more likely to know there is a programme than adults aged 60-65 (awareness was 81% and 69%, respectively; aOR: 1.70, 95% CI: 1.08, 2.70; p = 0.023).
Summary of results
This study examined bowel cancer awareness among individuals living within the London Boroughs of Brent, Harrow and Hillingdon. It demonstrates that awareness of CRC symptoms and risk factors is generally low, with individuals correctly identifying (on average) less than 5 out of 9 symptoms and less than 4 out of 10 risk factors. It also demonstrates that awareness of CRC screening is low (especially considering all of the participants were eligible for screening and should have been invited at least once), with only 71.3% of adults correctly identifying that there is a screening programme.
This study also highlights that there are strong associations between ethnicity and CRC screening, risk factor and symptom awareness. It demonstrates that symptom awareness is lower among almost all ethnic minority groups (i.e. those that do not identify as White British or White Irish), and that risk factor and screening awareness are lower for Afro-Caribbean and Somali adults, specifically.
Comparison with previous literature
Our findings are consistent with those previously described by Power and colleagues (2011), who found that respondents from a non-White ethnic background recognise fewer symptoms and risk factors compared with respondents from a White ethnic background [15]. Our results add to the previous literature, however, by providing more granular information regarding the specific ethnic groups for which risk factor and symptom awareness are independently lower. Our results also add to the literature by investigating associations between ethnicity and bowel cancer screening awareness among groups not included in previous research (e.g. Somali) [8,15]. The results of our study are consistent with Hirst and colleagues (2018), who previously reported that participation is lower among more ethnically diverse areas, compared with less ethnically diverse areas [8]. Importantly, our study finds that language is not an independent predictor of screening awareness, suggesting language barriers are not responsible for ethnic differences, as previously thought [8]. One possible explanation would be that individuals from specific ethnic groups are less likely to be registered with a general practitioner, which is a prerequisite to receiving screening invitations.
Implications for future research and practice
This study demonstrates that symptom awareness is universally lower among ethnic minority groups, while risk factor awareness is specifically low for Somali and Afro-Caribbean groups. As such, the results of this study suggest that a broad approach to raising awareness of symptoms among ethnic minority groups is required, while a more specific approach, targeted towards Somali and Afro-Caribbean adults, is required for raising awareness of risk factors. This study also demonstrates that there are specific risk factors and symptoms for which awareness is lower, generally, including 'Having diabetes', 'Alcohol daily' and 'Physical activity weekly', and 'Bowel not empty', 'Tiredness / anaemia' and 'Pain in back passage'. As such, the results of this study suggest that health promotion activities seeking to raise awareness in the general population should focus on raising awareness of these specific risk factors and symptoms.
This study also has several important implications for future research. First, further research is needed to understand why awareness of bowel cancer screening is low after adjusting for language, particularly when nearly all of the participants should have received at least one invitation for screening. Second, further research is needed to establish the effectiveness of novel health promotion interventions to increase awareness and reduce inequalities in awareness, and the effects of such interventions on behaviour (screening behaviours, help-seeking behaviours, etc.). Finally, further research is needed to assess awareness in other parts of the country, especially where the ethnic composition of the local population is different to that of Brent, Harrow and Hillingdon (for example, areas with large numbers of Chinese adults, who may have lower awareness of specific symptoms and risk factors, which could not be determined from the present analysis due to small numbers of these individuals).
Strengths and limitations
This study has several strengths. First, it used a large, ethnically diverse sample, enabling a more nuanced understanding of the relationship between ethnicity and CRC awareness to be developed. Second, it used validated measures for bowel cancer awareness, improving the validity of the study findings. Finally, this study was conducted by community pharmacy healthcare staff, many of whom could speak languages other than English, which meant that it was possible to include non-English speaking participants in the research.
This study also has several limitations. First, there was relatively low participation among CPs, so we cannot dismiss the possibility of selection bias. Second, CPs were only required to recruit one participant a day during the study (as a guideline); here, too, we cannot dismiss the possibility of selection bias. Third, we did not have an objective measure of screening participation, which would have been more reliable. Fourth, all questionnaires were printed in English, so CP staff were required to translate the questionnaire verbally for any non-English-language participants they surveyed; as it was not possible to validate these translations, the data for those interviews may be less reliable. Fifth, not all participants were recruited through CPs, so it was not possible to add the pharmacy code as a variable in the analyses (because doing so would have excluded participants not recruited through CPs); as a result, the possibility of clustering effects and further recruitment bias cannot be discounted. Sixth, the sample size for some ethnic groups was very small (e.g. Arabs), so the results for these populations should be treated with caution. Finally, this study did not test for interactions between variables (e.g. between ethnicity and gender) and did not include several potentially important confounding variables, including health literacy and years of education. Health literacy, specifically, has been shown to be lower among ethnic minority groups and may account for the lower awareness scores among some of these groups [21]. Years of education, meanwhile, has been shown to be lower among older adults and is associated with health literacy [22]; as such, the results may not be generalisable to future generations with more years of education.
Conclusions
This study is the first to demonstrate that there are strong associations between ethnicity and CRC screening, risk factor and symptom awareness. It indicates that there is a special need for effective strategies to raise awareness of specific risk factors and symptoms, particularly among ethnic minority groups, including low physical activity, increased alcohol consumption, diabetes, pain in back passage, anaemia and bowel not feeling empty.
|
v3-fos-license
|
2022-12-15T14:21:47.163Z
|
2019-07-05T00:00:00.000
|
254650030
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10846-019-01050-w.pdf",
"pdf_hash": "5b1838d733109a192f7fbc0b92567d2b5079cbb2",
"pdf_src": "SpringerNature",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41916",
"s2fieldsofstudy": [
"Computer Science"
],
"sha1": "5b1838d733109a192f7fbc0b92567d2b5079cbb2",
"year": 2019
}
|
pes2o/s2orc
|
User-Oriented Design of Active Monitoring Bedside Agent for Older Adults to Prevent Falls
A small bedside agent for preventing falls has been developed. It talks to a person on a bed to prevent them from getting out of bed abruptly, until a care worker arrives. This paper describes the user-oriented design process of the agent system. The development process involving users, such as nurses and caregivers, as well as older adults is described. First, hardware design, such as the outer shape, size, and function of the agent was reviewed by nurses and caregivers, mainly from a safety viewpoint. The prototype agent incorporating improvements based on their opinions was used experimentally by older adults after several review processes. Second, the software design of the agent, such as the content of voice call, was studied through multiple experiments to improve its acceptability. Lastly, the integrated model was introduced into care facilities and hospitals to investigate the practical serviceability of the system.
Introduction
The number of older adults is increasing as societies age rapidly in many countries, including Japan, which is already called a "super-aged society" [1]. Consequently, the number of older adults with physical or mental disabilities who need care or assistance is increasing, because of declining physical ability and increasing frailty with age. Apparatuses to assist the daily living activities of older adults are strongly needed [2] to reduce the number of people who require manual nursing care. Replacing human support with apparatuses such as robotic devices to assist disabled older adults with daily living activities is expected to contribute substantially to the smooth functioning of super-aged societies.
As the word "bedridden" implies, getting out of bed is one of the most important daily living activities to maintain a person's physical functions. Commonly, however, accidents such as "falls" which can most likely cause physical handicap and even worsen a person's symptoms, occur near the bed where an older adult spends considerable amount of time during the day and night [3]. Numerous research and development efforts related to assistive devices have been undertaken to solve such problems [4].
Falls among older adults occur mostly because of their poor ability to maintain dynamic balance, not because of reduced muscle strength. Consequently, "fall" accidents are listed as the most frequently occurring incidents in hospitals, at least in Japan, and the number of such incidents has been increasing because of an increase in the average age of hospital inpatients [3,5,6].
Building a safe bed environment, one of the major components of the ambient assisted living concept [2], is imperative in hospitals and care facilities. One of the directions of such efforts is the research and development of a monitoring system.
Pressure-mat-type sensor devices [7] are widely used for monitoring the posture of a person, although they have a few drawbacks, including poor reliability. The use of TV cameras or microphones for monitoring a person's behavior is not recommended because of privacy issues, although cameras can precisely detect the movement of a person on or near a bed. Studies on the use of infrared sensors, ultrasonic sensors, and force sensors to detect the movement or posture of a person on a bed have been carried out as well [8][9][10]. However, these systems assist caregivers, that is, nurses and care workers, only by providing information, relieving their burden even when they are not near the person being monitored.
Systems that respond to the action of older adults exist, and such systems reach out directly to the person. They are intended mainly for communication or therapeutic purposes, not for preventing falls [11,12]. In addition, systems called social robots are used in environments with older adults. The major objective of these systems is to increase the opportunity for communication by initiating conversation [13,14].
The authors' ultimate motivation in this study is to develop a monitoring system that is both friendly to a person on or near a bed and beneficial to caregivers. From this viewpoint, the concept of the system is set as an active system that reaches out to a person; therefore, a robotic agent is selected as one of the best-suited candidates for this purpose because of affinity with the person being cared for. Such a human-friendly agent that behaves softly with the person receiving care is presumed to provide a feeling of security.
This study presents a specific vision of the aforementioned concept, especially the desirable and appropriate form and aspect of a system based on the concept, and clarifies its acceptability through the development and testing of such an agent system.
In this paper, an outline of the development method is given and the system development process is explained. In addition, a brief outline of the system is presented and the feasibility of the system is evaluated through verification experiments.
Research Objectives
There are various approaches to preventing patient falls. These can basically be divided into two categories, namely physical methods and mental methods. One typical physical method is to put a fence or side rail(s) around the bed so that the patient physically cannot leave, and therefore cannot fall from, the bed. However, this type of method is considered "medical restraint", since it deprives patients of their freedom, even patients with mental illness. Therefore, such methods are discouraged for use in hospitals and care facilities at the government level in many countries, including the US and Japan [15][16][17].
Thus, physical methods, which secure safety at the expense of patients' freedom, are considered undesirable and are not suited to fall prevention. For example, even a device such as a fence that is actuated and set in place only when needed is considered restraint, because it physically blocks the patient's intention of getting out of bed, although engineers tend to prefer this type of seeds-oriented approach. Moreover, moving parts that operate automatically, independent of the patient's intention, would be potentially hazardous in the vicinity of a patient in a bed. As a result of the above discussion, the authors adopted the policy of solving the problem without physical means, using only mental, or non-physical, assistive means when designing the system. The bedside agent system developed here, which is intended for use in hospitals and care facilities, monitors a person's behavior to provide caregivers with information, in addition to providing a voice-call-based function for preventing or postponing behaviors that lead to falls and similar accidents. That is, the main feature of the system is that physical means are not used to prevent falls; instead, falls are prevented through human caregivers' physical assistance. The agent system provides "talks", or voice calls, which remind the person to rethink and suppress behaviors leading to falls, just as human caregivers do conventionally. The burden on caregivers would be minimized if the bedside robotic agent system could perform the caregiver's function of keeping the person from acting dangerously, even though the agent is not equipped with physical means to prevent falls. The quality of nursing care could be greatly improved with the help of this type of system.
Basic Concept
The main objective of the agent system is to provide assistance that contributes to reducing fall accidents without using physical means. Currently, human caregivers or nurses watch a patient who is at risk of falling and offer assistance when necessary. Usually, a "nurse call system" is used when a patient needs help. It is the top-priority task for nurses to respond to nurse calls and to go to the patient's ward as soon as possible. Thus, fall accidents can be prevented if the nurses can visit the patient's side before the patient leaves the bed. Therefore, the ability to postpone the patient's leaving the bed, at least until the nurses reach the ward, is quite beneficial [18].
From this point of view, a system that detects the patient's activity leading up to leaving the bed, launches the nurse call, and uses voice calls to postpone the patient's leaving the bed for as long as possible would be very helpful. Such a system would be beneficial for nurses as well as for patients, even though the postponed time span is limited, because the chance of a patient falling as a result of unattended movement can be reduced if the nurses can reach the ward before the patient leaves the bed.
In summary, i) if a patient's action to leave bed can be postponed, then ii) the chance for the nurses to visit the patient before the patient leaves bed can be increased, thereby iii) reducing the number of fall accidents. The basic concept of the bedside agent system can thus be expressed as "buying time for nurses to come to the patient's aid".
Design Methodology
Various types of assistive devices for older adults are available, although not all of them are truly applicable in real scenarios. The major reason is likely that a large number of such devices were developed following the seeds-oriented approach. The needs-oriented approach, by contrast, has been used rarely because it is time consuming and involves many difficulties, such as extracting real needs and matching them with appropriate technology.
Notably, when carrying out this type of research, following the needs-oriented approach or the user-oriented design approach is essential to realize user-friendly systems for older adults to the extent possible. The flowchart in Fig. 1 shows the user-oriented procedure adopted in the present work for system development and implementation, although the processes indicated by dotted lines are incomplete at present.
Lessons learned from the development are summarized as follows. First, interviews with nursing staff at hospitals and staff at care facilities were conducted. This is seemingly an obvious first step in the user-oriented approach to extract system prerequisites. However, the information extracted from nurses and care staff is not "what IS needed," but "what is NOT good." Such denial and repudiation are most likely the common responses from the staff, and extracting the real needs from the older adults themselves is even more difficult.
The important activities in the user-oriented design approach are i) to carefully digest and analyze the interview results, ii) to form hypotheses and make preliminary prototypes, iii) to re-interview the users or persons concerned, iv) to feed the interview results back to refine the hypotheses, and v) to repeat this series of plan-do-check-action (PDCA) processes several times to identify what users truly need and to meet the real requirements of the field of nursing and care.
Because the initial requirements described by the intended users were implicit, explicit requirements were revealed incrementally through the recursive process of prototyping and reviews. In addition, approval from the ethical review boards is imperative at each step of the field interviews and test phases.
In the following sections, the steps of the development process are briefly described, along with an overview of the agent system.
Interviews with Nurses/Caregivers
First, interviews were conducted to clarify the needs of nurses and caregivers. The most prevalent problem was assumed to be "fall" incidents near patient beds, as explained previously, and the interviews focused on the means to prevent such incidents from occurring. The responses gleaned from the interviews show that the means for preventing falls are i) constant periodical patrol rounds to assess the patients' condition and state, ii) talking to the patients during the patrol rounds to make them feel secure, iii) letting the patients be active in the daytime and sleep well during nighttime, iv) encouraging patients to go to the toilet before sleep to reduce the need to leave the bed for visiting the toilet during the night, and v) encouraging, through the nurse call system, the patients to stay in bed so that the nurses can arrive at the patients' ward before the patients leave bed. Notably, all of these measures are indirect, nonphysical-contact methods. The aforementioned needs lead to the idea of a noncontact "voice call" type active monitoring agent system, as shown in Fig. 2. The agent system is expected to substitute the staff by voice calling the patient in the bed to ensure patient safety and reduce the burden on the care staff; the realization of this expectation is contingent on whether the system can talk the patient out of getting out of bed until the care staff arrive at the scene. The voice call is also considered very important in nursing/caring environments to provide a trigger for the patients to communicate and make their intention known to the staff [19]. The expected users of the system are patients who can notice voice calls and are at a high risk of falling if they attempt to get out of bed unassisted.
System Requirements
Nurses have various missions to perform and cannot remain at the bedside of patients at high risk of falling at all times. Although nurses are expected to prevent fall accidents, this task cannot be done unless they are near the patients. Therefore, as described in the previous section, a bedside agent system that can postpone the patient's leaving bed until the nurses arrive at the patient's side would be useful [18]. According to Japanese standards for hospital care, a nursing unit is generally set to monitor 40 patients, with 4 to 6 nurses assigned during the daytime and at least 3 nurses during the nighttime, depending on the severity of the conditions of the patients in the ward. For the system designed here, the longest distance a nurse must cover from the nurse station to a patient's room is set to 25 meters, considering the standard layout of a hospital general ward, where the nurses' station is located near the center of the ward [20]. Assuming the nurses' quick walking speed is 6 km per hour (about 1.67 m/s), the duration for which the agent must postpone the patient's leaving bed is 25 m / 1.67 m/s = 15 s, the time needed for nurses to reach the farthest ward. Note that the time required for a patient to move from the supine position to the sitting square position, which is additionally needed before leaving the bed, is not included in this figure. Therefore, the agent system is beneficial if it can continue to attract a patient's attention and keep the patient on the bed for 15 s, theoretically minimizing unattended fall accidents.
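As a trivial check of this figure, using only the numbers given above:

```python
# Required postponement time, from the stated geometry and walking speed:
# 25 m farthest nurse-station-to-room distance, 6 km/h quick walk.
FARTHEST_ROOM_M = 25.0
QUICK_WALK_KMH = 6.0

walk_speed_ms = QUICK_WALK_KMH * 1000 / 3600   # about 1.67 m/s
postpone_s = FARTHEST_ROOM_M / walk_speed_ms   # = 15.0 s
print(f"Agent must hold the patient's attention for ~{postpone_s:.0f} s")
```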
Outline of System Configuration
The active monitoring agent system with a voice call, shown in Fig. 3, consists of i) a sensor system to detect the motion of a person on a bed, and ii) an agent robot for voice calling. The sensor system is connected to the nurse call system to let nurses/caregivers know about the person's behavior when assistance is needed. The sensor system's output is also monitored and the voice call of the agent is activated when such help is needed.
System Implementation Procedure
Evaluation of the developed system at hospitals and care facilities is the most important process in user-oriented design. The following steps are related to system implementation and verification of system serviceability.
Step 1: Functionality and safety evaluation of the system (adequacy for medical/care wards)
Step 2: Evaluation of human-agent communication (acceptability to older adults)
Step 3: System verification in a real environment (applicability for practical service)
Bed Sensor Systems
According to statistical data, "falls" are most likely to occur when a person gets out of bed [3,8]. The states of a person on a bed are categorized with respect to the motion sequence of leaving the bed as i) supine (dorsal) position, ii) long sitting position, and iii) sitting square (at the bed edge), as shown in Fig. 4 (detection of a person's state). These categorized states represent the typical phases that lead to standing up from a bed and can thus be regarded as predictors of whether a person intends to get out of bed.
A system called "Getting-out-of-bed CATCH" is installed in hospital beds and in care beds; in this system, load sensors are built into beds equipped with actuators for back lifting and frame lifting [9,10]. The system can detect the aforementioned three phases of the process of getting out of bed from the output pattern and time transition of the sensors, and it is widely used in hospitals and care facilities.
Although the last state before getting out of bed is "sitting square" at the edge of the bed, "long sitting" is the first state in the sequence and can thus be treated as advance ("forestate") information. Providing such information to caregivers in advance is important for preventing certain types of patients, such as those who are agile in motion, from falling, because the nurse or caregiver can be alerted to visit the patient before he/she gets out of bed.
Sensor System Design
The aforementioned system is useful and suitable for installation in hospitals and care facilities as long as the bed is set at a fixed location. However, transporting the bed every time to the facility where the experiment is conducted is not easy. In such a scenario, a system comprising multiple mat-type sensors is used to ensure portability, as shown in Fig. 4. The sensor mats are aligned at the chest/back part, waist/hip part of a person on the bed, and the edge part of the mattress on the bed. Thus, the three phases of the process of getting out of bed can be detected, as shown in the figure, to activate the nurse call as well as the agent's voice call as explained in the following sections.
The state, or posture, of a person on the bed was determined from a series of mat-sensor readings measured every 0.5 s; the accuracy of the detected state when 6 data points (3 s) were used for recognition was 81 to 93%, varying from person to person. The false recognition errors that occurred depended mostly on the person's agility. The applicability of the sensor system for field tests was thus demonstrated through preliminary experiments.
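The paper does not give the recognition rule itself; the sketch below is a minimal, hypothetical rule-based version of the idea: map each (chest, hip, edge) mat reading to a candidate state and take a majority vote over the 6-sample (3 s) window.

```python
from collections import Counter

def classify_sample(chest, hip, edge):
    """Map one (chest, hip, edge) pressure triple to a candidate state.
    Thresholding raw loads to 0/1 is assumed to happen upstream."""
    if chest and hip:
        return "supine"          # upper body and hips both loaded
    if hip:
        return "long_sitting"    # torso raised, hips still on the mattress
    if edge:
        return "sitting_square"  # weight shifted to the bed edge
    return "unknown"

def detect_state(window):
    """Majority vote over a 6-sample (3 s) window of mat readings."""
    votes = Counter(classify_sample(*s) for s in window)
    state, _ = votes.most_common(1)[0]
    return state

# Example: a person rising from lying to sitting at the bed edge.
window = [(1, 1, 0), (0, 1, 0), (0, 0, 1), (0, 0, 1), (0, 0, 1), (0, 0, 1)]
print(detect_state(window))  # -> sitting_square
```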
Requirements for Bedside Agent: First Design
First, the design policy of the robot agent from the viewpoint of safety requirements was examined by following the general risk assessment method [21]. The bed can be moved to other rooms in the event of an emergency, and the use of wireless communication systems is restricted, especially in hospitals, where the electromagnetic compatibility characteristics of the system are important. On the basis of these requirements, the agent system was designed as follows: i) the controller was placed beneath the bed, ii) the agent was connected to the controller by wire, and iii) the agent was designed to be placed at the bedside table.
The agent robot should preferably give off a human-friendly feeling or exhibit a high degree of affinity for a person on a bed. The size of the agent was initially set to 100-200 mm on one side; it was designed with a round shape without protrusions or sharp edges and was lightweight. A robot is known to attract a person's attention when it moves [22,23]. Therefore, the agent robot was designed to have moving parts to attract the patient's attention, although the degree of freedom of movement should be as low as possible to reduce any potential for hazard as well as to increase system reliability. In addition, the mechanical structures of the movable parts, such as joints, were designed to prevent any body parts from getting caught in them. Prevention of electrical shock and waterproofing of the mechanism were considered in the design stage.
The first prototype of the bedside robot agent based on the aforementioned considerations, which the authors named "Side-Bot", is shown in Fig. 5. This robot has two degrees of freedom: vertical and horizontal head shake (expressing yes and no) capability.
First Evaluation by Nurses and Care Workers
The first prototype of the robot agent was brought into the medical wards of the rehabilitation and orthopedic surgery department of a hospital to be evaluated by nurses and certified care workers. The evaluations were made mostly from the viewpoint of patient safety, and refinements to the design were made until evaluations indicated that no possible hazards remained.
"Side-Bot" was introduced in a simulated room of the medical ward and demonstrated to teams of medical staffs, each consisting of three to four persons. The robot's voice calls and motion were demonstrated in a scenario in which a typical ward patrol round was simulated. The performance of the robot was evaluated by three different teams after each demonstration.
The points of evaluation emphasized by the authors were i) safety of robot movement, ii) the possibility of the robot falling when placed on or near the bed, iii) the safety of the robot when it is dropped, iv) feeling of security (shape, face design), v) robot size (for placement near or on the bed such that it is noticeable to the person), vi) placement location (negative effect on nursing and caring task), vii) voice call (method and contents), viii) sound volume and directional characteristics, ix) electrical shock and waterproofing quality, and x) overall safety.
As a result, the major problems to be solved for actual installation of the robot in the ward were identified as follows: i) movable parts are not desirable from safety viewpoint, even though they are designed to prevent body parts from being stuck in them; ii) possibility of dropping remained; iii) possibility of chippage and breakage when dropped remained; iv) face design is too realistic; v) size should be smaller; vi) no space on the bedside table because of personal belongings, nor by the person on the bed because of the back-lifting function of the bed; and vii) motor sound should be quieter to avoid distracting other patients in the same ward.
These results show that the agent need not be a robot that moves. Moreover, the situations in which the agent is most needed are mornings and nights, when the staff are busiest; in the dark, especially during nighttime, the agent is difficult to see, so its motion cannot attract attention. On this basis, luminescence, instead of motion, along with voice and sound, was considered a more appropriate cue from the agent.
The results of evaluations by the staff revealed additionally that the human-like face design does not always give a human-friendly impression, especially when the robot is placed near the person all the time. A simple doll-like design is more suitable in the target environment, especially when the agent is not in use.
In addition, it was revealed that the agent should not be placed on the bedside table because the table holds various personal belongings and medicines and because there is no space for the agent in most cases.
Second Design and Evaluation
"Side-Bot" was then improved based on these evaluation results, and the second design of the agent named Prototype II (Fig. 6), was fabricated. The size of the agent was 80 mm on one side. This version i) had no motorized moving parts, ii) was equipped with an arm/hanger wire so that it can be suspended from the side rail of the bed, and iii) was lightweight because of the absence of motors. The design was inspired by a Japanese Kokeshi doll, and it was equipped with LEDs on the cheeks of the face.
The second prototype was evaluated by the medical staff. The results of the evaluation were mostly good and acceptable, except for the fact that the agent was too small and some patients could accidentally put it into their mouth.
Therefore, in a further refinement, the size of the agent was increased to 120 mm on one side to ensure that it could not be put in the mouth. Because the agent cannot express emotions through bodily motion or attract attention by moving, it was designed to i) address the person being cared for by name, and ii) emit light from its LEDs, both before talking, to draw the person's attention.
Evaluation with Community-Dwelling Older Adults
A preliminary experiment with 16 university students and 4 community-dwelling healthy older adults (age 67 to 77) was conducted before the system was introduced into an actual medical ward. In this experiment, the mechanism and function of the second prototype, along with the contents of the agent's speech, were evaluated.
The major points investigated included whether the participants noticed or understood the agent's speech, any preference between a male voice and a female voice, and any feeling of stress. Transcripts of a few of the agent's utterances, which were produced using synthetic voices, are listed in Table 1. The experiment was conducted in a facility's meeting-room environment, and each participant, one at a time, was introduced into the room to perform the experiment. Two agent robots were set on a table, and the contents and timing of the agents' speech were controlled manually by an experimenter not visible to the participant. The experimenter started the agents' speech once the participant had settled and gotten used to the room environment. An observer, also not visible to the participant, recorded the participant's responses.
The experimental results were evaluated using objective measures, observed from other peoples' perspective, and subjective measures, obtained as the feedback from the participants. The objective measures were an attention score (estimated by the participant's line of sight: gazing at the robot when the robot is talking) and a recognition score (measured by the adequacy of the participant's answer to a question posed by the agent). The subjective measures were the participants' visual analogue scale (VAS) answers to the questionnaires regarding audibility (self-evaluation about the understanding of the agent's speech), unnaturalness of the conversation, and stressfulness. The objective scores were measured by an observer who monitored the response of the participants. This human-based method, rather than recording the scene using a video camera, was used for consideration of the privacy of the participants.
The results shown in Figs. 7 and 8, as examples of the experimental results, indicate that the recognition score and audibility were better when the agent first addressed the participant by name. However, a significant difference in this regard was observed only with students because the number of older adult participants was limited. The difference in recognition scores with or without LEDs was found to be not significant, although slight gaps between the average data are evident in the figure in all cases.
The results also showed that the synthetic voices were not always easy to hear. Nevertheless, female voices were preferred according to the VAS evaluation, although there was no significant difference in terms of audibility. One case was reported as stressful because the participant found the voice difficult to hear.
Additionally, some of the test subjects commented that visibility of the light emission was insufficient and that the stability of the agent's body was unsatisfactory. The third prototype was then designed on the basis of these findings.
Third Design and Evaluation
The third prototype model, shown in Fig. 9, was designed to solve the aforementioned problems. That is, i) the external shape was squeezed slightly to provide a feeling of comfort, ii) the center of gravity of the agent was lowered by changing the position of the speaker to increase stability, and iii) the LEDs were relocated to illuminate the head portion from the cheek parts.
Content of Voice Call
As described in the preceding section, the authors optimized the hardware design of the agent. However, the content of the voice call, or what the robot must utter, must be examined to achieve the objective of the agent, which is to attract the attention of the person on the bed and to ensure that the person remains on the bed longer. An experimental study of the content of the voice call or speech is described in this section.
Scenario-Based Talk
Achieving a dialog with an agent requires voice-recognition, discourse-comprehension, and conversation-generation technologies. These technologies are not easy to implement when the dialog between the agent and a person must be realized in a "natural" manner. One of the reasons for this difficulty is that recognizing the speech and understanding the intentions of older adults is not easy.
The authors developed a technique based on "scenario prediction-based dialog", which does not depend on voice-recognition technology. It was found from the interviews that, to the extent possible, nurses try to voice-call patients using inquiries that can be answered with a simple "yes" or "no". This technique avoids causing panic and allows the nurse to understand the patient's request correctly. This setting resembles dialog systems in which questions have a limited set of expected answers [24,25], with the difference being that here the type of expected answer is unique.
The robot agent developed herein was programmed to speak in phrases whose responses are basically predictable. Thus, the replies to these responses could be preprogrammed for most scenarios, without the need for recognition. A person paying attention to the agent can feel as though he/she is having a conversation, even though the agent is speaking according to the scenario, simply because the timing of its utterances is controlled. That is, the agent's utterances after its questions were chosen to be "back-channel feedback" type utterances, as shown in Tables 2 and 3, which are not affected by the person's answers.
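A minimal sketch of this idea, with hypothetical phrases and pauses, is shown below: every utterance is either a question expecting a yes/no answer or a back-channel reply that reads naturally whatever the answer was, so no speech recognition is needed.

```python
import time

# Hypothetical scenario: (utterance, pause in seconds to leave room for
# the person's yes/no answer). The scripted follow-ups are back-channel
# phrases that fit either answer.
SCENARIO = [
    ("Good evening. Are you feeling all right?", 3.0),
    ("I see. The nurse is already on her way, so please stay in bed.", 3.0),
    ("It is a little cold tonight, isn't it?", 3.0),
    ("Let's wait together for just a moment longer.", 3.0),
]

def run_scenario(say):
    for phrase, pause_s in SCENARIO:
        say(phrase)          # text-to-speech on the real agent
        time.sleep(pause_s)  # timing, not recognition, carries the dialog

run_scenario(print)
```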
Dialog-Style System
Ishiguro et al. [26] have shown that a person begins to have a feeling of involvement, or participation awareness, simply by watching two agents having a dialog according to a scenario. This feeling of dialog is important for ensuring that the communication between an agent and a person seems natural and acceptable. Therefore, a system comprising a pair of "Side-Bots" was used in the following experiments, as shown in Fig. 9.
Experiments
The proposed system was introduced into a medical ward for experiments with patients after approval was obtained from both our institution's ethical review board and that of the hospital. Experiments with 12 older adults at a care facility revealed information about i) the effect of attracting attention, and ii) the audibility of, and the patients' comfort with, the contents of the agent's utterances. The experiment duration was approximately 30 min, and the effect, audibility, and comfort were evaluated by observation and based on a questionnaire administered to the participants.
The agent system, consisting of two "Side-Bots", was operated manually from a remote location not visible to the participants. This configuration was used to observe the patients' responses and to choose appropriate contents for the agent's utterances: through analysis of the results of this experiment, the agent's utterances were chosen so as to elicit the expected responses from the patients.
The experiments comprised three conversation parts, each of a different type: i) word repetition (type 1), ii) everyday conversation with one agent (type 2), and iii) everyday conversation with a pair of agents (type 3), as shown in Fig. 10. In type 2, robot A is programmed to reply to the participant's answers, whereas robot B is programmed not to reply, to reveal differences in the participant's impressions. In type 3, robots A and B carry on a dialog according to the scenario, as shown in Table 2, and then ask the participant a question and respond to the participant's answer. Multiple scenarios (five to twelve, depending on the type) were prepared for each conversation type to avoid dispersion of the experimental results. The average durations of the three types of scenario were approximately 1, 5, and 15 s, respectively, based on the discussion in Section 3.2.
Results
The experimental results were evaluated using the objective and subjective measures described in Section 5, and the participants' preferences between robots A and B were recorded along with other indexes, such as attention/recognition scores and stressfulness. The rate at which attention was attracted by the added LEDs and the name call before the conversation, as measured by the participant's line of sight, was approximately 70-90% on average (50-100% depending on the participant), as shown in Fig. 11 (attention score vs. conversation type). This result reveals that the LED and name call did attract attention, irrespective of the type of conversation. The rate of recognition, as measured by the participant's response words, was 40-100% depending on the participant; conversation type 1 was less recognizable, presumably because the words were spoken abruptly.
The questionnaire results showed that 75% of the participants did not feel stress. The remaining 25% felt some stress, mostly because of difficulty in hearing the agent's artificial voice. The impressions of the two agents (robots A and B) did not differ much, which means that whether or not the agent replied did not affect the preference for agents.
Regarding the naturalness of the conversation, 70% answered that the conversation felt natural, which indicates the practicality of the scenario-based dialog method; the remaining 30% felt otherwise on points such as the conversation interval. This issue warrants further investigation.
In addition, the results of this experiment show that the scenario-based conversations could attract participants' attention and that their attention was retained so long as the agents continued to speak. This means that the participants' attention can be maintained for up to 15 s when the type-3 scenario is employed. The major purpose of this agent system is to let the person on the bed rethink and suppress behaviors leading to falls until a nurse arrives at the person's side. The dialog-style conversation not only creates a natural and acceptable feeling but also continues for the longest period among the three types and is the most effective for retaining a person's attention. Thus, it is the most suited from the viewpoint of developing the content of the agent's utterances.
Experiment for System Verification
Verification of the system in a real hospital and care facility, with continuous system operation over an extended duration, is the most important phase of this type of system development. The sensor system, bedside agents, and dialog contents were integrated into a fully automated active monitoring bedside agent system. The timing and content of the utterances in the following experiments were determined and controlled automatically by status changes of the sensor system described in Fig. 4, which indicated the patient's posture changes.
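As an illustration of this control flow, the sketch below shows one way the posture-driven triggering could be organized. The state names, scenario texts, and callback interface are assumptions made for illustration; they do not reproduce the actual sensor interface of Fig. 4 or the utterance contents of Table 3.

from enum import Enum, auto

class Posture(Enum):
    LYING = auto()
    SITTING_UP = auto()
    BED_EDGE = auto()   # about to leave the bed: highest fall risk

# Illustrative scripts; the real utterance contents correspond to Table 3.
SCENARIO_FOR_STATE = {
    Posture.SITTING_UP: ["Good evening. Is anything the matter?",
                         "I see. A nurse will be with you shortly."],
    Posture.BED_EDGE:   ["Please wait a moment on the bed.",
                         "Someone is on the way to help you."],
}

def call_nurse() -> None:
    print("[system] nurse call activated")   # placeholder for the alarm line

def speak(text: str) -> None:
    print(f"[agent] {text}")                 # placeholder for text-to-speech

def on_posture_change(old: Posture, new: Posture) -> None:
    """Invoked by the sensor layer on every posture-state change; a change
    into a risky state raises the nurse call and starts the matching talk."""
    if new in SCENARIO_FOR_STATE and new != old:
        call_nurse()                          # alarm and voice call together
        for line in SCENARIO_FOR_STATE[new]:
            speak(line)                       # keep the person's attention

# Example: the person sits up, then moves to the edge of the bed.
on_posture_change(Posture.LYING, Posture.SITTING_UP)
on_posture_change(Posture.SITTING_UP, Posture.BED_EDGE)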
Positive results were obtained through preliminary trials, but additional experiments need to be conducted. An outline of this phase is described in this section. The experimental procedure was screened and approved by both the ethical review committee of our institute and that of the facility.
Preliminary Experiment with Students
An experiment with 11 students was conducted. The experiment duration was set to 240 min, and system operability was studied. Sounds were recorded at intervals of 60 s after the agents started talking. An example of the talk is summarized in Table 3. The scenario was controlled by the sensor output, which indicated the posture of the patient, as shown in Table 3. The contents of the scenario were also chosen such that they were not strongly affected by the patient's answer, whether yes or no, so as to provide "back-channel feedback" type utterances, as shown in the table. The major evaluation points were the safety and acceptance of the system. Function, response to the agents' speech, response time, duration of attention, stress and fatigue were examined as well. The participants' behavioral responses and utterances were measured using the sensor mats explained in Fig. 4, along with a voice recorder, for evaluation purposes. Stress and fatigue were evaluated using the participants' subjective answers to the questionnaires. The results show that i) "go in one ear and out the other"-type topics were useful, along with questions that need to be answered, for minimizing stress and fatigue, and that ii) the dialog method elicited responses from the participants, although frequent and short talks caused discomfort to the participants. It was also found that the participants' attention was retained so long as the agents continued to speak. The scenarios and their timings were readjusted on the basis of these results.
Preliminary Experiment at a Care Facility
A 48-hour experiment was conducted with two older adults each living alone and two residents of the care facility to verify system utility. The experimental scenario, experimental conditions, evaluation points and methods were the same as those in the experiment involving students. Video cameras were not used out of consideration for the participants' privacy, although microphones were used to record the conversations, similarly to the experiment explained in Section 5.4. The experiment was conducted in the rooms where the participants reside.
The sensing results were accurate and reflected the behaviors of the participants well. The participants' attention was retained during the periods of the agents' utterances, and the responses of the participants were mostly positive, although one of the residents gradually lost interest and a positive attitude owing to impaired hearing.
The staff opined that the volume of the sound must be adjustable and that the system must not be used with dementia patients. The overall impression was favorable because the residents appeared to be enjoying the agent's company. The results confirmed that the current system can be applied in actual care environments.
Conclusion
A bedside agent system that speaks to a person to reduce the risk of falling was developed in this study. The system monitors a person's posture and behavior, and when it senses risk, it simultaneously activates a nurse call alarm and a voice call to the person to ensure that the person stays on the bed until a caregiver arrives, thus reducing the chance of a fall.
This system was developed following the user-oriented approach, and it considers the different positions of users, including care staff (who provide care) and older adults (who are cared for).
The opinions of the care staff about the system and the reactions of the older adults to the system were considered at various stages of the development process. The knowledge gleaned from care staff pertained mainly to safety issues, and the feedback from older adults was mostly related to the contents of voice calls, which helped refine system acceptability. An overview of the process is shown in Fig. 12, and the details are summarized as follows.
First, the hardware design, such as the outer shape, size, and functions of the agent's mechanism, was reviewed by nurses and caregivers. Safety was the most emphasized viewpoint in this step because it is the most important issue when new equipment is introduced into a care environment. A prototype agent developed after several review processes based on user opinions was introduced into a community dwelling for healthy older adults for experimental verification. Basic functions of the agent, such as name calling and LED lighting, were studied in the experiment. In the development of Side-Bot, this process was repeated three times, as described in the paper.
Second, the software design of the agent, such as the contents of voice calls, was studied through multiple experiments. In this step, the major concern was acceptability of the system by real end users, namely, older adults. Two cycles of the revision processes were performed in the case of "Side-Bot".
One of the findings of this study is that an agent that simply talks to a person can be accepted in practice without much discomfort, and such an agent was found to be useful even without the use of speech-recognition technology.
Last, an integrated model of the hardware and software was introduced to care facilities and hospitals to be used by older adults needing care. Here, system safety and acceptability were evaluated preliminarily in a real care environment. The main concern in this step was to study and ensure the practical serviceability of the system.
The staff at hospitals/care facilities recognize the system's benefits because the system has the potential to help reduce the number of fall accidents. However, the final objective, that is, system serviceability, involves i) ensuring that the patient remains on a bed longer because of the voice call of the agent, ii) confirming that the nurse can arrive at the patient's bedside while the patient remains on the bed, and iii) preventing falls as a result. The objective of system serviceability warrants further investigation.
Moreover, further developing the system by adding i) timing control capability for the agents' speech, and ii) the ability to sense a person's detailed conditions, such as sleep/awake state, should be studied to realize a more human-friendly and versatile bedside agent system.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http:// creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
|
v3-fos-license
|
2022-05-10T16:32:01.123Z
|
2022-05-01T00:00:00.000
|
248614363
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2223-7747/11/9/1231/pdf?version=1651479243",
"pdf_hash": "2de85cdfe80431070d3999b6247ca96c227a2b86",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41917",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "39825fc74f2e78774e144d4710255dbd98d16c5b",
"year": 2022
}
|
pes2o/s2orc
|
Review of the Leaf Essential Oils of the Genus Backhousia Sens. Lat. and a Report on the Leaf Essential Oils of B. gundarara and B. tetraptera
A review of the leaf oils of the 13 species now recognised in the genus Backhousia is presented. This review carries on from, and incorporates data from, an earlier (1995) review of the then recognised eight species. The leaf oils of two new species of Backhousia, B. gundarara and B. tetraptera are reported for the first time. B. gundarara contains a mixture of mono- and sesquiterpenes, with α-pinene (14%) and spathulenol (11%) being the main members. In B. tetraptera, the principal component of the mainly terpenoid leaf oil is myrtenyl acetate (20–40%). The review also incorporates the two species of the genus Choricarpia, which have been subsumed into Backhousia, viz. B. leptopetala and B. subargentea. Due to its history in Backhousia, Syzygium anisatum, which has been transferred out of Backhousia, is included in the review for historical reasons.
Introduction
Backhousia Hook. & Harv. is a genus currently comprising 13 species, within the Family Myrtaceae, subfamily Myrtoideae [1]. It is now the sole member of Tribe Backhousieae. It was first described in Curtis' Botanical Magazine in 1845 by William Jackson Hooker and William Henry Harvey. The species so named was Backhousia myrtifolia Hook. & Harv. As the authors report: "This very pretty greenhouse shrub, its conspicuous petaloid calycine segments giving the idea at first sight of large corollas to flowers, was found by Mr. James Backhouse in the Illawara (sic) district of New South Wales; and, not being referable to any Myrtaceous Genus yet described, Mr. Harvey and myself are anxious to dedicate it to our mutual friend now mentioned, who, amidst his various and arduous labors of love during a voyage to, and journeyings in, various parts of Australia and South Africa, still found leisure to collect and to describe in manuscript many interesting plants, which his previous botanical acquirements enabled him to do with great judgement" [2].
The species in this genus grow as aromatic shrubs or trees (5-25 m tall), with leaves 3-12 cm long and 1-6 cm wide, which are arranged opposite to each other. The genus is represented, with one exception, as endemic to the rain forests and forests of eastern Australia (New South Wales and Queensland). Recently, one species has been identified from the Kimberley region of Western Australia.
The first report of leaf oils from the genus was in 1888, by the firm Schimmel & Co of Miltitz, Germany, who reported that the leaf oil of B. citriodora was almost exclusively (95%) citral [3].
The leaf oil of B. myrtifolia, from which the genus was named, was investigated by Penfold in 1922 [4]. Later work by Penfold et al. in 1953 [5] showed the existence of several chemical varieties in this species.

Backhousia gundarara

Backhousia gundarara was first collected in the upper region of the Prince Regent River; a second collection was made by G. and N. Sankowsky, also in the upper region of the Prince Regent River, in 2003. It is known only from these two locations, in the Kimberley region of Western Australia. The leaf oil sample described below originates from the Sankowsky collection. This species is the only species of Backhousia not to occur naturally on the east coast of Australia, although it is growing at Tolga in north Queensland from cuttings taken during the Sankowsky collection. The leaf oil of B. gundarara, produced in 0.3% yield (w/w fresh leaf), contained a mixture of mono- and sesquiterpenes in approximately equal amounts. Additionally present were six aromatic compounds (identified as aromatic from their mass spectra), totalling approx. 9% of the oil; these remain unidentified at the moment. The main monoterpenes were the hydrocarbons α-pinene (13.6%), limonene (3.6%), and p-cymene (1.2%). The oxygenated monoterpenes were not as plentiful, with the principal members being terpinene-4-ol (1.5%) and α-terpineol (1.2%). Of the sesquiterpenes, the major compounds were the alcohols globulol (6.1%), viridiflorol (3.3%) and spathulenol (11.1%). Of the hydrocarbons, the main compounds were aromadendrene (1.6%), viridiflorene (1.2%) and an unknown sesquiterpene hydrocarbon (unknown X, 2.1%), whose mass spectrum is given in the footnotes to Table 1. Additionally present in the oil was what is suspected, from mass spectrum and linear retention index (LRI) data, to be 2,4,6-trimethoxytoluene (0.4%). A detailed list of compounds identified in the oil is set out in Table 1 below, and a Total Ion Current (TIC) trace of the leaf oil from B. gundarara on a polar column is given in Figure 1.
Backhousia tetraptera Jackes
Backhousia tetraptera Jackes is a newly described species growing in gullies on Mount Stuart, Townsville, Qld, at an altitude of about 500 m. It occurs as a population of 170-180 trees. A second site has recently been found at Clement State Forest, near Rollingstone, Qld. It is a tree, usually growing 5-8 m in height, but can grow up to 15 m. It has leaves 5.5-9 cm in length and 1.5-3.8 cm in width. Oil glands are rather sparse in the leaves, and it was not surprising that the oil yield on steam distillation was low, 0.1-0.2%, w/w fresh weight leaves.
Three collections of B. tetraptera foliage were available for steam distillation: one individual tree, a bulk of 3 trees and a sample grown from seed from an individual tree. The oils obtained from the three samples were similar. There were a considerable number of monoterpenes present in this oil, though sesquiterpenes were also well represented. The main component was the ester myrtenyl acetate (24-46%). This ester was accompanied by lesser amounts of α-pinene (3.7-3.8%), linalool (5.0-8.9%), myrtenol (0.5-2.1%) and α-terpineol (0.3-0.7%).
Five chemical varieties have tentatively been suggested in this species. The structures of the compounds concerned are shown in Figure 3, and the compositions of the different varieties are given in Table 3 below.
Backhousia bancroftii F.M.Bailey
Backhousia bancroftii F.M.Bailey is a rainforest tree growing up to 25 m in height [37]. It occurs in the Cook and north Kennedy pastoral districts of tropical Queensland. The oil yield from this species was poor (0.03-0.10%) based on fresh leaves. Before the analyses by Brophy et al. [29], the only report on the leaf oil was from 1939, when Lahey and Jones found very poor oil yields and sesquiterpenes as the principal components, with α-pinene and esters as minor constituents [8].
Brophy et al. [29] found that the principal components of the oils of this species were terpenes (mainly sesquiterpenes), alkyl derivatives (alcohols and esters, mainly acetates) and aromatic compounds. There was significant between-tree variation in the oils.
In all but one of the trees examined, the main components were alkyl acetates: in the majority of trees, it was octyl acetate (33-62%), but in one bulk sample it was decyl acetate, with another tree containing approximately equal amounts of decyl- and dodecyl acetates and the corresponding alcohols. In all the oil samples, octyl-, decyl-, dodecyl- and tetradecyl acetates and the corresponding alcohols were significant components, between them accounting for the majority of the leaf oil. There were also small amounts of higher esters identified by mass spectrometry.
Terpenes were very minor components, two trees containing α-pinene, but all trees contained small amounts of sesquiterpenes (both hydrocarbons and oxygenated). In all cases they were individually <3%.
The two principal aromatic compounds identified in the oil of all trees were 2,4,6-trimethoxy-3-methylacetophenone and 6-hydroxy-2,4-dimethoxy-3-methylacetophenone (= bancroftinone (5)), shown in Figure 3. This latter novel compound is related in ring substitution to isobaeckeol [38]. In one cultivated tree of unknown origin, it accounted for 85% of the leaf oil, but in natural stands it accounted for trace-3%. 2,4,6-Trimethoxy-3-methylacetophenone accounted for 23% of the oil in one tree, but was usually present in the range 0.1-3.9%.
A list of the compounds identified in the oils of this species is given in Table 4. Compounds identified at the level of formulae only have been omitted, but a complete list can be found in [29].

Backhousia citriodora F.Muell.
Backhousia citriodora F.Muell. is a small-medium sized rainforest tree, endemic to Queensland, Australia. It occurs in the Sunshine coast region of Queensland near Eumondi, Maroochydore, Noosa and Woondon, in the ranges west of Mirriamvale, in the Mackay, Whitsunday, Townsville regions, and near Herberton, Queensland [25]. Several populations have been reduced to isolated trees through land clearing.
The leaf oil of B. citriodora was first described by the firm of Schimmel & Co of Miltitz, Germany in 1888 [3]. It was reported to be almost entirely (95%) citral. This was confirmed in 1923 by Penfold [22]. It has since become the source of a commercial industry for supply of geranial/neral. This is detailed in a recent comprehensive review in a sister publication by Southwell [26]. The presence of (Z)-iso-citral (11), (E)-iso-citral (12) and exo-isocitral (10) in these oils has been confirmed by Doimo [39]. The oil yield is 1.1-3.2% (w/w fresh leaf).
In 1950, Mr. J. R. Archbold, who was collecting from natural stands of the species near Miriam Vale, about 300 km north west of Maryborough, QLD, noticed slight differences in the odour of the oil produced by some trees, indicating that a different type of oil was being produced by some plants in the area. An examination of single-tree oil samples by Penfold et al. indicated that the oil from these trees, which were morphologically indistinguishable from other Backhousia citriodora trees, contained L-citronellal (62-80%) [23][24][25]. The trees in question were found scattered throughout a rocky hillside area of about 2 hectares. The variant trees were located in 2 pockets, each containing about 12 trees, of which about half were the variants (one tree being about 27 m in height and slightly over 2 m in girth at breast height). Nothing else was published on this citronellal variant for about the next 50 years. As part of a systematic breeding project to produce clones of B. citriodora with a greater percentage of citral, 16 open-pollinated families were selected. From this trial, 3 trees, out of 272 sampled, gave the L-citronellal oils. As part of this trial, a re-examination was undertaken of the parent trees in the population at Noosa, QLD, from which the parents of the 3 citronellal-producing offspring had originally been obtained. It was found that 1 tree was producing the L-citronellal oil. Breeding trials from this 1 tree were then undertaken [25].
The composition of the leaf oils from both chemotypes is given in Table 5. This is based on the oil compositions obtained from the 3 clones taken from the open-pollinated trees in the breeding trials for the L-citronellal chemotype and from commercially harvested material of the citral chemotype [23,26]. The oil yield from the L-citronellal chemotype was 1.8-3.2% (w/w dry weight). The structures of the numbered compounds are given in Figure 3.

Backhousia enata A.J.Ford

Backhousia enata A.J.Ford is a relatively recently described species [40,41]. It is a large shrub or tree growing to 5-15 m in height, with a trunk diameter of up to 20 cm at breast height. It occurs in northeastern Queensland, where it is endemic to the 'Wet Tropics' and is currently confined to the Tully River catchment area. It inhabits notophyll vine forest/rainforest on soil derived from rhyolite and basalt. In 2007, there were fewer than 200 individuals known.
The leaf oil of B. enata bears no similarity to that of B. myrtifolia, its nearest morphological relative, whose leaf oil is dominated by the aromatic ethers, elemicin, isoelemicin, methyl eugenol or methyl isoeugenol.
Backhousia hughesii C.T.White
Backhousia hughesii C.T.White is a tree growing up to 30 m in height. It grows in the Atherton tablelands (in the Cook pastoral district) of Queensland [33]. Early work on the leaf oil of this species by Jones and Lahey, published in 1938 [7], showed that it contained mostly D-α-pinene and D-β-pinene. Brophy et al. [29] in 1995, who examined the oil of this species from 3 populations, found that the oil contained mainly sesquiterpenes rather than monoterpenes. One tree contained 12% of α-pinene, but all others examined contained <5%. The oil yield (on a fresh weight basis) was 0.13-0.45%. In contrast to other Backhousia species, there appeared to be only one chemotype.
Backhousia kingii Guymer
Backhousia kingii Guymer is a relatively recently described species [30]. It is a tree growing up to 20 m and is endemic to subcoastal, central eastern Queensland in the Leichhardt, Wide Bay and Burnett pastoral districts [33]. It grows in noto-or microphyll semi evergreen vine thickets in an altitude range of 0-400 m above sea level.
As part of the survey of the oils of Backhousia, Brophy et al. were able to confirm the presence of three of the chemotypes (elemicin, methyl eugenol, and methyl isoeugenol) [29], but were not able to confirm the isoelemicin chemotype, first recorded by Penfold in 1953, in the trees available to them.
The analyses of samples of the methyl eugenol, methyl isoeugenol, and elemicin chemotypes, together with the isoelemicin chemotype taken from Penfold et al. [5], are listed in Table 10. The oil yields obtained by Brophy et al. in 1995 [29] were in the range 1.0-2.2% (on a fresh weight basis), although 1 tree of the elemicin chemotype gave a yield of 0.5%. The oil yields obtained by Penfold et al. and Hellyer et al. were lower, despite the fact that they were measured on a dry weight basis (0.1-0.7%). As can be seen from Table 10, one aromatic ether dominated the oil of each chemotype. This compound is accompanied by a large number of terpenes (usually sesquiterpenes). Several compounds that were identified only at the formula level have not been included here, but can be found in [29]. Structures of numbered compounds are given in Figure 3.
Backhousia oligantha A.R.Bean
Backhousia oligantha A.R.Bean, called Backhousia sp. (Dicot Pilferer 12671) in a previous publication [29], is a small tree growing to a height of 4 m, but is often multi-stemmed, forming a low groundcover. It is found in semi-evergreen microphyll vine thicket near Biggenden in the Wide Bay pastoral district of south-east Queensland [29,43].
The leaf oil also contained a homologous series of both alkanols and their corresponding acetates. The series commenced with octanol (0.2%) and continued to tetradecanol (0.4%), with the principal members being decanol (2.2%) and dodecanol (8.2%). The alkyl acetates corresponded to the alkanols found, with the odd-numbered members present in lesser amounts than the even-numbered members; the principal members were decyl acetate (1.5%) and dodecyl acetate (8.0%). Several propionate esters were also detected (decyl and dodecyl), but were present in amounts of less than 0.4%.

Backhousia sciadophora F.Muell.

Backhousia sciadophora F.Muell. is a tree attaining a height of 30 m, and occurs in drier rainforest gorges and on steep slopes from Dungog (NSW) to Nambour (QLD) [42,43]. The oil was first reported on by Penfold in 1924 [27]. He reported that the oil from this species contained D-α-pinene (80-85%), the remainder of the oil being sesquiterpenoid.
Syzygium anisatum (Vickery) Craven & Biffin
Syzygium anisatum (Vickery) Craven & Biffin (syn. Backhousia anisata) is a fairly dense glabrous foliage tree that can reach 50 m in height and have a circumference of 4 m. It inhabits rainforests in a few places in the Bellingen and Nambucca valleys of northern New South Wales [42]. In its natural state, it is regarded as a rare and endangered species [16,45]. The species has since had two changes of name as its taxonomy has been reinvestigated, passing through Anetholea anisata (Vickery) Peter G. Wilson [20] and finally being placed in Syzygium anisatum (Vickery) Craven & Biffin [19]. Due to its long history in Backhousia, its leaf oils are still considered in this review.
McKern was the first to analyse the oil of B. anisata and found it to contain anethole at about 60% of the oil, the oil yield being 0.5% [15]. Brophy and Boland [16] reported that two chemotypes existed, with an oil yield of 1.3-2.0% (w/w fresh leaf) for both chemotypes of this species. The methyl chavicol (18) chemotype was found in approximately 25% of the trees examined (9 trees, including 1 bulk of 3 trees). Blewitt and Southwell [18], in a later and more widespread survey, found that the methyl chavicol (18) chemotype occurred in a ratio of approximately 1:4.7 to the E-anethole (19) chemotype. They found that three of the ten sites sampled contained both chemotypes occurring within meters of each other. Southwell et al. also found that a few trees contained approximately equal amounts of both E-anethole and methyl chavicol [17].
Discussion
In their 2012 paper [1], Harrington et al. argued that "there were four strongly supported clades containing two to four taxa, with no support for relationships among clades, and the relationships of Backhousia bancroftii and B. citriodora remain unresolved". They also state, on the basis of the analyses of the DNA data, that "The current distribution of Backhousia is inferred to be largely due to the contraction of Australian rainforests in the Neogene". This is supported by Figure 2 in their paper [1].
From this diagram, it might be expected that species grouped together might have similar leaf oils, and that the closer together the species were grouped, the more similar the leaf oils of the species might be.
Examining the dendrogram, Figure 4, there appear to be a significant number of species whose close proximity is also reflected in their leaf oils. Thus B. leptopetala and B. subargentea, species that have been transferred from the genus Choricarpia (and in the dendrogram are still mentioned as species of Choricarpia), do possess similar leaf oils, which are heavily based on monoterpenes, with α-pinene, limonene, and 1,8-cineole being prominent compounds in both species. There are, however, other compounds, present in small amounts, that do differ between the species.
Backhousia kingii was relatively recently split from B. sciadophora [30]. Both species possess similar leaf oils, in which monoterpenes predominate, with α-pinene and limonene being prominent components and sesquiterpenes being only minor components.
Backhousia hughesii and B. gundarara do not, however, follow this pattern, with B. hughesii having an oil rich in sesquiterpenes, with β-elemene and β-bisabolene being the major components. B. gundarara (Backhousia sp. Prince Regent in Figure 4), while possessing major amounts of globulol, viridiflorol, spathulenol and other sesquiterpenes, also contains considerable amounts of α-pinene, limonene and other monoterpene hydrocarbons. It also contains a series of as yet unidentified aromatic compounds, whose mass spectra are given in the footnotes to Table 1.
Of the three species in the clade containing B. myrtifolia, B. enata and B. tetraptera, B. myrtifolia stands out distinctively because of the presence of the aromatic ethers, methyl eugenol, E-methyl isoeugenol, elemicin and E-isoelemicin, as a principal component in its leaf oil, vastly overshadowing any other terpenoid components. The other two species contain mainly monoterpenoid leaf oils, with the B. enata oil being dominated by α-pinene and sabinene, while in the case of B. tetraptera (Backhousia sp. Mt. Stuart in Figure 4), the major components were myrtenyl acetate and linalool. B. citriodora, whose leaf oil is dominated by either citral or L-citronellal, stands apart from the other members of this clade. In the clade containing B. bancroftii, B. angustifolia and B. oligantha, B. bancroftii has an "unresolved" morphological relationship to the other two species [1], but the contents of its leaf oil, containing major amounts of alkyl acetates and alcohols, is a lot more closely related to the oils of B. oligantha, which also contains significant amounts of these compounds. These two species are the only species of Backhousia to contain the alkyl esters and alcohols in any quantity. B. bancroftii also contains varying amounts (trace to 23%) of 2,4,6-trimethoxy-3-methylacetophenone and bancroftinone (5) (trace->80%), not present in any other species of Backhousia.
Conclusions
The relationship of the leaf oils of a species of Backhousia to that species' place in the dendrogram (Figure 4) is rather problematic. Two species (B. bancroftii, B. oligantha) possess similar oils, containing a series of alkanols and their corresponding acetate esters, rare in the oils of Backhousia, though B. bancroftii bears an 'unresolved' relationship to B. oligantha. In other cases, e.g., B. kingii and B. sciadophora, the leaf oils are very similar and, in fact, B. kingii was split from B. sciadophora on morphological grounds. The two species which, in terms of classes of compounds, are most similar, B. myrtifolia, containing di- or tri-methoxy-allyl or -propenyl benzenes, and Syzygium anisatum, containing methoxy-allyl or -propenyl benzenes, are no longer in the same genus. It would appear that, with our present knowledge, it would be wise not to place too much reliance on the relative grouping of the species when considering their leaf oils: more research on the genes directing the syntheses of these components is required.
|
v3-fos-license
|
2021-10-30T15:07:07.512Z
|
2021-01-01T00:00:00.000
|
240188574
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2021/91/e3sconf_iims2021_04005.pdf",
"pdf_hash": "2f45d5c6bc1572b22301c9e83a865bb9a7d7241f",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41918",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "c69adda8f70789ade4470c61c323122a441e5953",
"year": 2021
}
|
pes2o/s2orc
|
Study of basic environmental performance indicators of a coal mining enterprise
Current processes of environmental law enforcement require the use of innovative approaches to the problem of environmental management. In this regard, an adequate choice of the environmental performance indicators of an enterprise, and of the technique for their analysis, aimed at developing efficient, environmentally friendly management decisions, is of great importance. The technique for calculating the environmental and eco-economic performance indicators of a coal mining enterprise, including the use of the weighted average hazard class of pollutants or of production and consumption waste, is discussed in the article. Various options for the application of this approach, which is of practical importance for reducing the labor intensity of management decision-making at industrial enterprises, are considered.
Introduction
One of the main functions of environmental management at an enterprise is the analysis of cost effectiveness of environmental activities. The analysis of environmental performance indicators of enterprises, especially in coal mining [1][2][3], is the most important component of the environmental management system at the macro and micro levels [4,5].
The purpose of the analysis of environmental performance indicators of enterprises is to form an information basis for making decisions in the field of environmental management, focused on improving the environmental protection activities of an enterprise and increasing the efficiency of the use of natural resources.
To assess the balance between production activities and environmental protection at an enterprise, the most informative indicators (capable of providing a complete analysis in terms of temporal relationships and relationships within the "environment-production" system) should be selected, since the quality of the source information largely depends on the quality of environmental management models.
Results and Discussion
Improving environmental protection at the present stage of economic development consists in the efficient management of eco-economic systems of various levels [6][7][8][9][10][11], which requires the use of eco-economic indicators providing maximum information content with minimum labor intensity of their calculation.
At the same time, it is important to keep an accurate record of the mass of pollutants, which can be expressed not only by the sum of the actual values, but also by the reduced mass, which makes it possible to determine the toxicity of each ingredient to obtain a mono-pollutant. Many eco-economic indicators, such as economic damage from negative impact on the environment and its derivatives [12,13] are calculated on the basis of a mono-pollutant.
When solving most of eco-economic problems, the problem of taking into account the hazard class of pollutants arises, which is especially important for large industrial enterprises with highly diversified negative impact.
The calculation of the weighted average hazard class of pollutants [12,13], determined from either the actual or the reduced mass of each pollutant, is proposed in this paper. The reduced mass of a pollutant allows the toxicity of individual ingredients to be determined through the indicator of relative hazard, taken as the reciprocal of the maximum allowable concentration. The weighted average hazard class is calculated from the actual masses of pollutants by formula (1) or from their reduced masses by formula (2):

WA_{HC} = \frac{\sum_{i=1}^{n} HC_{i}\, m_{i}}{\sum_{i=1}^{n} m_{i}},  (1)

WA_{HC} = \frac{\sum_{i=1}^{n} HC_{i}\, M_{i}}{\sum_{i=1}^{n} M_{i}},  (2)

where WA_{HC} is the weighted average hazard class of pollutants; HC_{i} is the hazard class of the i-th pollutant; i is the type of pollutant; n is the total number of pollutants; and M_{i} is the reduced mass of the i-th pollutant, conv. t, which is calculated by formula (3):

M_{i} = m_{i} \cdot A_{i},  (3)

where m_{i} is the actual mass of the i-th pollutant, t, and A_{i} is the indicator of the relative hazard of the i-th pollutant, conv. t/t, which is calculated by formula (4):

A_{i} = \frac{1}{RI_{i}},  (4)

where RI_{i} is the regulatory indicator of the i-th pollutant, mg/m3. The daily average maximum allowable concentration of the i-th pollutant (MACDi), the one-time maximum allowable concentration of the i-th pollutant (MACOTi) or the approximate safe level of exposure to the i-th pollutant (ASLEi) can be used as the regulatory indicator, in that order of priority. Table 1 shows the results of calculating the reduced mass of pollution by the JSC Chernigovets enterprise on the basis of official data; codes of pollutants are shown in brackets. Table 1 shows that the enterprise emits pollutants of III and IV hazard classes into the air. Nitrogen dioxide (III hazard class), inorganic dust with SiO2 content up to 20% (IV hazard class), carbon monoxide (IV hazard class) and inorganic dust with SiO2 content from 20 to 70% (IV hazard class) have the largest actual masses. Table 2 presents data on the production and consumption waste of the JSC Chernigovets enterprise, indicating the waste codes in accordance with the Federal Waste Classifier Catalogue (FWCC).
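For concreteness, formulas (1)-(4) can be combined into a short calculation. The sketch below uses invented pollutant data; the names, masses, hazard classes, and MAC values are illustrative assumptions, not figures from Table 1.

from dataclasses import dataclass

@dataclass
class Pollutant:
    name: str
    hazard_class: int      # HC_i (class I = 1 ... class V = 5)
    actual_mass_t: float   # m_i, tonnes
    mac_mg_m3: float       # RI_i: daily average MAC (or one-time MAC / ASLE)

    @property
    def relative_hazard(self) -> float:
        return 1.0 / self.mac_mg_m3                       # A_i, formula (4)

    @property
    def reduced_mass_t(self) -> float:
        return self.actual_mass_t * self.relative_hazard  # M_i, formula (3)

def weighted_average_hazard_class(pollutants, reduced=True) -> float:
    """WA_HC over reduced masses, formula (2), or actual masses, formula (1)."""
    mass = (lambda p: p.reduced_mass_t) if reduced else (lambda p: p.actual_mass_t)
    return (sum(p.hazard_class * mass(p) for p in pollutants)
            / sum(mass(p) for p in pollutants))

# Invented example emissions (not the Chernigovets figures):
emissions = [
    Pollutant("nitrogen dioxide", 3, 120.0, 0.04),
    Pollutant("carbon monoxide", 4, 900.0, 3.0),
    Pollutant("inorganic dust (SiO2 < 20%)", 4, 300.0, 0.15),
]
print(f"WA_HC (actual masses):  {weighted_average_hazard_class(emissions, reduced=False):.2f}")
print(f"WA_HC (reduced masses): {weighted_average_hazard_class(emissions, reduced=True):.2f}")

Comparing the two outputs illustrates the point of the reduced mass: weighting by toxicity shifts the average toward the pollutants with the lowest allowable concentrations.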
Table 2 shows that waste of the V hazard class (mainly overburden) accounts for more than 99% of the total mass of the generated waste. Among the other types of waste, the largest mass belongs to a IV hazard class waste, sludge from cesspools (3375 tons). The method of waste management is of great importance for increasing the efficiency of environmental protection activities. In this case, 93.7% of the total amount of waste transferred for recycling to external parties is waste of IV hazard class. Overburden (low-hazard waste of class V) is disposed of by the enterprise independently in compliance with environmental requirements.
Below is the calculation of the weighted average waste hazard class. Table 3 shows the results of calculating the economic damage from the negative impact on the soil of the production and consumption waste of the JSC Chernigovets enterprise. Table 3 shows that during the operation of the coal mining enterprise, the maximum share in the total value of the economic damage caused is occupied by waste of V hazard class, 98.74%, which is about 1.25 billion rubles.
The main idea of this modification of the regulatory method is that the entire mass of pollution is considered to be in excess of the limits, for which a five-fold multiplier is applied.
Conclusion
Analysis of the environmental performance indicators of a coal mining enterprise using the technique for calculating the weighted average hazard class of pollutants, based on the actual or reduced mass of pollutants or of production and consumption waste, is of practical importance when conducting the following studies:
• calculation of economic damage from negative impact on the environment and other eco-economic indicators;
• determination of the hazard class of an enterprise, including for the purpose of exemption from pollution charges;
• calculation of the level of penalties for violation of environmental legislation and excessive negative impact;
• identification of environmental "bottlenecks" of an enterprise to plan priority environmental protection measures;
• substantiation of the effectiveness of the use of one-time and current environmental costs;
• solving other eco-economic problems.
|
v3-fos-license
|
2021-05-07T00:03:32.852Z
|
2021-03-02T00:00:00.000
|
233861419
|
{
"extfieldsofstudy": [
"Philosophy"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/papq.12342",
"pdf_hash": "b0894316f92d95469c56b01a6627a666633064b8",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41919",
"s2fieldsofstudy": [
"Philosophy"
],
"sha1": "74fa58d8613b24bc0b41354d33784dffa1a10352",
"year": 2021
}
|
pes2o/s2orc
|
TURNING ABOUTNESS ABOUT
There are two families of influential and stubborn puzzles that many theories of aboutness (intentionality) face: underdetermination puzzles and puzzles concerning representations that appear to be about things that do not exist. I propose an approach that elegantly avoids both kinds of puzzle. The central idea is to explain aboutness (the relation supposed to stand between thoughts and terms and their objects) in terms of relations of co-aboutness (the relation of being about the same thing that stands between the thoughts and terms themselves).
ABOUTNESS AND CO-ABOUTNESS
Representations are often about things. My belief that Porto is beautiful is about Porto, and the name 'Greta Thunberg' is about Greta Thunberg. This aboutness, also known as intentionality, 1 is the focus of this paper. Although many sorts of representations (thoughts, attitudes, linguistic items, maps, pictures, etc.) have aboutness, I will mostly discuss beliefs. Not much hangs on this choice; much of what I say here can be generalized to other kinds of representations.
To use a common illustration, consider a pair of archers drawing their bows at a range of targets. The archers stand in for representations, and (according to the orthodox conception of aboutness at least) the various targets that the archer might hit stand in for the objects that are candidates to be what the representation is about (candidate intentional objects).
There is a related feature of representations that will be of special interest here. Some representations have a common intentional focus, they are about the same thing. Two beliefs of mine may be about Stockholm. Different agents often have beliefs about the same thing. In terms of the archer illustration, the archers may direct their shots at the same target. This relation of having a common focus is sometimes called 'intentional identity', a label due to Geach (1967, p. 627). I will use 'intentional identity', 'co-aboutness', and 'co-intentionality' interchangeably. 2 We can ask whether any pair of representations are co-intentional, but the question is particularly interesting when it concerns apparently empty representations, that is, representations that appear to be about things that do not exist.
TARGET FIRST APPROACHES
According to a common approach to explaining aboutness and co-aboutness, we ought to focus on the objects representations are about (their 'intentional objects'). These are what I will call 'target first' conceptions of aboutness. In terms of the archer simile, target first approaches suggest that which objects the arrow actually hits captures the intentionality of the archer's shot. If this is right, to understand the intentionality of a given representation, we need only identify its intentional object, and to explain why a given representation has the intentionality it has, we need only explain why it is about that object. Roughly speaking, those who adopt the target first approach suggest that aboutness is just reference and that co-aboutness is just co-reference.
According to the target first approach, what explains co-aboutness is the presence of some object that the representations are about. Priest (2005, p. 65 n. 12), Salmon (2005, pp. 105-108), Parsons (1980, p. 65 n. 2), and others defend theories of co-aboutness that are target first in this way.
Target first approaches to aboutness and co-aboutness are attractive and plausible. However, there are puzzles that are almost universally taken seriously by those working on aboutness that arise only if we adopt the target first approach. This suggests that those figures in the literature are committed, implicitly or explicitly, to the target first approach.
GOALS AND THE PLAN
My primary goal in this paper is to propose an alternative approach to aboutness. I propose that when explaining aboutness and co-aboutness, we ought to focus on the representations themselves and the relationships that stand between them (how the bows are directed). What is distinctive about non-target first approaches is that they do not give a central explanatory role to intentional objects. My proposal gives the objects the representations are about no role in explaining the aboutness of representations. The proposal is also distinctively co-aboutness first; it is not just archer first, so to speak, it is archers first. In a nutshell, the idea is that the aboutness of particular representations is explained in terms of which representations they are co-intentional with. I propose to explain aboutness in terms of relations of co-aboutness. 3 For those familiar with abstractionism in the philosophy of mathematics, my proposal can be understood as a kind of abstractionism about intentionality. I will argue that understanding aboutness in this co-aboutness first way helps us dissolve stubborn and influential puzzles that many leading theories of aboutness face and leads to an otherwise attractive and interesting approach to aboutness.
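For readers who like the abstractionist analogy spelled out, the guiding principle can be put schematically on the model of Hume's Principle; the notation below is only an illustrative sketch, not a final formulation, with '\sim' standing for the co-aboutness relation between representations:

\[
\mathrm{about}(r_{1}) = \mathrm{about}(r_{2}) \;\longleftrightarrow\; r_{1} \sim r_{2}
\]

Just as Hume's Principle fixes talk of numbers via equinumerosity between concepts, a principle of this shape would fix talk of what representations are about via co-aboutness between the representations themselves.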
Here is the plan. I first distinguish the narrow content of representations and their aboutness. In the process, I clarify the explanatory roles that aboutness plays. This will yield some important tools that will allow us to evaluate theories of aboutness later in the paper. I then discuss two sorts of puzzles that most theories of aboutness face, underdetermination puzzles and puzzles concerning representations that appear to be about things that do not exist. These puzzles are seldom tackled together. This is significant because some well-received solutions to one of these kinds of puzzles do not help (and sometimes hinder) efforts to solve the puzzles of the other kind. I will then give a diagnosis of these puzzles; they arise only if we adopt a target first approach to aboutness and co-aboutness. I then sketch my own approach, discuss some constraints on how it should be implemented, and explain how it dissolves the puzzles in question. Finally, I will consider and respond to two lines of objection.
ABOUTNESS AND NARROW CONTENT
Let us distinguish the aboutness of an attitude from its narrow content. I will follow Lewis (1981, 1986) and Jackson (2010, 2015) in claiming that an attitude's intentionality does not supervene on its narrow content. Suppose there are two situations in which Jill is sitting at a bar when she sees a man walk in and sit down. In both cases, she sees the man is wearing a hood (she cannot see his face), and in both cases, she does not recognize the man but forms the belief that the man sitting at the bar is tall. The only difference between the cases is that the man at the bar is Jill's brother in one case, but a stranger in the other. What Jill's belief that the man at the bar is tall is about is different in these two cases; in one, it is about her brother; in the other, it is not. But there is also something that her beliefs have in common across the cases. They are the same from her point of view, at least in some sense. Let us call the feature of beliefs that captures this commonality the 'narrow content' of those beliefs. Suppose the beliefs in question are also true; the man at the bar really is tall in both cases. An attitude's narrow content captures its role in the psychology of the agent, guiding behavior and cognition, and any phenomenology associated with that attitude. The contrast between the two cases is a good reason to believe that an attitude's intentionality does not supervene on its narrow content.
Between the two scenarios, there is a difference in what the beliefs are about but no difference in their narrow content, so an attitude's aboutness does not supervene on its narrow content. By taking the claim that an attitude's intentionality does not supervene on its narrow content as a starting point, I am setting aside views according to which the narrow content of an attitude fully determines its aboutness. 4 Narrow content and aboutness play different theoretical roles. An attitude's narrow content is tied to its psychological, cognitive, behavioral, and phenomenal role. Exactly how to understand narrow content is not the focus of this paper.
A representation's aboutness helps explain co-ordination, communication, agreement, and disagreement concerning the intentional target of that representation. We often communicate, agree, coordinate, or disagree about things. Aboutness allows agents to track things across representations. For example, when I go from knowing little about some object to knowing more about that same object, one of the reasons that this is an instance of learning more about one thing rather than coming to represent a different object is that the earlier and later representations are about the same thing. Or to take an interpersonal example, when someone thinks that Daniel is in his office and someone else thinks Daniel is at home, they may be disagreeing partly in virtue of their beliefs being about the same thing. The aboutness of the representations unifies distinct representations by their tracking of the same subject matter, what they are about.

Yablo (2014), Fine (2014, 2016), and others suggest that what a representation is about crucially depends on what makes it true or would make it true. In this way, the aboutness (or subject matter) of a representation is supposed to supervene on its truth conditions. For present purposes, I will stay neutral on whether this is correct. It is not obviously correct. For example, on a plausible view of the truth conditions of beliefs, Pierre's belief that London is pretty might be true just in case there is a Londonish thing (a Londonizer) that is pretty, but this belief could be about London all the same (Lewis 1981, 1986). For a recent discussion concerning how aboutness and truth conditions might come apart, refer to Sandgren (2019b, sect. 4).
UNDERDETERMINATION
Underdetermination puzzles are much discussed and influential. They take many forms and differ in detail (Wittgenstein 1953; Quine 1960; Benacerraf 1965; Devitt 1981; Kripke 1982; Lewis 1983b; Putnam 1988). 5 Underdetermination puzzles have a common structure; there are too many things that are candidates to be the intentional object of a representation and not enough resources to distinguish between them. That is, it is often underdetermined what the intentional object of a representation is.
Here are two examples to illustrate the structure of underdetermination puzzles, the first adapted from Wiggins (1968) and the second from Kripke (1982). Suppose Harriet, on coming into a room, forms the belief that Tibbles is on the mat. What is this belief about? One thing it seems to be about is Tibbles. But what is Tibbles? There are many objects on the mat: cat legs, whiskers, ears, fusions of whiskers and ears, cats-minus-17-hairs, cats-minus-18-hairs, and so on. 6 Which one of these is what the belief is about? Perhaps Harriet intends to pick out a cat, rather than a cat leg, and this goes some way toward ruling out some objects as candidates to be what the belief is about. The problem is that however rich the relevant intentions are, there will always be several objects that are candidates to be what the belief is about that fit these intentions equally well. That is, which object this relatively banal belief is about is underdetermined. There appear to be too many candidates to be what the belief is about and not enough resources, in the intentions of the believer or anywhere else, to uniquely determine which object the belief is about. 7 Suppose Jack is learning how to add for the first time. His mathematics teacher demonstrates how to add two numbers together and gives him some practice sums. On completing the first sum, 3 and 4, and arriving at 7 as the answer, Jack forms the belief that he has just performed addition. What operation is this belief about? The natural answer is 'addition'. But there are, to put it mildly, a whole lot of different operations that deliver the same output as the plus function when 3 and 4 are the inputs, but deliver quite different results when the inputs are different. Given the sheer number of functions, this will be true for any function and set of inputs. There will always be more than one function that delivers the same outputs as a given function over some set of inputs but delivers different outputs given other inputs. If there are any functions at all, there are an awful lot of them. For this reason, if all we have to go on when assigning functions to representations is the outputs the function delivers in a relatively small set of cases, there will always be more than one function fit to be what the attitude is about. One particular function will never be uniquely selected from the throng. Again, what the representation is about is underdetermined; there are too many candidate targets and not enough resources for distinguishing between them.
When confronted with underdetermination puzzles of this kind, some, like McGee and McLaughlin (2000), deny that we can have genuinely singular thoughts about particular objects. Weatherson (2003, pp. 488-489) proposes instead that these problems force us to revise our conception of what it takes to think about particular objects, suggesting, with Jeshion (2002), that it is possible to have a de re belief about an object without being acquainted with it, either directly (e.g., via perception) or indirectly (e.g., via testimony).
Recently, Merlo (2017) and Openshaw (2020) have suggested that representations are in fact about all the candidate intentional objects.
Another common response to these kinds of underdetermination puzzles, defended by Lewis (1983b, 1984), Sider (2011), and others, is to claim that some of the candidate objects are especially eligible to be what representations are about. Eligibility is standardly conceived of as coming in degrees; some object might be more eligible than a second but less eligible than a third. This relative eligibility is taken to be independent of the psychology and conventions of representers. If some objects are more eligible than others, we have a way to break the troubling ties between candidate intentional objects. The relevant attitudes might be about the plus function partly because the plus function is intrinsically more eligible to be an intentional object than other similar functions. Harriet's belief might be about the cat on the mat partly in virtue of the relations of relative eligibility that stand between the different candidate intentional objects. To return to the archer illustration, if the eligibility suggestion is correct, some of the targets attract arrows to them more strongly than others. I will not address or evaluate this kind of eligibility move directly, although it will help to keep it in mind as we proceed.
'EMPTY' REPRESENTATIONS
The second kind of puzzle concerns representations that are apparently about things that do not exist. Puzzles concerning empty representations are also influential and stubborn. They significantly guided early analytic philosophy of mind and language (Brentano 1874; Meinong 1960; Russell 1905; Quine 1948). 8 Again, these puzzles come in many forms but have a common structure; certain representations seem to have aboutness, but there appears to be no object such that they are about it. There are apparently not enough candidate intentional objects. Suppose that an agent believes that Vulcan is rocky. This belief appears to be about Vulcan. But how could this be if there is no such planet? The apparent intentional object of this belief appears to be missing. So it seems hard to make sense of the intentionality of this belief (it seems hard to find an intentional object for that belief).
A common response to this kind of puzzle is to bring exotic objects into the picture. The central idea is that apparently empty representations have intentional objects, appearances notwithstanding. They are just not the sort of everyday objects we are familiar with. Different versions of this view involve different claims about the objects that apparently empty representations are about. Some, like Meinong (1960), appeal to non-existent objects. Others appeal to abstract objects (Salmon 2005; Thomasson 1999). Still others appeal to merely possible objects (Lewis 1978, 1983a). In each case, the idea is to add in some more objects as possible candidates to be what representations are about so that apparently empty representations end up with intentional objects after all.
INTERPLAY
Appealing to exotic objects to handle cases involving empty representations makes underdetermination puzzles harder. Underdetermination puzzles arise because there are too many candidate objects. By adding the exotic objects as candidate objects, we add even more objects that must be distinguished when assigning intentional objects to representations. Not only will there be all the everyday objects to choose from, there will be all the exotic objects as well.
Maybe there are eligibility constraints on aboutness that will help us distinguish between everyday objects as candidate intentional objects. But it seems like these eligibility constraints will not allow us to make the appropriate distinctions between exotic objects. Exotic objects are one thing; relatively eligible exotic objects are quite another.
Note that eligibility constraints can make some distinctions between exotic objects but not the distinctions we need to handle underdetermination. For example, maybe all the non-existent witches are more eligible than all the non-existent schwitches (where schwitches are the same as witches, except on Tuesdays when they are unicorns). But this sort of relative eligibility will not suffice. To do the required work, the eligibility constraints would have to distinguish between different particular exotic objects and not merely between different kinds of exotic objects. After all, we are interested in representations about particular objects. It seems quite implausible that particular exotic objects stand in these sorts of relative eligibility relationships; that one non-existent, merely possible, or abstract witch is inherently more eligible than another to be what an attitude is about.
These puzzles tend to be discussed separately, and this interplay is seldom noticed. Here is a lesson to take away: when considering and evaluating solutions to these two puzzles, we ought to keep an eye on how the resources to which we are appealing in our solution to one puzzle interact with the other kind of puzzle.
A DIAGNOSIS
These two kinds of puzzles have been hugely influential in shaping the course of (at least) analytic philosophy of mind and language. Each kind of puzzle has given rise to its own enormous literature.
It seems to me that both kinds of puzzles (as they are traditionally posed) only arise if we adopt the target first approach. Underdetermination poses a problem because if it is radically underdetermined which object the representation is about, then the aboutness of those representations is also radically underdetermined. In cases involving apparently empty representations, the relevant target appears to be missing and yet the representation has aboutness. This is puzzling if the aboutness of a representation is supposed to be captured in terms of the object it is about.
The observation that the puzzles are so influential, combined with this explanation of how the puzzles arise, is evidence that the target first approach is common, even if it is seldom stated explicitly. Of course, I do not claim that it is impossible to find a solution (rather than a dissolution) to both kinds of puzzles consistent with the target first approach. But if the target first approach gives rise to these stubborn and influential puzzles, it is worth considering alternative approaches. This diagnosis also reveals some interesting theoretical possibilities; if we can explain aboutness and co-aboutness in a non-target first way, we can sidestep these puzzles altogether.
CO-ABOUTNESS AGAIN
The target first conception of aboutness and co-aboutness leads naturally to the following conception of aboutness, co-aboutness, and the relationship between the two: what explains the aboutness of particular representations is the object they are about and what explains co-intentionality is that the representations are about the same object. In terms of the archer illustration, the target hit captures the directedness of the shots and the shots are directed in the same way if, and only if, the archers both hit the same target.
Geach famously poses a challenge to this way of understanding co-aboutness in line with the archer illustration.
[A] number of archers may all point their arrows at one actual target…but we may also be able to verify that they are all pointing their arrows the same way, regardless of finding out whether there is any shootable object at the point where the lines of fire meet (intentional identity). We have intentional identity when a number of people, or one person on different occasions, have attitudes with a common focus, whether or not there actually is something at that focus. (Geach 1967, p. 627)

The idea is that representations can be co-intentional even when there is no thing such that they are about it. That is to say, apparently empty representations (such as beliefs about Vulcan, a witch, or the fountain of youth) can be co-intentional. Priest (2005, p. 65 n. 12), Salmon (2005, pp. 105-108), and Parsons (1980, p. 65 n. 2) uphold the target first approach to co-aboutness in the face of Geach's challenge. They argue that when the relevant representations are apparently empty, their co-aboutness is explained by the presence of the object they are about. These accounts differ in detail but involve the same basic explanation of co-aboutness. Co-aboutness is explained by the presence of the object the representations are about (the target both archers hit).
These theories of co-aboutness face significant underdetermination problems of roughly the kind discussed earlier (Thomasson 1996, 1999; Sandgren 2018). This should not be surprising if my diagnosis of underdetermination puzzles is correct and given the interplay between the two puzzles discussed earlier. What follows is a summary of a recent formulation of this underdetermination challenge from my 'Which Witch is Which? Exotic Objects and Intentional Identity' (Sandgren 2018, pp. 729-731). The collections of exotic objects are typically uncomfortably large. There is not just one (non-existent, merely possible or abstract) witch, or merely a hundred, or merely a thousand. If we are committed to such things at all, there are ever so many witches, inter-Mercurial planets, and magic fountains. If the presence of an exotic object is going to explain why two apparently empty representations are co-intentional, we need a story about how each representation gets assigned this exotic object rather than some other and how two representations come to be about the very same exotic object. But the abundance of exotic objects, combined with the observation that these objects do not stand in causal relations with agents in the way that everyday objects do, means that it is extremely difficult to give a principled story about how two representations get to be about the same exotic object. There are just too many objects to select from and not enough resources to appeal to when assigning them to representations. This is a kind of underdetermination argument that threatens target first theories of co-aboutness in full force.
Others claim that we can make sense of co-aboutness without appealing to intentional objects in our explanation; they propose archer first theories of co-intentionality. Dennett (1968), Donnellan (1974), and Geach (1976) make early attempts at archer first explanations of co-aboutness. More recently, Perry (2001), Sainsbury (2010), Crane (2013, p. 165), Friend (2014), Pagin (2014), Sandgren (2019a), and Garcia-Carpintero (2020) have all defended archer first theories of co-aboutness. These views differ in detail, but they all attempt to explain co-aboutness without appealing to intentional objects (exotic or not). Instead, these accounts center on the features of the representations themselves and the relations that stand between them. 9

AN ALTERNATIVE EXPLANATION

I propose that we first adopt an archer first conception of co-aboutness and then appeal to relations of co-aboutness to explain the aboutness of particular representations. This proposal reverses the direction of explanation characteristic of target first approaches in two ways. First, instead of the aboutness of the respective representations in combination explaining their co-aboutness, their co-aboutness explains their individual aboutness. In this sense, the proposal is archers (in the plural) first. This is the most novel part of the proposal and the part which distinguishes it from other archer first accounts.
Second, according to the proposal, the relations of aboutness and co-aboutness are explanatorily prior to the object the representations are about. Accordingly, the object representations are about does no work in explaining their aboutness.
Here is an illustrative case: suppose I indicate some belief and ask 'what is that belief about?' Suppose you correctly answer 'Stockholm'. When you give this answer, are you providing Stockholm, the city itself, as the answer? In a sense you are: your answer involves representing Stockholm. But there is a sense in which you do not supply the object itself as an answer. To see this, consider a case in which the correct answer to a 'what is that belief about?' question appears to involve representing something that does not exist. For example, the correct answer might well be 'Vulcan', even if the questioner and the answerer agree that there is no such planet. In cases like these, the correct answer seems to involve representing Vulcan without requiring the answerer to produce the planet, as it were. So it seems as if when correctly answering questions like 'what is that belief about?', we indicate an object or set of objects but we do not, in general, provide the object itself as the answer. So perhaps when one correctly answers 'Stockholm' in response to the question, one does not provide Stockholm itself as an answer, rather one provides a representation which, if the answer is correct, is about the same thing as the belief in question. This is an alternative explanation of what is going on in our talk of what representations are about; we are really negotiating relations of co-aboutness, rather than dealing in the objects themselves. The aboutness of a given representation is, on this picture, a product of a broader representational economy. We can, it seems, make sense of much of our discourse about what representations are about while appealing only to relations of co-aboutness roughly along the lines just mentioned. What is more, as long as we adopt an archer first approach to co-intentionality, we can do all this without having to appeal to a fact about which of the many candidate objects the relevant representations are about in our explanation.
Recall that aboutness allows us to track things across representations, thereby facilitating disagreement, agreement, communication, and so forth. According to the orthodox view, the presence of an intentional object is required to unify the relevant representations as being about the same thing. I propose that on the contrary, we deal in co-aboutness directly when explaining this kind of tracking. What is more, many of the phenomena often associated with co-aboutness such as communication, disagreement, and agreement can arise in cases in which the relevant representations appear to be about things that do not exist. We can track what beliefs are about even when the intentional objects are apparently missing. However identification of subject matter is achieved in cases in which the relevant representations are empty, it is not obviously achieved in virtue of there being some object such that the representations are about it. In other words, aboutness is not just reference, and co-aboutness is not just co-reference.
Although the proposal is archers first, it is not archers only. The proposal leaves room for the objects representations are about; it is just that these objects do not do any work in explaining the aboutness of representations. Once we have the relations of being about the same thing that stand between the relevant representation and other representations, we get the object it is about for free. The presence of the object the representations are about is not taken to explain their co-intentionality, rather it is a descriptive (rather than explanatory) fact that when you have relations of co-intentionality, you also have the target the representations are about. This move fits with and is motivated by a kind of easy ontology approach defended by Carnap (1950) and, more recently, Thomasson (2015). We can still talk about 'the object the representation is about', but this target is, so to speak, a mere shadow of the relations of co-aboutness the representation stands in.
Note that the co-aboutness first approach leaves room for representations to have reference. We can sensibly ask about whether a representation has reference, where reference is taken to be something distinct from aboutness. For example, one might suggest that a belief about Stockholm refers to Stockholm in a way that a belief about Vulcan does not refer to Vulcan. This is perfectly compatible with my proposal. The crucial point is that unlike those who adopt a target first approach, my proposal involves rejecting the claim that aboutness just is reference and the claim that aboutness can be explained in referential terms. If my proposal is right, reference and co-reference are not what explains aboutness and co-aboutness. But that does not entail that there is no such thing as reference.
The central idea is that no matter how much a representer tries to think or talk about an object itself, they will, at best, only be able to produce yet another representation about the object. But this need not worry us. We can, I suggest, get everything we need with respect to aboutness and co-aboutness without appealing to the object itself as doing any explanatory work.
CHOOSING A THEORY OF CO-ABOUTNESS
The co-aboutness first proposal crucially involves appealing to an archer first theory of co-aboutness, and we had better choose an archer first view that allows us to recapture as many of the attractive features of the target first approach as possible. There are a number of options here. For the purposes of illustration, it will be helpful to have an example of an archer first view of co-aboutness in mind. To this end, I will outline the causal theory of co-aboutness defended by Donnellan (1974), leaving some details to the side. In broad terms, the idea is that co-aboutness is a matter of the representations themselves standing in the right sort of causal relations with each other. These causal relationships are characterized as usually involving individual psychological connections or deferential uses of words or concepts. 10 Crucially, the causal chain in question need not involve the object or objects the respective representations are about (although sometimes it will). For instance, consider a case in which two people who live in the same village read the same newspaper report claiming that there is a witch terrorizing the village. It seems as if these people can form beliefs about the same witch. According to the simple causal account of co-intentionality, any beliefs they might form concerning the witch are co-intentional because they have a common causal history involving the newspaper article.
I do not endorse the simple causal theory of co-aboutness. In fact, I think the simple causal theory is crucially limited. For example, as Edelberg (1992, pp. 574-575) and Everett (2013, p. 96) argue, there are cases of co-aboutness that do not involve the kind of common causal history present in the newspaper case. Edelberg and Everett discuss cases analogous to Frege's well-known Alpha-Ateb case except that the putative target of the representations is missing. For a recent discussion of this point, refer to Garcia-Carpintero (2020, p. 14). These cases suggest that although causal links between the representations in question are often crucial for explaining co-aboutness, co-aboutness does not always require such a link. There are also complications concerning how causally isolated agents might represent the same abstract object (e.g., a universal or a mathematical function). Again, whatever the explanation of co-aboutness in cases like this is, it cannot involve a causal link between the beliefs themselves because, ex hypothesi, the agents are causally isolated. Finally, there are issues arising from some ingenious cases due to Edelberg (1986, 1992), which seem to show that co-aboutness is sensitive to how the believers take the facts about the identity to be. It is not clear how one can accommodate these data within a simple causal account. The co-aboutness first approach does not stand or fall with the simple causal account of co-aboutness. There are a number of not-purely-causal archer-first accounts of co-aboutness to choose from, for example, Crane (2013, p. 165), Pagin (2014), Friend (2014), Sandgren (2019a), and Garcia-Carpintero (2020).
Which one do I favor? The short answer is, predictably, my own. My proposal handles both the newspaper case and the cases just discussed that cause trouble for the simple causal account (Sandgren 2019a, p. 3690). Moreover, some of the other not-purely-causal archer-first rivals to my proposal are not as general. For instance, Garcia-Carpintero's view only applies to fictional cases (in which the believers do not take the target of their beliefs to be actual, concrete objects) and not to mythical cases (in which the believers believe that the target of their beliefs is actual and concrete). 11 My theory also accounts, in a natural way, for the Edelberg-style cases I alluded to earlier that suggest that facts about co-aboutness are often sensitive to how the believers take the identity facts to be (Sandgren 2019a, p. 3692). These cases seem hard to capture within models based on similarity of representation (like Crane's), at least without augmenting the story considerably; refer to Sandgren (2019a, pp. 3683-3685). However, because my proposal is fairly complex and my primary goal here is to discuss the co-aboutness first approach, it is beyond the scope of this paper to spell out and further motivate my theory of co-aboutness. For the purposes of illustrating the co-aboutness first approach, I will work with the simple causal view. This choice is harmless for present purposes because my view and many of the other not-purely-causal archer-first theories of co-aboutness behave similarly to the simple causal theory (delivering the same verdicts for similar reasons) in the cases of co-aboutness discussed here.
IGNORANCE, ERROR, AND DISAGREEMENT
Target first approaches to aboutness and co-aboutness have a natural account of how agents can be ignorant or mistaken about what representations are about. Jill in the hooded man case is one such example. When the hooded man is her brother, her belief is about her brother, although she is ignorant of that fact. The explanation in line with the target first account is simple: what makes the difference is that the man she sees in the bar really is her brother in one case but not in the other.
The target first approach also yields a simple and attractive account of how it is possible for agents to disagree about a target. Often part of what is required for disagreement is that the relevant elements of thought and talk are about the same thing. According to the target first approach, inasmuch as thinking and talking about the same thing is required for disagreement, they are disagreeing only if and because there is an object such that the relevant thought and talk is about it.
These and other features of the target first approach can be recaptured on my proposed picture, provided we are careful about which archer first account of co-aboutness we adopt. Consider an extension of the hooded man cases discussed earlier. Suppose that in both cases (in the case that the man at the bar is a stranger and the case when he is Jill's brother), Jill also has a separate belief that her brother is excellent at table tennis. This belief is plausibly about her brother in both cases. But in one case Jill's belief concerning the man at the bar is co-intentional with her table tennis belief, while in the other it is not. If we adopt the straightforward causal view of co-aboutness, this would be because the two beliefs stand in different causal relations in the two cases. Jill may be unaware or mistaken concerning whether her beliefs are co-intentional. According to the co-aboutness first picture, which representations a representation is co-intentional with will determine its aboutness and she may be ignorant or mistaken about which representations are co-intentional. So if Jill is ignorant or mistaken about which representations her belief is co-intentional with, she is ignorant or mistaken about the intentionality of her belief.
Any archer first theory of co-aboutness worth its salt will allow for disagreement between co-intentional representations. Many archer first theories of co-aboutness do, and the simple causal account certainly does. Which causal relations the representations stand in can vary freely with what properties are being ascribed to the intentional target, so there is nothing stopping representations from being co-intentional while ascribing conflicting properties.
THE PUZZLES DISSOLVED
The co-aboutness first account does not face the underdetermination puzzles or puzzles concerning empty representations discussed earlier. There is no mystery about how representations that appear to be about things that do not exist have aboutness. When they have intentionality, they will often stand in co-aboutness relations and, as long as we adopt an archer first account of co-aboutness, the explanation of this co-aboutness will not involve identifying some common intentional object. If our explanation of intentionality does not appeal to intentional objects, it should not worry us that in some cases, there appear to be too few intentional objects to go around.
Traditional underdetermination puzzles do not arise either. The co-aboutness first explanation of the aboutness of a given representation will not involve identifying its intentional object. So the fact that there are too many candidate intentional objects need not concern us. Consider the case of Jack learning to add. I propose that the explanation of how he is thinking about the same function as his teacher does not involve both Jack and his teacher selecting the same function from the many. Rather, there is some explanation of how their representations are about the same thing that does not involve their intentional objects, and this directly explains the fact that Jack and his teacher can represent the same function. If we eschew intentional objects as an explanatory resource, our explanation of intentionality is not threatened by the fact that there are too many candidate intentional objects.
These are not solutions but dissolutions of the puzzles. The proposal does not yield guidance on how representations get to refer to one object on the mat rather than another, or to one mathematical function rather than another.
Objections
I will now consider some objections to the co-aboutness first approach. Because the co-aboutness first proposal is the central topic of this paper, I will limit myself to objections to the co-aboutness first approach taken as a whole and set aside objections to this or that archer first theory of co-aboutness.
LONELY REPRESENTATIONS
One might be tempted to object to the co-aboutness first approach as follows: suppose Robyn is looking out into a paddock at a horse. Suppose Robyn forms a belief that the horse in the paddock is gray. There is no one else around and, we can suppose, there might not be any representations of the horse other than Robyn's. Robyn's representation seems to be about the horse. How can this intentionality be explained by an appeal to the belief's being co-intentional with other representations?
First, it is plausible that Robyn has more than one representation of the horse. For instance, her belief about the horse might be about the same thing as her visual representation of the horse. If this is true, the aboutness of Robyn's belief can be captured partly in terms of its being co-intentional with some of her other representations.
But this response does not really address the spirit of the objection. What if Robyn had no other representations and there are no other representations of the horse? What if the representation is 'lonely'? Isn't the belief about the horse even if it is the only representation of the horse in the scenario? How could an account of aboutness that rests on relations of co-aboutness explain this?
I respond that there is another representation of the horse, even if we insist that there are no other representations of the horse in the scenario. The objector presenting the case is herself representing the horse.
But does this not make the intentional features of Robyn's belief too dependent on how it is described or on its relationship to us qua theorists? I do not think so. Recall that the aboutness of a belief can be distinguished from its role in the agent's narrowly construed psychology. The belief described in the case may only have its aboutness partly in virtue of its being about the same thing as the representation of the theorist considering the case, but its psychological role is independent of that fact. She may reason, talk, and behave in accordance with the belief. These beliefs will guide Robyn's behavior and cognition in a way that does not depend on their aboutness. Recall that aboutness is for tracking objects across representations. Aboutness allows us to track that horse (what the belief is about) across representations. In lonely representation cases, there are no other representations across which to track the intentional object. In these cases, the aboutness of the representation in question is idle, in a sense, except as far as the person describing the case and their representations are concerned. Within the scenario, there is no other representation across which that horse, qua intentional target, needs to be tracked.
MOVING THE BUMP UNDER THE RUG
Another objection is that the co-aboutness first proposal merely moves the puzzling underdetermination around. After all, for many pairs of representations, it will be underdetermined whether they are co-intentional. Am I not trying to explain away puzzling underdetermination with something that is itself underdetermined in the same puzzling way?
Co-aboutness will be somewhat underdetermined. Indeed, it would be suspicious if our account of co-aboutness did not allow for some underdetermination. Subject matter identification just is a somewhat messy business. However, this kind of underdetermination is importantly different from the kind of radical underdetermination at play in traditional underdetermination puzzles.
Suppose Harriet and Meg walk into a room and, in response to what they see, form beliefs to the effect that Tibbles is on the mat. In line with the traditional underdetermination puzzle, we might wonder how each of these beliefs comes to be about Tibbles, rather than one of the other candidate objects. According to the target first understanding of aboutness and co-aboutness, for these two beliefs to be about the same thing, the underdetermination has to be resolved, and it has to be resolved such that both Harriet's belief and Meg's belief are about the same object. They both need to uniquely pick out the same object. If we adopt the target first approach to aboutness, the kind of underdetermination tied to the traditional underdetermination puzzles gets in the way of delivering the correct verdicts about co-aboutness in simple cases like this. The kind of underdetermination that remains within a co-aboutness first story does not get in the way of these beliefs being co-intentional. If my proposal is right, what matters is that the representations are co-intentional. The Harriet and Meg case is a clear case of co-aboutness according to all the leading archer first theories of co-aboutness. Maybe Harriet and Meg disagree about where, exactly, Tibbles ends (spatially, temporally, or even modally), and this might make a difference to which representations their respective representations are co-intentional with. But this sort of underdetermination is confined, on my picture, to the disputed cases of co-aboutness and does not threaten the clear cases of co-aboutness; the underdetermination is correctly confined to borderline cases.
Concluding Remarks
The co-aboutness first approach is attractive and avoids some serious, influential, and stubborn puzzles that threaten its rivals. I have only presented the view in broad outline, and there are many important details to be filled in and refinements to be made. Nonetheless, I hope I have done enough to suggest that the co-aboutness first approach to aboutness is worth taking seriously and that there is worthwhile work to be done refining the view, getting clear on its limitations, and exploring the theoretical opportunities it offers. 12

Department of Historical, Philosophical and Religious Studies, Umeå University

NOTES

1 I prefer the 'aboutness' label because intentionality is often confused with intensionality.

2 I prefer the latter two labels because 'intentional identity' suggests that the relation must be explained in terms of identity relations among intentional objects. As we will see, this treatment of co-aboutness should not be taken as given.
3 In this way, my proposal differs from other archer first theories of aboutness defended by Farkas (2008), Kriegel (2008), Montague (2016), Mendelovici (2018), and others. These views, which are typically grouped under the umbrella of 'phenomenal intentionality theories', center on the intrinsic phenomenal features of individual representations in their treatment of intentionality. For a recent discussion of how phenomenal intentionality theories relate to questions of co-aboutness, refer to Clutton and Sandgren (2019).

4 For a discussion of the view that aboutness is narrow, refer to Farkas (2008).

5 Underdetermination puzzles have an ancient pedigree. An underdetermination puzzle takes center stage in Plato's Cratylus (385a-390e) in the form of an argument against the conventionalist view attributed to Hermogenes.

6 For a recent excellent discussion of this kind of plenitude, refer to Fairchild (2019).

7 What if one adopts a relatively sparse object ontology such that most of these fine-grained entities do not exist (e.g., the ontology of ordinary objects defended by Korman, 2015)? Isn't one out of the woods here, at least as regards the Tibbles case? In presenting the problem, we seemed to be coherently talking and thinking about the different fine-grained entities. If one has a sparse ontology, one is committed to treating those representations as empty, so by making this move, one has turned a problem of too many into a problem of too few. This might be advisable as far as it goes, but the puzzles discussed in Section 2.2 come in full force.

8 Puzzles involving empty representations also have an ancient pedigree. For instance, cases of this sort are central to the discussion in Plato's Sophist (236d-264b).

9 Note that this does not mean that they do not involve intentional objects at all. Rather, the idea is that if intentional objects are part of the story, they do not explain the intentional features of representations. They are archer first, not necessarily archer only.

10 Donnellan's view runs parallel with a causal theory of the semantics of proper names. The debate between purely causal theories of co-aboutness and the causal descriptivist theory of
|
v3-fos-license
|
2023-04-26T01:15:43.078Z
|
2023-04-25T00:00:00.000
|
258309325
|
{
"extfieldsofstudy": [
"Physics"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/0164B102B8ED05C4EE9ED0344004F4E7/S0022112023009254a.pdf/div-class-title-flow-induced-oscillations-of-pitching-swept-wings-stability-boundary-vortex-dynamics-and-force-partitioning-div.pdf",
"pdf_hash": "2c2a94cd9d2fcb4c188596abe1ea22bed42af3a3",
"pdf_src": "ArXiv",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41921",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"sha1": "2c2a94cd9d2fcb4c188596abe1ea22bed42af3a3",
"year": 2023
}
|
pes2o/s2orc
|
Flow-induced oscillations of pitching swept wings: stability boundary, vortex dynamics and force partitioning
Abstract We study experimentally the aeroelastic instability boundaries and three-dimensional vortex dynamics of pitching swept wings, with the sweep angle ranging from 0$^\circ$ to 25$^\circ$. The structural dynamics of the wings are simulated using a cyber-physical control system. With a constant flow speed, a prescribed high inertia and a small structural damping, we show that the system undergoes a subcritical Hopf bifurcation to large-amplitude limit-cycle oscillations (LCOs) for all the sweep angles. The onset of LCOs depends largely on the static characteristics of the wing. The saddle-node point is found to change non-monotonically with the sweep angle, which we attribute to the non-monotonic power transfer between the ambient fluid and the elastic mount. An optimal sweep angle is observed to enhance the power extraction performance and thus promote LCOs and destabilize the aeroelastic system. The frequency response of the system reveals a structural-hydrodynamic oscillation mode for wings with relatively high sweep angles. Force, moment and three-dimensional flow structures measured using multi-layer stereoscopic particle image velocimetry are analysed to explain the differences in power extraction for different swept wings. Finally, we employ a physics-based force and moment partitioning method to correlate quantitatively the three-dimensional vortex dynamics with the resultant unsteady aerodynamic moment.
Introduction
The fluid-structure interaction (FSI) of elastically mounted pitching wings can lead to large-amplitude flow-induced oscillations under certain operating conditions. In extreme cases, these flow-induced oscillations may affect structural integrity and even cause catastrophic aeroelastic failures (Dowell et al. 1989). On the other hand, however, hydro-kinetic energy can be harnessed from these oscillations, providing an alternative solution for next-generation renewable energy devices (Xiao & Zhu 2014; Young et al. 2014; Boudreau et al. 2018; Su & Breuer 2019). Moreover, the aero-/hydro-elastic interactions of passively pitching wings/fins have important connections with animal flight (Wang 2005; Bergou et al. 2007; Beatus & Cohen 2015; Wu et al. 2019) and swimming (Long & Nipper 1996; Quinn & Lauder 2021), and understanding these interactions may further aid the design and development of flapping-wing micro air vehicles (MAVs) (Shyy et al. 2010; Jafferis et al. 2019) and oscillating-foil autonomous underwater vehicles (AUVs) (Zhong et al. 2021b; Tong et al. 2022).
Flow-induced oscillations of pitching wings originate from the two-way coupling between the structural dynamics of the elastic mount and the fluid force exerted on the wing. While the dynamics of the elastic mount can be approximated by a simple spring-mass-damper model, the fluid forcing term is usually found to be highly nonlinear due to the formation, growth, and shedding of a strong leading-edge vortex (LEV) (McCroskey 1982; Dimitriadis & Li 2009; Mulleners & Raffel 2012; Eldredge & Jones 2019). Onoue et al. (2015) and Onoue & Breuer (2016) experimentally studied the flow-induced oscillations of a pitching plate whose structural stiffness, damping and inertia were defined using a cyber-physical system (§2.1; see also Hover et al. 1997; Mackowski & Williamson 2011; Zhu et al. 2020) and, using this approach, identified a subcritical bifurcation to aeroelastic instability. The temporal evolution of the LEV associated with the aeroelastic oscillations was characterized using particle image velocimetry (PIV), and the unsteady flow structures were correlated with the unsteady aerodynamic moments using a potential flow model. Menon & Mittal (2019) numerically studied a similar problem, simulating an elastically mounted two-dimensional NACA-0015 airfoil at a Reynolds number of 1000. An energy approach, which bridges prescribed sinusoidal oscillations and passive flow-induced oscillations, was employed to characterize the dynamics of the aeroelastic system. The energy approach maps out the energy transfer between the ambient flow and the elastic mount over a range of prescribed pitching amplitudes and frequencies and unveils the system stability based on the sign of the energy gradient.
More recently, Zhu et al. (2020) characterized the effect of wing inertia on the flow-induced oscillations of pitching wings and the corresponding LEV dynamics. Two distinct oscillation modes were reported: (i) a structural mode, which occurred via a subcritical bifurcation and was associated with a high-inertia wing, and (ii) a hydrodynamic mode, which occurred via a supercritical bifurcation and was associated with a low-inertia wing. The wing was found to shed one strong LEV during each half-pitching cycle for the hydrodynamic mode, whereas a weak secondary LEV was also shed in the high-inertia structural mode.
These previous studies have collectively demonstrated that LEV dynamics play an important role in shaping flow-induced oscillations and thus regulate the stability characteristics of passively pitching wings. However, these studies have only focused on studying the structural and flow dynamics of two-dimensional wings or airfoils. The extent to which these important findings for two-dimensional wings hold in three dimensions remains unclear.
Swept wings are commonly seen for flapping-wing fliers and swimmers in nature (Ellington et al. 1996; Lentink et al. 2007; Borazjani & Daghooghi 2013; Bottom II et al. 2016; Zurman-Nasution et al. 2021), as well as on many engineered fixed-wing flying vehicles. It is argued that wing sweep can enhance lift generation for flapping wings because it stabilizes the LEV by maintaining its size through spanwise vorticity transport - a mechanism similar to the lift enhancement mechanism of delta wings (Polhamus 1971). Chiereghin et al. (2020) found significant spanwise flow for a high-aspect-ratio plunging swept wing at a sweep angle of 40 degrees. In another study, for the same sweep angle, attached LEVs and vortex breakdown were observed just like those on delta wings (Gursul & Cleaver 2019). Recent works have shown that the effect of wing sweep on LEV dynamics depends strongly on wing kinematics. Beem et al. (2012) showed experimentally that for a plunging swept wing, the strong spanwise flow induced by the wing sweep is not sufficient for LEV stabilization. Wong et al. (2013) reinforced this argument by comparing the LEV stability of plunging and flapping swept wings and showed that two-dimensional (i.e. uniform, without any velocity gradient) spanwise flow alone cannot stabilize LEVs - there must be spanwise gradients in vorticity or spanwise flow so that vorticity can be convected or stretched. Wong & Rival (2015) demonstrated both theoretically and experimentally that the wing sweep improves relative LEV stability of flapping swept wings by enhancing the spanwise vorticity convection and stretching so as to keep the LEV size below a critical shedding threshold (Rival et al. 2014). Onoue & Breuer (2017) experimentally studied elastically mounted pitching unswept and swept wings and proposed a universal scaling for the LEV formation time and circulation, which incorporated the effects of the pitching frequency, the pivot location and the sweep angle. The vortex circulation was demonstrated to be independent of the three-dimensional vortex dynamics. In addition, they concluded that the stability of the LEV can be improved by moderating the LEV circulation through vorticity annihilation, which is largely governed by the shape of the leading-edge sweep, agreeing with the results of Wojcik & Buchholz (2014). More recently, Visbal & Garmann (2019) numerically studied the effect of wing sweep on the dynamic stall of pitching three-dimensional wings and reported that the wing sweep can modify the LEV structures and change the net aerodynamic damping of the wing. The effect of wing sweep on the LEV dynamics and stability, as one can imagine, will further affect the unsteady aerodynamic forces and thereby the aeroelastic response of pitching swept wings.
Another important flow feature associated with unsteady three-dimensional wings is the behavior of the tip vortex (TV). Although the tip vortex usually grows distinctly from the leading-edge vortex for rectangular planforms (Taira & Colonius 2009; Kim & Gharib 2010; Hartloper et al. 2013), studies have suggested that the TV is able to anchor the LEV in the vicinity of the wing tip, which delays LEV shedding (Birch & Dickinson 2001; Hartloper et al. 2013). Moreover, the tip vortex has also been shown to affect the unsteady wake dynamics of both unswept and swept wings (Taira & Colonius 2009; Zhang et al. 2020a,b; Ribeiro et al. 2022; Son et al. 2022a,b). However, it remains elusive how the interactions between LEVs and TVs change with the wing sweep, and more importantly, how this change will in turn affect the response of aeroelastic systems.
To dissect the effects of the complex vortex dynamics associated with unsteady wings/airfoils, a physics-based Force and Moment Partitioning Method (FMPM) has been proposed (Quartapelle & Napolitano 1983; Zhang et al. 2015; Moriche et al. 2017; Menon & Mittal 2021a,b,c) (also known as the vortex force/moment map method (Li & Wu 2018; Li et al. 2020a)). The method has attracted attention recently due to its high versatility for analyzing a wide variety of vortex-dominated flows. Under this framework, the Navier-Stokes equation is projected onto the gradient of an influence potential to separate the force contributions from the added-mass, vorticity-induced, and viscous terms. It is particularly useful for analyzing vortex-dominated flows because the spatial distribution of the vorticity-induced forces can be visualized, enabling detailed dissections of aerodynamic loads generated by individual vortical structures. For two-dimensional airfoils, Menon & Mittal (2021c) applied FMPM and showed that the strain-dominated region surrounding the rotation-dominated vortices has an important role to play in the generation of unsteady aerodynamic forces. For three-dimensional wings, this method has been implemented to study the contributions of spanwise and cross-span vortices to the lift generation of rectangular wings (Menon et al. 2022), the vorticity-induced force distributions on forward- and backward-swept wings at a fixed angle of attack (Zhang & Taira 2022), and the aerodynamic forces on delta wings (Li et al. 2020b). More recently, efforts have been made to apply FMPM to the analysis of experimental data, in particular, flow fields obtained using particle image velocimetry. Zhu et al. (2023) employed FMPM to analyze the vortex dynamics of a two-dimensional wing pitching sinusoidally in a quiescent flow. Several practical issues in applying FMPM to PIV data were discussed, including the effect of phase-averaging and potential error sources.
In this study, we apply FMPM to three-dimensional flow field data measured using three-component PIV, and use the results to gain insight into the three-dimensional vortex dynamics and the corresponding unsteady forces acting on elastically mounted pitching swept wings. We extend the methodology developed in Zhu et al. (2020), and employ a layered stereoscopic PIV technique and the FMPM to quantify the three-dimensional vortex dynamics. In the following sections, we first introduce the experimental setup and method of analysis (§2). The static force and moment coefficients of the wings are measured (§3.1) before we characterize the amplitude response (§3.2) and the frequency response (§3.3) of the system. Next, we associate the onset of flow-induced oscillations with the static characteristics of the wing (§3.4) and use an energy approach to explain the nonlinear stability boundaries (§3.5). The unsteady force and moment measurements, together with the three-dimensional flow structures (§3.6), are then analyzed to explain the differences in power extraction for unswept and swept wings. Finally, we apply the Force and Moment Partitioning Method to quantitatively correlate the three-dimensional vortex dynamics with the resultant unsteady aerodynamic moment (§3.7). All the key findings are summarized in §4.
Cyber-physical system and wing geometry
We perform all the experiments in the Brown University free-surface water tunnel, which has a test section of width × height × length = 0.8 m × 0.6 m × 4.0 m. The turbulence intensity in the water tunnel is around 2% at the velocity range tested in the present study. Free-stream turbulence plays a critical role in shaping small-amplitude laminar separation flutter (see Yuan et al. 2015). However, as we will show later, the flow-induced oscillations and the flow structures observed in the present study are of high amplitude and large size, and we do not expect the free-stream turbulence to play any significant role. Figure 1(a) shows a schematic of the experimental setup. Unswept and swept NACA 0012 wings are mounted vertically in the tunnel, with an endplate on the top as a symmetry plane. The wing tip at the bottom does not have an endplate. The wings are connected to a six-axis force/moment transducer (ATI Delta IP65) via a wing shaft. The shaft further connects the transducer to an optical encoder (US Digital E3-2500) and a servo motor (Parker SM233AE) coupled with a gearbox (SureGear PGCN23-0525).
We implement a cyber-physical system (CPS) to facilitate a wide structural parameter sweep (i.e. stiffness, $k$, damping, $b$, and inertia, $I$) while simulating real aeroelastic systems with high fidelity. Details of the CPS have been discussed in Zhu et al. (2020), therefore, only a brief introduction will be given here. In the CPS, the force/moment transducer measures the fluid moment, $M$, and feeds the value to the computer via a data acquisition (DAQ) board (National Instruments PCIe-6353). This fluid moment is then added to the stiffness moment ($k\theta$) and the damping moment ($b\dot{\theta}$) obtained from the previous time step to get the total moment. Next, we divide this total moment by the desired inertia ($I$) to get the acceleration ($\ddot{\theta}$) at the present time step. This acceleration is then integrated once to get the velocity ($\dot{\theta}$) and twice to get the pitching angle ($\theta$). This pitching angle signal is output to the servo motor via the same DAQ board. The optical encoder, which is independent of the CPS, is used to measure and verify the pitching angle. At the next time step, the CPS recalculates the total moment based on the measured fluid moment and the desired stiffness and damping, and thereby continues the loop.
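For readers who want the loop in concrete form, the following is a minimal sketch of one CPS update step in Python. It assumes a fixed time step and explicit Euler integration; the function and variable names are illustrative, and the sign convention follows the governing equation in §2.3 rather than the exact lab implementation.

```python
def cps_step(M_fluid, theta, theta_dot, I, b, k, dt=1.0 / 4000):
    """One cyber-physical update: turn the measured fluid moment into a
    commanded pitching angle, given virtual inertia I, damping b, stiffness k."""
    # Net moment on the virtual spring-mass-damper system
    M_total = M_fluid - b * theta_dot - k * theta
    # Angular acceleration from the desired (virtual) inertia
    theta_ddot = M_total / I
    # Integrate once for velocity, twice for position (explicit Euler)
    theta_dot = theta_dot + theta_ddot * dt
    theta = theta + theta_dot * dt
    return theta, theta_dot  # theta is sent to the servo motor
```

At 4000 Hz the time step is 0.25 ms, small enough that the simple integration scheme above is a reasonable stand-in for whatever discretization the real control loop uses.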
Our CPS control loop runs at a frequency of 4000 Hz, which is well beyond the highest Nyquist frequency of the aeroelastic system. Noise in the force/moment measurements can be a potential issue for the CPS. However, because we are using a position control loop, where the acceleration is integrated twice to get the desired position, our system is less susceptible to noise. Therefore, no filter is used within the CPS control loop. The position control loop also requires the pitching motor to follow the commanded position signal as closely as possible. This is achieved by carefully tuning the PID (Proportional-Integral-Derivative) parameters of the pitching motor. The CPS does not rely on any additional tunable parameters other than the virtual inertia, damping, and stiffness. We validate the system using 'ring-down' experiments, as shown in the appendix of Zhu et al. (2020). Moreover, as we will show later, the CPS results match remarkably well with prescribed experiments (§3.5), demonstrating the robustness of the system.
The unswept and swept wings used in the present study are sketched in figure 1(b). All the wings have a span of $s = 0.3$ m and a chord length of $c = 0.1$ m, which results in a physical aspect ratio of $AR = 3$. However, the effective aspect ratio is 6 due to the existence of the symmetry plane (i.e. the endplate). The minimum distance between the wing tip and the bottom of the water tunnel is around $1.5c$. The chord-based Reynolds number is defined as $Re \equiv \rho U_\infty c / \mu$, where $U_\infty$ is the free-stream velocity, and $\rho$ and $\mu$ are the water density and dynamic viscosity, respectively. We set the free-stream velocity to be $U_\infty = 0.5$ m s$^{-1}$ for all the experiments (except for particle image velocimetry measurements, see §2.2), which results in a constant Reynolds number of $Re = 50\,000$, matching the $Re$ used in Zhu et al. (2020) to facilitate direct comparisons. For both unswept and swept wings, the leading edge (LE) and the trailing edge (TE) are parallel. Their pivot axes, represented by vertical dashed lines in the figure, pass through the mid-chord point $x/c = 0.5$ of the mid-span plane $z/s = 0.5$. We choose the current location of the pitching axis because it splits the swept wings into two equal-area sections (fore and aft). Moving the pitching axis or making it parallel to the leading edge will presumably result in different system dynamics, which will be investigated in future studies.
The sweep angle, $\Lambda$, is defined as the angle between the leading edge and the vertical axis. Five wings with $\Lambda = 0^\circ$ (unswept wing), $10^\circ$, $15^\circ$, $20^\circ$ and $25^\circ$ (swept wings) are used in the experiments. Further expanding the range of wing sweep would presumably bring more interesting fluid-structure interaction behaviors. However, as we will show in the later sections, there is already a series of rich (nonlinear) flow physics associated with the current set of unswept and swept wings. Our selection of the sweep angle is also closely related to the location of the pitching axis. Currently, the pitching axis passes the mid-chord at the mid-span. For a $\Lambda = 25^\circ$ wing, the trailing edge is already in front of the pitching axis at the wing root, and the leading edge is behind the pitching axis at the wing tip. Further increasing the sweep angle brings difficulties in physically pitching the wing for our existing setup.
Multi-layer stereoscopic particle image velocimetry
We use multi-layer phase-averaged stereoscopic particle image velocimetry (SPIV) to measure the three-dimensional (3D) velocity field around the pitching wings. We lower the free-stream velocity to $U_\infty = 0.3$ m s$^{-1}$ to enable higher temporal measurement resolution. The chord-based Reynolds number is consequently decreased to $Re = 30\,000$. It has been shown by Zhu et al. (2020, see their appendix) that the variation of $Re$ in the range of 30 000 - 60 000 does not affect the system dynamics, as long as the parameters of interest are properly non-dimensionalized. The water flow is seeded using neutrally buoyant 50 $\mu$m silver-coated hollow ceramic spheres (Potters Industries) and illuminated using a horizontal laser sheet, generated by a double-pulse Nd:YAG laser (532 nm, Quantel EverGreen) with a LaVision laser guiding arm and collimator. Two sCMOS cameras (LaVision, 2560 × 2160 pixels) with Scheimpflug adapters (LaVision) and 35 mm lenses (Nikon) are used to capture image pairs of the flow field. These SPIV image pairs are fed into the LaVision DaVis software (v.10) for velocity vector calculation using multi-pass cross-correlations (two passes at 64 × 64 pixels, two passes at 32 × 32 pixels, both with 50% overlap).
To measure the two-dimensional-three-component (2D3C) velocity field at different spanwise layers, we use a motorized vertical traverse system with a range of 120 mm to raise and lower the testing rig (i.e. all the components connected by the shaft) in the $z$-axis (King et al. 2018; Zhong et al. 2021a). Due to the limitation of the traversing range, three measurement volumes (figure 1b, V1, V2 and V3) are needed to cover the entire wing span plus the wing tip region. For each measurement volume, the laser sheet is fixed at the top layer and the rig is traversed upward with a step size of 5 mm. Note that the entire wing stays submerged, even at the highest traversing position, and for all wing positions, free surface effects are not observed. The top two layers of V1 are discarded as the laser sheet is too close to the endplate, which causes reflections. The bottom layer of V1 and the top layer of V2 overlap with each other. The velocity fields of these two layers are averaged to smooth the interface between the two volumes. The interface between V2 and V3 is also smoothed in the same way. For each measurement layer, we phase-average 1250 instantaneously measured 2D3C velocity fields over 25 cycles (i.e. 50 measurements per cycle) to eliminate any instantaneous variations of the flow field while maintaining the key coherent features across different layers. Finally, 71 layers of 2D3C velocity fields are stacked together to form a large volume of phase-averaged 3D3C velocity field ($\sim 3c \times 3c \times 3.5c$). The velocity fields of three wing models ($\Lambda = 0^\circ$, $10^\circ$ and $20^\circ$) are measured. For the two swept wings ($\Lambda = 10^\circ$ and $20^\circ$), the laser volumes are offset horizontally to compensate for the sweep angle (see the bottom subfigure of figure 1b).
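As an illustration of the data pipeline just described, the following sketch phase-averages the per-layer snapshots and stacks the layers into a single volume. The array shapes and names are assumptions for illustration only; the actual processing was performed with LaVision DaVis and in-house scripts.

```python
import numpy as np

def phase_average(snapshots):
    """snapshots: (n_cycles, n_phases, ny, nx, 3) array of 2D3C fields
    for one laser layer; returns the (n_phases, ny, nx, 3) phase average."""
    return snapshots.mean(axis=0)

def stack_volume(layers):
    """layers: list of 71 phase-averaged layers ordered along the span;
    returns a (n_phases, nz, ny, nx, 3) phase-resolved 3D3C volume."""
    return np.stack(layers, axis=1)
```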
Governing equations and non-dimensional parameters
The one-degree-of-freedom aeroelastic system considered in the present study has a governing equation

$$I_{\mathrm{eff}}\ddot{\theta} + b_{\mathrm{eff}}\dot{\theta} + k_{\mathrm{eff}}\theta = M, \qquad (2.1)$$

where $\theta$, $\dot{\theta}$ and $\ddot{\theta}$ are the angular position, velocity and acceleration, respectively. $I_{\mathrm{eff}} = I_p + I_v$ is the effective inertia, where $I_p$ is the physical inertia of the wing and $I_v$ is the virtual inertia that we prescribe with the CPS. Because the friction is negligible in our system, the effective structural damping, $b_{\mathrm{eff}}$, equals the virtual damping $b_v$ in the CPS. $k_{\mathrm{eff}}$ is the effective torsional stiffness and it equals the virtual stiffness $k_v$. Equation 2.1 resembles a forced torsional spring-mass-damper system, where the fluid moment, $M$, acts as a nonlinear forcing term. Following Onoue et al. (2015) and Zhu et al. (2020), we normalize the effective inertia, damping, stiffness and the fluid moment using the fluid inertia force to get the non-dimensional governing equation of the system:

$$I^*\ddot{\theta}^* + b^*\dot{\theta}^* + k^*\theta = C_M, \qquad (2.2)$$

where

$$I^* = \frac{I_{\mathrm{eff}}}{\frac{1}{2}\rho c^4 s}, \quad b^* = \frac{b_{\mathrm{eff}}}{\frac{1}{2}\rho U_\infty c^3 s}, \quad k^* = \frac{k_{\mathrm{eff}}}{\frac{1}{2}\rho U_\infty^2 c^2 s}, \quad C_M = \frac{M}{\frac{1}{2}\rho U_\infty^2 c^2 s}, \qquad (2.3)$$

in which $\dot{\theta}^* = \dot{\theta} c/U_\infty$ and $\ddot{\theta}^* = \ddot{\theta} c^2/U_\infty^2$. We should note that the inverse of the non-dimensional stiffness is equivalent to the Cauchy number, $Ca = 1/k^*$, and the non-dimensional inertia, $I^*$, is analogous to the mass ratio between the wing and the surrounding fluid. We define the non-dimensional velocity as $U^* = U_\infty/(2\pi f c)$, where $f$ is the measured pitching frequency. In addition to the aerodynamic moment, we also measure the aerodynamic forces that are normal and tangential to the wing chord, $F_N$ and $F_T$, respectively. The resultant lift and drag forces are

$$L = F_N\cos\theta - F_T\sin\theta, \qquad D = F_N\sin\theta + F_T\cos\theta. \qquad (2.4)$$

We further normalize the normal force, tangential force, lift and drag to get the corresponding force coefficients

$$C_N = \frac{F_N}{\frac{1}{2}\rho U_\infty^2 c s}, \quad C_T = \frac{F_T}{\frac{1}{2}\rho U_\infty^2 c s}, \quad C_L = \frac{L}{\frac{1}{2}\rho U_\infty^2 c s}, \quad C_D = \frac{D}{\frac{1}{2}\rho U_\infty^2 c s}. \qquad (2.5)$$

To apply the force and moment partitioning method (FMPM), we solve for an influence potential, $\phi$, which satisfies

$$\nabla^2\phi = 0, \quad \text{with} \quad \hat{n}\cdot\nabla\phi = \left[(\boldsymbol{x}-\boldsymbol{x}_p)\times\hat{n}\right]\cdot\hat{e}_z \ \text{on the wing surface}, \qquad (2.6)$$

where $\hat{n}$ is the unit vector normal to the boundary, $\boldsymbol{x}-\boldsymbol{x}_p$ is the location vector pointing from the pitching axis towards location $\boldsymbol{x}$ on the airfoil surface, and $\hat{e}_z$ is the spanwise unit vector (Menon & Mittal 2021b). This influence potential quantifies the spatial influence of any vorticity on the resultant force/moment. It is only a function of the airfoil geometry and the pitching axis, and does not depend on the kinematics of the wing. Note that this influence potential should not be confused with the velocity potential from the potential flow theory. The boundary conditions of equation 2.6 are specified for solving the influence field of the spanwise moment, and they will be different for solving the lift and drag influence fields.
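As a quick numerical illustration of equation 2.3 (not part of the original text; the function below and any values passed to it are for demonstration only), the conversion from dimensional to non-dimensional structural parameters can be written as:

```python
rho = 1000.0        # water density, kg m^-3
U_inf = 0.5         # free-stream velocity, m s^-1
c, s = 0.1, 0.3     # chord and span, m

def nondimensionalize(I_eff, b_eff, k_eff, M):
    """Equation 2.3: normalize inertia, damping, stiffness and moment."""
    I_star = I_eff / (0.5 * rho * c**4 * s)
    b_star = b_eff / (0.5 * rho * U_inf * c**3 * s)
    k_star = k_eff / (0.5 * rho * U_inf**2 * c**2 * s)
    C_M = M / (0.5 * rho * U_inf**2 * c**2 * s)
    Ca = 1.0 / k_star            # Cauchy number
    return I_star, b_star, k_star, Ca, C_M
```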
From the three-dimensional velocity data, we can calculate the $Q$ field (Hunt et al. 1988; Jeong & Hussain 1995)

$$Q = \frac{1}{2}\left( \| \boldsymbol{\Omega} \|^2 - \| \boldsymbol{S} \|^2 \right), \qquad (2.7)$$

where $Q$ is the second invariant of the velocity gradient tensor, $\boldsymbol{\Omega}$ is the vorticity tensor and $\boldsymbol{S}$ is the strain-rate tensor. The vorticity-induced moment can be evaluated by

$$M_Q = -2\rho \int_V Q\,\phi \,\mathrm{d}V, \qquad (2.8)$$

where $\int_V$ represents the volume integral within the measurement volume. The spatial distribution of the vorticity-induced moment near the pitching wing can thus be represented by the moment density, $-2\rho Q\phi$ (i.e. the moment distribution field). In the present study, we focus on the vorticity-induced force (moment) as it has the most important contribution to the overall unsteady aerodynamic load in vortex-dominated flows. Other forces, including the added-mass force, the force due to viscous diffusion, and the forces associated with irrotational effects and outer domain effects, are not considered, although they can be estimated using FMPM as well (Menon & Mittal 2021b). The contributions from these other forces, along with experimental errors, might result in a mismatch in the magnitude of the FMPM-estimated force and force transducer measurements, as shown by Zhu et al. (2023), and the exact source of this mismatch is under investigation.
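A minimal sketch of how equations 2.7 and 2.8 can be evaluated on the gridded PIV volume, assuming a uniform grid spacing and an influence field phi precomputed by solving equation 2.6 (the Laplace solver is not shown); all names and shapes here are illustrative:

```python
import numpy as np

def q_field(u, v, w, dx):
    """Q criterion (eq. 2.7) from 3D velocity components on a uniform grid."""
    # Velocity gradient tensor J[..., i, j] = d(u_i)/d(x_j)
    grads = [np.gradient(f, dx) for f in (u, v, w)]
    J = np.stack([np.stack(g, axis=-1) for g in grads], axis=-2)
    S = 0.5 * (J + np.swapaxes(J, -1, -2))   # strain-rate tensor
    W = 0.5 * (J - np.swapaxes(J, -1, -2))   # vorticity tensor
    return 0.5 * ((W**2).sum(axis=(-2, -1)) - (S**2).sum(axis=(-2, -1)))

def vorticity_induced_moment(Q, phi, dx, rho=1000.0):
    """Eq. 2.8: integrate the moment density -2*rho*Q*phi over the volume."""
    return np.sum(-2.0 * rho * Q * phi) * dx**3
```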
Static characteristics of unswept and swept wings
The static lift and moment coefficients, C_L and C_M, are measured for the unswept (Λ = 0°) and swept wings (Λ = 10°–25°) at Re = 50,000, and the results are shown in figure 2. In figure 2(a), we see that the static lift coefficient, C_L(α), has the same behavior for all sweep angles, despite some minor variations for angles of attack higher than the static stall angle α_s = 12° (0.21 rad). The collapse of C_L(α) across the different swept wings agrees with the classic 'independence principle' (Jones 1947) (i.e. C_L ∼ cos²Λ) at relatively small sweep angles. Figure 2(b) shows that, for any fixed angle of attack, the static moment coefficient, C_M, increases with the sweep angle, Λ. This trend is most prominent when the angle of attack exceeds the static stall angle. The inset shows a zoomed-in view of the static C_M for α = 0.14–0.26 rad. It is seen that the C_M curves cluster into two groups, with the unswept wing (Λ = 0°) being in Group 2 (G2) and all the other swept wings (Λ = 10°–25°) being in Group 1 (G1). As we will show later, this grouping behavior is closely related to the onset of flow-induced oscillations (§3.2 & §3.4) and it is important for understanding the system stability. No hysteresis is observed for either the static C_L or C_M, presumably due to free-stream turbulence in the water tunnel.
Subcritical bifurcations to flow-induced oscillations
We conduct bifurcation tests to study the stability boundaries of the elastically mounted pitching wings. Zhu et al. (2020) have shown that for unswept wings, the onset of limit-cycle oscillations (LCOs) is independent of the wing inertia and the bifurcation type (i.e. subcritical or supercritical). It has also been shown that the extinction of LCOs for subcritical bifurcations at different wing inertias occurs at a fixed value of the non-dimensional velocity U*. For these reasons, we choose to focus on one high-inertia case (I* = 10.6) in the present study. In the experiments, the free-stream velocity is maintained at U_∞ = 0.5 m s⁻¹. We fix the structural damping of the system at a small value, b* = 0.13, keep the initial angle of attack at zero, and use the Cauchy number, Ca, as the control parameter. To test for the onset of LCOs, we begin the test with a high-stiffness virtual spring (i.e. low Ca) and incrementally increase Ca by decreasing the torsional stiffness, k*. We then reverse the operation to test for the extinction of LCOs and to check for any hysteresis. The amplitude response of the system, A, is measured as the peak absolute pitching angle (averaged over many pitching cycles). By this definition, A is half of the peak-to-peak amplitude. The divergence angle, θ_d, is defined as the mean absolute pitching angle. Although all the divergence angles are shown as positive, the wing can diverge to both positive and negative angles in the experiments.
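The amplitude and divergence metrics defined above reduce to simple statistics of the pitch-angle time series. A sketch is shown below, with arbitrary peak-detection thresholds, and interpreting the divergence angle as the magnitude of the cycle-mean pitch (which vanishes for symmetric LCOs).

```python
import numpy as np
from scipy.signal import find_peaks

def amplitude_and_divergence(theta, min_height=0.02, min_sep=400):
    """Pitching amplitude A (mean peak |theta|, i.e. half the peak-to-peak
    value) and divergence angle (magnitude of the mean pitch) from a pitch
    time series. min_height and min_sep (samples) are arbitrary knobs."""
    peaks, _ = find_peaks(np.abs(theta), height=min_height, distance=min_sep)
    A = np.abs(theta)[peaks].mean() if peaks.size else 0.0
    return A, abs(float(np.mean(theta)))

# Example: a noisy near-sinusoidal LCO of ~1 rad about zero mean,
# sampled at 500 Hz for 60 s (so |theta| peaks ~833 samples apart).
t = np.linspace(0.0, 60.0, 30_000)
theta = 1.0 * np.sin(2 * np.pi * 0.3 * t) + 0.01 * np.random.randn(t.size)
print(amplitude_and_divergence(theta))
```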
Figure 3 shows the pitching amplitude response and the static divergence angle for swept wings with Λ = 10° to 25°. Data for the unswept wing (Λ = 0°) are also replotted from Zhu et al. (2020) for comparison. It can be seen that the system first remains stable without any noticeable oscillations or divergence (regime 1 in the figure) when Ca is small. In this regime, the high stiffness of the system is able to pull the system back to a stable fixed point despite any small perturbations. As we further increase Ca, the system diverges to a small static angle, where the fluid moment is balanced by the virtual spring. This transition is presumably triggered by free-stream turbulence, and both positive and negative directions are possible. Due to the existence of random flow disturbances and the decreasing spring stiffness, some small-amplitude oscillations around the static divergence angle start to emerge (regime 2). As Ca is further increased above a critical value (i.e. the Hopf point), the amplitude response of the system abruptly jumps into large-amplitude self-sustained LCOs and the static divergence angle drops back to zero, indicating that the oscillations are symmetric about the zero angle of attack. The large-amplitude LCOs are observed to be near-sinusoidal and have a dominant characteristic frequency. After the bifurcation, the amplitude response of the system continues to increase with Ca (regime 3). We then decrease Ca and find that the large-amplitude LCOs persist even when Ca is decreased below the Hopf point (regime 4). Finally, the system drops back to the stable fixed-point regime via a saddle-node (SN) point. A hysteretic bistable region is thus created in between the Hopf point and the saddle-node point - a hallmark of a subcritical Hopf bifurcation. In the bistable region, the system features two stable solutions - a stable fixed point (regime 1) and a stable LCO (regime 4) - as well as an unstable LCO solution, which is not observable in experiments (Strogatz 1994).
We observe that the Hopf points of the unswept and swept wings can be roughly divided into two groups (figure 3, G1 & G2), with the unswept wing (Λ = 0°) being in G2 and all the other swept wings (Λ = 10°–25°) being in G1, which agrees with the trend observed in figure 2(b) for the static moment coefficient. This connection will be discussed further in §3.4. It is also seen that as the sweep angle increases, the LCO amplitude at the saddle-node point decreases monotonically. However, the Ca at which the saddle-node point occurs first extends towards a lower value (Λ = 0° → 10°) but then moves back towards a higher Ca (Λ = 10° → 25°). This indicates that increasing the sweep angle first destabilizes the system from Λ = 0° to 10° and then re-stabilizes it from Λ = 10° to 25°. This non-monotonic behavior of the saddle-node point will be revisited from the perspective of energy in §3.5. The pitching amplitude response, A, follows a similar non-monotonic trend. Between Λ = 0° and 10°, A is slightly higher at higher Ca values for the Λ = 10° wing, whereas between Λ = 10° and 25°, A decreases monotonically, indicating that a higher sweep angle is not able to sustain LCOs at higher amplitudes. The non-monotonic behaviors of the saddle-node point and the LCO amplitude both suggest that there exists an optimal sweep angle, Λ = 10°, which promotes flow-induced oscillations of pitching swept wings.
Frequency response of the system
The characteristic frequencies of the flow-induced LCOs observed in figure 3 provide us with more information about the driving mechanism of the oscillations. Figure 4(a) shows the measured frequency response, f*, as a function of the calculated natural (structural) frequency, f*_s, and the sweep angle. In the figure, f* = f_p c/U_∞ and f*_s = f_s c/U_∞, where f_p is the measured pitching frequency and f_s = √(k/I_eff)/(2π) is the structural frequency of the system (Rao 1995). We observe that for all the wings tested in the experiments and over most of the regimes tested, the measured pitching frequency, f*, locks onto the calculated structural frequency, f*_s, indicating that the oscillations are dominated by the balance between the structural stiffness and inertia. These oscillations, therefore, correspond to the structural mode reported by Zhu et al. (2020), and feature characteristics of high-inertia aeroelastic instabilities. We can decompose the moments experienced by the wing into the inertial moment, I*θ̈, the structural damping moment, b*θ̇, the stiffness moment, k*θ, and the fluid moment, C_M. As an example, for the Λ = 10° wing pitching at f*_s = 0.069 (i.e. the filled orange triangle in figure 4a), these moments are plotted in figure 4(b). We see that for the structural mode, the stiffness moment is mainly balanced by the inertial moment, while the structural damping moment and the fluid moment remain relatively small.
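The moment decomposition used in figure 4(b, c) can be reproduced from a pitch-angle record alone, with the fluid moment recovered as the residual of the balance in equation (2.2). The sketch below uses a synthetic sinusoidal signal and an arbitrary k*; in the experiments C_M is measured directly and the balance closes by construction.

```python
import numpy as np

def moment_decomposition(theta, dt, I_star, b_star, k_star):
    """Split the non-dimensional moment balance (equation 2.2) into its
    inertial, damping, stiffness and fluid contributions, with C_M
    recovered here as the residual of the other three terms."""
    theta_dot = np.gradient(theta, dt)
    theta_ddot = np.gradient(theta_dot, dt)
    inertial = I_star * theta_ddot
    damping = b_star * theta_dot
    stiffness = k_star * theta
    C_M = inertial + damping + stiffness
    return inertial, damping, stiffness, C_M

# Example with the paper's high-inertia case (I* = 10.6, b* = 0.13);
# the stiffness k* = 1/Ca is an arbitrary illustrative value.
dt = 1e-3
t = np.arange(0.0, 20.0, dt)
theta = 1.1 * np.sin(2 * np.pi * 0.3 * t)
terms = moment_decomposition(theta, dt, 10.6, 0.13, 1.0 / 14.5)
print([float(np.max(np.abs(x))) for x in terms])
```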
In addition to the structural mode, Zhu et al. (2020) also observed a hydrodynamic mode, which corresponds to a low-inertia wing. In the hydrodynamic mode, the oscillations are dominated by the fluid forcing, so that the measured pitching frequency, f*, stays relatively constant as the stiffness is varied. In figure 4(a), we see that for the Λ = 20° and 25° wings, f* flattens near the saddle-node boundary. This flattening trend shows an emerging fluid-dominated time scale, resembling a hydrodynamic mode despite the high wing inertia. Taking Λ = 20°, f*_s = 0.068 (i.e. the filled green diamond in figure 4a) as an example, we can examine the different contributions to the pitching moments in figure 4(c). It is observed that in this oscillation mode, the stiffness moment balances both the inertial moment and the fluid moment. This is different from both the structural mode and the hydrodynamic mode, and for this reason, we define this hybrid oscillation mode as the structural-hydrodynamic mode.
There are currently no quantitative descriptions of the structural-hydrodynamic mode. However, it can be qualitatively identified as occurring when the pitching frequency of a (1:1 lock-in) structural mode flattens as the natural (structural) frequency increases. Based on the observations in the present study, we believe this mode is not a fixed fraction of the structural frequency. Instead, the frequency response shows a mostly flat trend (figure 4a, green and dark green curves) at high f*_s, indicating an increasingly dominant fluid forcing frequency. For a structural mode, the oscillation frequency locks onto the natural frequency due to the high inertial moment. However, as the sweep angle increases, the fluid moment also increases (see also figure 8a). The structural-hydrodynamic mode emerges as the fluid forcing term starts to dominate in the nonlinear oscillator.
For a fixed structural frequency, f*_s, as the sweep angle increases, the measured pitching frequency, f*, deviates from the 1:1 lock-in curve and moves to lower frequencies. This deviation suggests a growing added-mass effect, as the pitching frequency scales as f_p ∼ √(1/(I_s + I_a)). Because the structural inertia, I_s, is prescribed, a decreasing f* suggests an increasing added-mass inertia, I_a. This is expected because of the way we pitch the wings in the experiments (see the inset of figure 3). As Λ increases, the accelerated fluid near the wing root and the wing tip produces more moment due to the increase of the moment arm, which amplifies the added-mass effect. The peak added-mass moment is estimated to be around 2%, 3% and 5% of the peak total moment for the Λ = 0°, 10° and 20° wings, respectively. Because this effect is small compared to the structural and vortex-induced forces, we will not quantify this added-mass effect further in the present study but will leave it for future work.
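The added-mass estimate quoted above follows from the frequency scaling f_p ∼ √(1/(I_s + I_a)). A one-line version of this back-of-the-envelope calculation, with purely illustrative numbers, is:

```python
def added_inertia(f_measured, f_structural, I_structural):
    """Estimate the added-mass inertia from the downward shift of the
    measured pitching frequency, using f ~ sqrt(k / (I_s + I_a)) and
    neglecting damping (a rough estimate, as in the text)."""
    return I_structural * ((f_structural / f_measured) ** 2 - 1.0)

# Illustrative numbers only: a 2% frequency deficit at I* = 10.6
# implies an added inertia of roughly 4% of the structural value.
print(added_inertia(0.98, 1.00, 10.6))   # ~0.44
```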
Onset of flow-induced oscillations
In figure 3, we have observed that the Hopf points of the unswept and swept wings can be roughly divided into two groups (figure 3, G1 & G2). In this section, we explain this phenomenon. Figure 5(a) and (b) show the temporal evolution of the pitching angle, θ(t), the fluid moment, C_M(t), and the stiffness moment, k*θ(t), for the Λ = 15° swept wing as the Cauchy number is increased past the Hopf point. We see that the wing undergoes small-amplitude oscillations around the divergence angle just prior to the Hopf point (t < 645 s). The divergence angle is lower than the static stall angle, α_s, and so we know that the flow stays mostly attached, and the fluid moment, C_M, is balanced by the stiffness moment, k*θ (figure 5b). When the Cauchy number, Ca = 1/k*, is increased above the Hopf point (figure 5a, t > 645 s), k*θ is no longer able to hold the pitching angle below α_s. Once the pitching angle exceeds α_s, stall occurs and the wing experiences a sudden drop in C_M. The stiffness moment, k*θ, loses its counterpart and starts to accelerate the wing to pitch towards the opposite direction. This acceleration introduces unsteadiness to the system and the small-amplitude oscillations gradually transition to large-amplitude LCOs over the course of several cycles, until the inertial moment kicks in to balance k*θ (see also figure 4b). This transition process confirms that the onset of large-amplitude LCOs depends largely on the static characteristics of the wing - the LCOs are triggered when the static stall angle is exceeded. The triggering of flow-induced LCOs thus starts from exceeding the static stall angle after k* is decreased below its value at the Hopf point, causing C_M to drop below k*θ. At this value of k*, the slope of the line through the static stall point should be equal to the stiffness at the Hopf point, k*_H (i.e. C_M,s = k*_H α_s, where C_M,s is the static stall moment). This argument is verified by figure 5(c), in which we replot the static moment coefficients of the unswept and swept wings from figure 2(b) (error bars omitted for clarity), together with the corresponding k*_H α lines. We see that the k*_H α lines all roughly pass through the static stall points (α_s, C_M,s) of the corresponding Λ. Note that the k*_H α lines of Λ = 15° and 20° overlap with each other. Similar to the trend observed for the Hopf point in figure 3, the static stall moment can also be divided into two groups, with the unswept wing (Λ = 0°) being in G2 and all the other wings (Λ = 10°–25°) being in G1 (see also figure 2b). The inset compares the predicted Hopf point, C_M,s/α_s, with the measured Hopf point, k*_H, and we see that the data closely follow a 1:1 relationship. This reinforces the argument that the onset of flow-induced LCOs is shaped by the static characteristics of the wing, and that this explanation applies to both unswept and swept wings.
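The Hopf-point prediction in the inset of figure 5(c) amounts to reading the static stall point off figure 2(b) and forming the stiffness C_M,s/α_s. The sketch below illustrates the arithmetic with made-up stall values (the measured ones are not reproduced here):

```python
# Static stall points (alpha_s in radians, C_M at stall) for a few sweep
# angles -- illustrative values only, not the measured data.
static_stall = {0: (0.21, 0.070), 10: (0.21, 0.085), 20: (0.21, 0.088)}

for sweep, (alpha_s, C_M_s) in static_stall.items():
    k_star_hopf_pred = C_M_s / alpha_s      # stiffness line through stall point
    Ca_hopf_pred = 1.0 / k_star_hopf_pred   # predicted Hopf Cauchy number
    print(f"sweep {sweep:2d} deg: predicted k*_H = {k_star_hopf_pred:.3f}, "
          f"Ca_H = {Ca_hopf_pred:.2f}")
```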
It is worth noting that Negi et al. (2021) performed global linear stability analysis on an aeroelastic wing and showed that the aeroelastic instability is triggered by a zero-frequency linear divergence mode. This agrees in part with our experimental observation that the flow-induced oscillations emerge from the static divergence state. However, as we have discussed in this section, the onset of large-amplitude aeroelastic oscillations in our system occurs when the divergence angle exceeds the static stall angle, whereas no stall is involved in the study of Negi et al. (2021). In fact, Negi et al. (2021) focused on laminar separation flutter, where the pitching amplitude is small (A ∼ 6°). In contrast, we focus on large-amplitude (45° < A < 120°) flow-induced oscillations.
Power coefficient map and system stability
In this section, we analyze the stability of elastically mounted unswept and swept wings from the perspective of energy transfer. Menon & Mittal (2019) and Zhu et al. (2020) have shown numerically and experimentally that the flow-induced oscillations of elastically mounted wings can only be sustained when the net energy transfer between the ambient fluid and the elastic mount equals zero. To map out this energy transfer for a large range of pitching frequencies and amplitudes, we prescribe the pitching motion of the wing using a sinusoidal profile θ = A sin(2π f_p t), (3.2) where 0 ⩽ A ⩽ 2.5 rad and 0.15 Hz ⩽ f_p ⩽ 0.6 Hz. The fluid moment measured with these prescribed sinusoidal motions can be directly correlated to that measured in the passive flow-induced oscillations because the flow-induced oscillations are near-sinusoidal (see §3.2, and figure 5a, t > 700 s). By integrating the governing equation of the passive system (2.2) over N = 20 cycles and taking the cycle average (Zhu et al. 2020), we can get the power coefficient of the system C_P = (1/(N T*)) ∫ (C_M θ̇ − b* θ̇²) dt*, (3.3) where the integral runs from the starting time t*_0 to t*_0 + N T*, T* is the non-dimensional pitching period and t* = t U_∞/c is the non-dimensional time. In this equation, the C_M θ̇ term represents the power injected into the system from the free-stream flow, whereas the b* θ̇² term represents the power dissipated by the structural damping of the elastic mount. The power coefficient maps of the unswept and swept wings are shown in figure 6(a-e). In these maps, orange regions correspond to C_P > 0, where the power injected by the ambient flow is higher than that dissipated by the structural damping. On the contrary, C_P < 0 in the blue regions. The colored dashed lines indicate the C_P = 0 contours, where the power injection balances the power dissipation and the system is in equilibrium.
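Equation (3.3) is straightforward to evaluate from prescribed-motion records. The sketch below uses synthetic θ and C_M signals (the 0.4 rad phase lead of the toy moment is what makes C_P positive); with experimental data the measured moment replaces the synthetic one.

```python
import numpy as np

def power_coefficient(C_M, theta, t_star, b_star):
    """Cycle-averaged power coefficient (equation 3.3): fluid power input
    C_M * dtheta/dt* minus structural dissipation b* (dtheta/dt*)^2,
    averaged over the record (assumed to span whole cycles)."""
    theta_dot = np.gradient(theta, t_star)
    integrand = C_M * theta_dot - b_star * theta_dot**2
    return np.trapz(integrand, t_star) / (t_star[-1] - t_star[0])

# Prescribed sinusoidal pitching (equation 3.2) with a toy moment signal;
# 17 full cycles at f* = 0.085 over a record of 200 convective times.
t_star = np.linspace(0.0, 200.0, 20_000)
A, f_star = 1.05, 0.085
theta = A * np.sin(2 * np.pi * f_star * t_star)
C_M = 0.3 * np.sin(2 * np.pi * f_star * t_star + 0.4)   # leads -> C_P > 0
print(power_coefficient(C_M, theta, t_star, 0.13))
```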
The C_P = 0 equilibrium boundary can be divided into three branches. Zhu et al. (2020) have shown that for unswept wings, the top branch corresponds to a stable LCO solution for the structural oscillation mode, the middle branch represents an unstable LCO solution for the structural mode but a stable LCO solution for the hydrodynamic mode, and the bottom branch is a fixed-point solution.
To correlate the power coefficient maps of prescribed oscillations with the stability boundaries of flow-induced oscillations, we overlay the bifurcation diagrams of the passive system from figure 3 onto figure 6(a-e). The measured pitching frequencies, f_p, are used to calculate the non-dimensional velocity, U*, for large-amplitude LCOs (filled triangles). Because it is difficult to measure frequencies of fixed points and small-amplitude oscillations, we use the calculated structural frequency, f_s, to evaluate U* for non-LCO data points (hollow triangles). Figure 6(a-e) shows that for all the wings tested, the flow-induced large-amplitude LCOs match well with the top branch of the C_P = 0 curve, indicating the broad applicability of the energy approach for both unswept and swept wings, and confirming that this instability is a structural mode, as seen in the frequency response (figure 4a). This correspondence was also observed by Menon & Mittal (2019) and Zhu et al. (2020) and is expected for instabilities that are well described by sinusoidal motions (Morse & Williamson 2009). The small discrepancies for large sweep angles can be attributed to the low C_P gradient near C_P = 0. The junction between the top and middle C_P = 0 branches, which corresponds to the saddle-node point, stays relatively sharp for Λ = 0°–15° and becomes smoother for Λ = 20°–25°. These smooth turnings result in a smooth transition from the structural mode to the hydrodynamic mode, giving rise to the structural-hydrodynamic mode discussed in §3.3.
The C_P = 0 curves for Λ = 0°–25° are summarized in figure 6(f). It is seen that the trend of the top branch is similar to that observed in figure 3 for large-amplitude LCOs. The location of the junction between the top branch and the middle branch changes non-monotonically with Λ, which accounts for the non-monotonic behavior of the saddle-node point. In addition, figure 6(a-e) shows that the maximum power transfer from the fluid also has a non-monotonic dependency on the sweep angle (see the shade variation of the positive C_P regions as a function of the sweep angle), with an optimal sweep angle at Λ = 10°, which might inspire future designs of higher-efficiency oscillating-foil energy-harvesting devices.
Force, moment and three-dimensional flow structures
In the previous section, §3.5, we established the connection between prescribed oscillations and flow-induced instabilities using the energy approach. However, the question remains as to what causes the differences in the power coefficients measured for prescribed pitching wings with different sweep angles (figure 6). In this section, we analyze the aerodynamic force, moment and the corresponding three-dimensional flow structures to gain more insight. We focus on one pitching case, A = 1.05 rad (60°) and f* = 0.085 (i.e. the black star in figure 6f), and three sweep angles, Λ = 0°, 10° and 20°. This particular pitching kinematics is selected because it sits right on the C_P = 0 curve for Λ = 0° but in the positive C_P region for Λ = 10° and in the negative C_P region for Λ = 20° (see figure 6a,b,d,f).
Phase-averaged coefficients of the aerodynamic moment, C_M, the normal force, C_N, the tangential force, C_T, the lift force, C_L, and the drag force, C_D, are plotted in figure 7(a-c). Similar to the three-dimensional velocity fields, the moment and force measurements are phase-averaged over 25 cycles. We see that the moment coefficient (figure 7a) behaves differently for different sweep angles, whereas the shape of the other force coefficients (figure 7b,c) does not change with sweep angle, resembling the trend observed in the static measurements (figure 2). The observation that the wing sweep (Λ = 0° to 25°) has minimal effects on the aerodynamic force generation is non-intuitive, as one would assume that the sweep-induced spanwise flow can enhance spanwise vorticity transport in the leading-edge vortex and thereby alter the LEV stability as well as the resultant aerodynamic load. However, our measurements show the opposite, a result which is backed up by the experiments on heaving (plunging) swept wings by Beem et al. (2012). The collapse of the normal force, C_N, at different sweep angles suggests that the wing sweep regulates the aerodynamic moment, C_M, by changing the moment arm, r, as M = F_N r. This argument will be revisited later when we discuss the leading-edge vortex and tip vortex dynamics.
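The moment-arm argument can be made concrete by forming the instantaneous ratio r*(t) = C_M/C_N wherever C_N is not near zero. The signals below are illustrative only; the phase shifts and amplitudes are not the measured ones.

```python
import numpy as np

# Illustrative signals over one cycle (t/T): C_N collapses across sweep
# angles while C_M shifts phase and grows, so the effective moment arm
# r*(t) = C_M / C_N (in chord lengths) differs between the wings.
t = np.linspace(0.0, 1.0, 500)
C_N = 1.4 * np.sin(2 * np.pi * t)
cases = {"0 deg": 0.30 * np.sin(2 * np.pi * t - 0.1),
         "20 deg": 0.36 * np.sin(2 * np.pi * t - 0.5)}

mask = np.abs(C_N) > 0.2                  # avoid dividing by near-zero C_N
for label, C_M in cases.items():
    r_eff = np.full_like(C_N, np.nan)
    r_eff[mask] = C_M[mask] / C_N[mask]
    print(label, "mean |r*| =", round(float(np.nanmean(np.abs(r_eff))), 3))
```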
Figure 7(a) shows that as the sweep angle increases, the moment coefficient, C_M, peaks at a later time in the cycle and has an increased maximum value. To further analyze C_M and its effects on the power coefficient, C_P, for different wing sweeps, we compare C_M and C_P for Λ = 0°, 10° and 20° in figure 7(d-f), respectively. Note that here we define the power coefficient as C_P = C_M θ̇, which differs from equation 3.3 in that it is time-dependent instead of cycle-averaged, and in that the power dissipated by the structure, b*θ̇², is not considered (this power dissipation is small because a small b* is used in the experiments). The normalized pitching angle, θ/A, and pitching velocity, θ̇/(2π f_p A), are also plotted for reference. We see that at the beginning of the cycle (0 ⩽ t/T < 0.15), C_M(t/T) grows near-linearly for all three wings. Because θ̇ > 0 for the first quarter cycle, the y-intercept of C_M determines the starting point of the positive C_P(t/T) region, corresponding to the left edge of the green panels in the figures. The C_P > 0 region starts at t/T = 0 for the unswept wing, as C_M has a near-zero y-intercept. For the Λ = 10° swept wing, because C_M has a small positive y-intercept, the C_P > 0 region starts even before t/T = 0. On the contrary, the C_P > 0 region starts after t/T = 0 for the Λ = 20° swept wing due to a small negative y-intercept of C_M. Owing to the combined effect of an increasing C_M and a decreasing θ̇, the power coefficient peaks around t/T = 0.125 for all the wings. The maximum C_P of the Λ = 10° wing is slightly higher than that of the other two wings, due to a slightly higher C_M.
As the pitching cycle continues, C_M(t/T) peaks around t/T = 0.15, 0.17 and 0.28 for Λ = 0°, 10° and 20°, respectively. The pitch reversal occurs at t/T = 0.25, where θ reaches its maximum and θ̇ switches its sign to negative. Because the pitching velocity is now negative, the green panels terminate as C_P drops below zero, indicating that the wing starts to dissipate energy into the ambient fluid. However, because C_M continues to grow after t/T = 0.25 for the Λ = 20° wing, it generates a much more negative C_P as compared to the wings with a lower sweep angle. Figure 7(a) shows that C_M decreases faster for the Λ = 10° wing than for the unswept wing at 0.25 ⩽ t/T < 0.5. This difference results in a less negative C_P for the Λ = 10° wing as compared to the Λ = 0° wing. The faster decrease of C_M for the Λ = 10° wing also makes it the first to switch back to positive power generation, where C_M and θ̇ are both negative. The same story repeats after t/T = 0.5 due to the symmetry of the pitching cycle. In summary, we see that subtle differences in the alignment of C_M and θ̇ can result in considerable changes of C_P for different sweep angles. The start of the C_P > 0 region is determined by the phase of C_M, whereas the termination of the C_P > 0 region depends on θ̇. A non-monotonic duration of the C_P > 0 region (i.e. the size of the green panels) is observed as the sweep angle increases. The cycle-averaged power coefficient, which dictates the stability of the aeroelastic system (see §3.5), is thus regulated by both the amplitude and phase of the unsteady aerodynamic moment.
Next, we analyze the effect of wing sweep on the leading-edge vortex and tip vortex dynamics and the resultant impact on the aerodynamic moment. Figure 8 shows (a) the moment measurements, (b-d) the phase-averaged three-dimensional flow structures at t_1/T = 0.14, t_2/T = 0.22 and t_3/T = 0.30, and (e-g) the corresponding leading-edge vortex and tip vortex geometries for the Λ = 0°, 10° and 20° wings. The three equally spaced time instants t_1/T = 0.14, t_2/T = 0.22 and t_3/T = 0.30 are selected because they correspond to the times of the formation, growth and shedding of the leading-edge vortex. The three-dimensional flow structures are visualized using iso-Q surfaces with a value of Q = 50 s⁻² and colored by the non-dimensional spanwise vorticity, ω_z c/U_∞. In this view, the leading edge of the wing is pitching towards us, but for clarity, the flow field is always plotted with the coordinate system oriented so that the chord line is aligned with the x-axis.
The initial linear growth of the moment coefficient before t_1/T for all three wings corresponds to the formation of a strong leading-edge vortex, as depicted in figure 8(b-d) at t_1/T = 0.14, which brings the lift and moment coefficients above the static stall limit. At this stage, we see that the structure of the leading-edge vortex is similar across different wing sweeps, despite some minor variations near the wing tip. For the unswept wing, the LEV stays mostly attached along the wing span, whereas for the two swept wings, the LEV starts to detach near the tip region (see the small holes in the feeding shear layer near the wing tip). A positive vortex tube on the surface near the trailing edge is observed for all three wings, along with the negative vortex tubes shed from the trailing edge. We also observe a streamwise-oriented tip vortex wrapping over the wing tip, and this tip vortex grows stronger with the sweep angle, presumably due to the higher tip velocity associated with the larger wing sweep. Another possible cause for a stronger TV at a higher sweep angle is that the effective angle of attack becomes higher at the wing tip as the wing sweep increases.
The tracking of the vortex geometry (figure 8e-g) provides a more quantitative measure for analyzing the LEV and TV dynamics. We see that at t_1/T = 0.14, the LEVs of all three wings are mostly aligned with the leading edge except for the tip region (z/c = 0). For the two swept wings, the LEV also stays closer to the leading edge near the wing root (z/c = 3). Due to the high wing sweep of the Λ = 20° wing, a small portion of the LEV falls behind the pivot axis, presumably contributing to a negative moment. However, the mean distance between the LEV and the pivot axis (i.e. the LEV moment arm) stays roughly constant across different wing sweeps, potentially explaining the agreement between the C_M of the different wings during the linear growth region. On the other hand, the tip vortex moves downstream as the wing sweep increases due to the wing geometry. For the unswept wing and the Λ = 10° swept wing, the majority of the tip vortex stays behind the pivot axis. For the Λ = 20° swept wing, the TV stays entirely behind the pivot axis. As a result, the TV mostly contributes to the generation of negative moments, which counteracts the LEV moment contribution.
At t_2/T = 0.22, figure 8(b) and the front view of figure 8(e) show that the LEV mostly detaches from the wing surface for the unswept wing, except for a small portion near the wing tip, which stays attached. A similar flow structure was observed by Yilmaz & Rockwell (2012) for finite-span wings undergoing linear pitch-up motions, and by Son et al. (2022a) for high-aspect-ratio plunging wings. For the Λ = 10° wing, this small portion of the attached LEV shrinks (see the front view of figure 8f). The top portion of the LEV near the wing root is also observed to stay attached to the wing surface as compared to the Λ = 0° case. For the Λ = 20° wing, as shown by the front view of figure 8(g), the attached portion of the LEV near the wing tip further shrinks and almost detaches, while the top portion of the LEV also attaches to the wing surface, similar to that observed for Λ = 10°. The shrinking of the LEV attached region near the wing tip as a function of the wing sweep is presumably caused by the decreased anchoring effect of the tip vortex. The shrinking of the attached LEV could also be a result of an increased effective angle of attack. The side views of figure 8(e-g) show that the LEV moves towards the pivot axis at this time instant. The swept-wing LEVs have slightly longer mean moment arms due to their attached portions near the wing root. This is more prominent for the Λ = 20° wing, potentially explaining why the C_M of Λ = 20° exceeds that of the other two wings at t_2/T. The tip vortex moves upwards and outwards with respect to the wing surface from t_1/T to t_2/T.
During the pitch reversal (t_3/T = 0.30), the LEV further detaches from the wing surface, and the TV also starts to detach. For the unswept wing, the LEV mostly aligns with the pivot axis except for the tip portion, which still remains attached. For the Λ = 10° swept wing, the LEV also roughly aligns with the pivot axis, with both the root and the tip portions staying near the wing surface, forming a more prominent arch-like shape (see the front view of figure 8f) as compared to the previous time step. For the Λ = 20° wing, the root portion of the LEV stays attached and remains far in front of the pivot axis. The LEV detaches near the wing tip and joins the detached TV, as shown by figure 8(d) and the front and top views of figure 8(g). The attachment of the LEV near the wing root and the detachment of the TV near the wing tip both contribute to a more positive C_M, as compared to the other two wings with lower sweep. The change of the LEV geometry as a function of the sweep angle can be associated with the arch vortices reported by Visbal & Garmann (2019). In their numerical study, it was shown that for pitching unswept wings with free tips on both ends, an arch-type vortical structure began to form as the pitch reversal started (see their figure 6c).
In our experiments, the wings have a free tip and an endplate (i.e. a wing-body junction, or symmetry plane). Therefore, the vortical structure shown in figure 8(b) is equivalent to one half of the arch vortex. If we mirror the flow structures about the wing root (i.e. the endplate), we get a complete arch vortex similar to that observed by Visbal & Garmann (2019). For swept wings, we observe one complete arch vortex for both Λ = 10° (figure 8c) and 20° (figure 8d). Again, if we mirror the flow structures about the wing root, there will be two arch vortices for each swept wing, agreeing well with the observations of Visbal & Garmann (2019) (see their figures 10c and 13c). Moreover, Visbal & Garmann (2019) reported that for swept wings, as Λ increases, the vortex arch moves towards the wing tip, which is also seen in our experiments (compare the front views of figure 8e-g).
Insights obtained from moment partitioning
We have shown in the previous section, §3.6, that the aerodynamic moment is jointly determined by the leading-edge vortex and the tip vortex dynamics.Specifically, the spatial locations and geometries of the LEV and TV, as well as the vortex strength, have a combined effect on the unsteady aerodynamic moment.To obtain further insights into this complex combined effect, we use the Force and Moment Partitioning Method (FMPM) to analyze the three-dimensional flow fields.
As we discussed in §2.4, the first step in applying the FMPM is to construct an 'influence potential', φ. We solve equation 2.6 numerically using the MATLAB Partial Differential Equation Toolbox (finite element method; code publicly available on MATLAB File Exchange). We use a 3D domain of 10c × 10c × 20c, with a mesh resolution of 0.02c on the surface of the wing and 0.1c on the outer domain. We visualize the calculated three-dimensional influence field, φ, for the Λ = 0°, 10° and 20° wings using iso-φ surfaces in figure 9(a-c). Figure 9(d-f) illustrates the corresponding side views, with the wing boundaries outlined by yellow dotted lines and the pitching axes indicated by green dashed lines. We see that for the unswept wing, the iso-φ surfaces show symmetry with respect to the pivot axis and the wing chord, resulting in a quadrant distribution of the influence field. The magnitude of φ peaks on the wing surface and decreases towards the far field. The slight asymmetry of φ with respect to the pitching axis (see figure 9d) is caused by the difference between the rounded leading edge and the sharp trailing edge of the NACA 0012 wing (see also the 2D influence field reported by Zhu et al. (2023)). The size of the iso-φ surfaces stays relatively constant along the wing span, except at the wing tip, where the surfaces wrap around and seal the tube.
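The MATLAB FEM solution itself is not reproduced here, but the structure of the boundary-value problem (2.6) can be illustrated with a simplified two-dimensional finite-difference analogue: a flat plate on the bottom boundary of a rectangular domain, for which the moment boundary condition reduces to ∂φ/∂y = x − x_p. Everything about the discretization below (grid, iteration count, plate placement) is a simplification for illustration.

```python
import numpy as np

# Minimal 2D analogue of equation (2.6): a flat plate lies along the
# bottom boundary (y = 0) with unit normal +y, so the Neumann condition
# [(x - x_p) x n] . e_z reduces to dphi/dy = x - x_p there; the other
# three boundaries take dphi/dn = 0. Jacobi iteration on a uniform grid.
nx, ny, h = 201, 101, 0.01
x = np.linspace(-1.0, 1.0, nx)
x_p = 0.0                                   # pivot at the plate mid-point
phi = np.zeros((ny, nx))

for _ in range(20_000):
    # Ghost-node Neumann conditions via one-sided copies.
    phi[0, :] = phi[1, :] - h * (x - x_p)   # plate: dphi/dy = x - x_p
    phi[-1, :] = phi[-2, :]                 # far field: dphi/dn = 0
    phi[:, 0] = phi[:, 1]
    phi[:, -1] = phi[:, -2]
    # Jacobi update of interior points for Laplace's equation.
    phi[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1]
                              + phi[1:-1, 2:] + phi[1:-1, :-2])

# phi is antisymmetric about the pivot: opposite signs ahead of and
# behind x_p near the plate, echoing the quadrant structure of figure 9.
print(phi[1, nx // 4], phi[1, 3 * nx // 4])
```

Even this crude analogue recovers the sign change of φ across the pivot that underlies the quadrant distribution described above.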
As the sweep angle is increased to Λ = 10° and 20°, we see that the quadrant distribution of the influence field persists. However, the iso-φ surfaces form funnel-like shapes on the fore wing and teardrop shapes on the aft wing. This is caused by the variation of the effective pivot axis along the wing span. Figure 9(e) and (f) show that, for swept wings, the negative φ regions extend over the entire chord near the wing root, even behind the pitching axis. Similarly, the positive φ regions (almost) cover the entire wing tip and even spill over in front of the pitching axis. As we will show next, this behavior of the φ field for swept wings results in some non-intuitive distributions of the aerodynamic moment. In addition, the magnitude of the φ field is observed to increase with the sweep angle, due to the increase of the effective moment arm (Zhu et al. 2021).
We multiply the three-dimensional Q field by the influence field, φ, to get the spanwise moment (density) distribution field, −2Qφ. To visualize the moment distributions, we recolor the same iso-Q surface plots shown in figure 8 with the moment density, −2Qφ, as shown in figure 10(a-c). As before, the wings and flow fields are rotated by θ so that we are always looking from a viewpoint normal to the chord line, giving a better view of the flow structures. In these iso-Q surface plots, red regions indicate that the vortical structure induces a positive spanwise moment, whereas blue regions represent the generation of a negative spanwise moment. In between, white regions have zero contribution to the spanwise moment.
At t_1/T = 0.14 (figure 10a), as expected, we see that the entire LEV on the unswept wing generates a positive moment. For the Λ = 10° swept wing, however, the LEV generates a near-zero moment near the wing tip, and for the Λ = 20° swept wing, the tip region of the LEV contributes a negative moment due to the non-conventional distribution of the φ field. The TV generates almost no moment for the unswept wing, but contributes a negative moment for the swept wings. The vortex tube formed near the trailing edge of the wing surface contributes entirely to negative moments for the unswept wing, but its top portion starts to generate positive moments as the sweep angle increases. The contributions of each vortical structure to the moment generation for the three wings become clearer if we plot the spanwise distribution of the vorticity-induced moment.
By integrating the moment distribution field −2Qφ over the horizontal (x, y)-plane at each spanwise location, z, we obtain the spanwise distribution of the vorticity-induced moment, shown in figure 10(d-f). For the unswept wing, Λ = 0°, figure 10(d) shows that the LEV generates a near-uniform positive moment across the span. As the sweep angle increases (Λ = 10°), the LEV generates a higher positive moment near the wing root, and the TV starts to generate a negative moment. For the Λ = 20° wing, this trend persists. It is also interesting to see that the spanwise moment distribution curves for the three wings intersect around the mid-span, where the effective pivot axes coincide at the mid-chord. For the two swept wings, the more positive moments near the wing root counteract the negative LEV and TV contributions near the wing tip, resulting in an overall moment similar to that of the unswept wing. The FMPM thus quantitatively explains why the three wings generate similar unsteady moments at this time instant (figure 8a).
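The spanwise curves in figure 10(d-f) are plane-by-plane integrals of the moment density. A numpy sketch (random placeholder fields, assumed grid spacing and density) is:

```python
import numpy as np

def spanwise_moment_distribution(Q, phi, dx, rho=1000.0):
    """Integrate the moment density -2*rho*Q*phi over each horizontal
    (x, y)-plane to get the vorticity-induced moment per unit span at
    every spanwise location (axis 2 is taken as the span)."""
    density = -2.0 * rho * Q * phi
    return density.sum(axis=(0, 1)) * dx**2     # one value per z-slice

nx, ny, nz, dx = 60, 60, 71, 5e-3
rng = np.random.default_rng(2)
Q = rng.normal(size=(nx, ny, nz))
phi = rng.normal(size=(nx, ny, nz))
m_z = spanwise_moment_distribution(Q, phi, dx)
M_total = m_z.sum() * dx                        # recovers the volume integral
print(m_z.shape, M_total)
```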
At t_2/T = 0.22 (figure 10b), the LEV starts to detach and moves towards the pitching axis. As discussed in the previous section, §3.6, the LEV forms a half-arch for the unswept wing, with only the tip region staying attached, and a complete arch for the swept wings, with both the root and tip regions staying attached. These arch-like LEV geometries, together with the special shapes of the three-dimensional influence field, lead to some special distributions of the aerodynamic moments. For the unswept wing, the color of the LEV becomes lighter as compared to the t_1/T case, indicating a decreasing contribution to positive moments. However, the attached portion of the LEV still generates a positive moment as it remains attached, close to the wing, and in front of the pitching axis. Comparing the two swept-wing cases, the LEV of the Λ = 20° wing generates more positive moments near the wing root than that of the Λ = 10° wing due to the larger magnitude of the φ field (figure 9). The TVs of the three wings behave similarly to the cases at t_1/T. The aft-wing vortex tube on the wing surface breaks into two smaller tubes. Because of their small volumes, we do not expect them to affect the total moment generation. Figure 10(e) shows that a large part of the LEV does not contribute to any moment generation for the unswept wing - only the tip region (0 ⩽ z/c ⩽ 1) generates positive moments. As compared to t_1/T, the LEV generates more positive moments near the wing root for the two swept wings, especially for the Λ = 20° wing, and the TV generates slightly more negative moments. The overall trend observed in figure 10(e) further explains the moment measurements shown in figure 8(a), where the Λ = 20° wing produces the highest C_M, followed by the Λ = 10° wing and then the unswept wing at t_2/T.
At t_3/T = 0.30 (figure 10c), the LEV further detaches from the wing surface. For the unswept wing, the LEV color becomes even lighter. Comparing the temporal evolution of the LEV color for the unswept wing, we see that the LEV progressively generates lower positive moments, agreeing well with the decreasing moment measurement shown in figure 8(a). The LEV continues to generate positive moments near the root region and negative moments near the tip region for the Λ = 10° swept wing, although it is largely aligned with the pivot axis (see also the side view of figure 8f). This is again a result of the non-conventional funnel-shaped φ field near the wing root and the teardrop-like φ field near the wing tip (figure 9b and e). This trend persists for the Λ = 20° wing. However, the LEV generates more positive moments due to its shorter distance from the leading edge and the wing surface near the wing root. Moreover, the size of the LEV iso-Q surface also becomes larger for the Λ = 20° wing as compared to the previous time steps, indicating a stronger LEV and thus a higher aerodynamic moment, which explains why the C_M of Λ = 20° peaks around t_3/T in figure 8(a). This is also reflected in the spanwise moment plot in figure 10(f), where the LEV generates more positive moments for the Λ = 20° wing than for the Λ = 10° wing. The tip vortex again behaves similarly to the previous time steps for all three wings, although it becomes less coherent and detaches from the wing surface.
It is worth noting that the integral of −2Qφ over the (x, y)-plane (i.e. figure 10d-f) also includes contributions from other vortical structures. In figure 10(a-c), we can see that there are four main structures on each wing: the LEV, the TV, the trailing-edge vortex (TEV), and the vortex tube on the aft wing surface. Figure 9 shows that the amplitude of the influence field, φ, is zero near the trailing edge due to symmetry. This means that the contribution of the TEV to the moment is negligible, because −2Qφ approaches zero in this region and makes no contribution to the integrand. The aft-wing vortex tube is small in size compared to the LEV and TV. In addition, it is not as coherent, because it breaks down at t_2/T = 0.22. Therefore, we expect its contribution to the integral to be small as well.
In summary, the Force and Moment Partitioning Method enables us to associate the complex three-dimensional vortex dynamics with the corresponding vorticity-induced moments, and quantitatively explains the mechanisms behind the observed differences in the unsteady moment generation, which further drives the pitching motion of these swept wings.These insightful analyses would not have been possible without the FMPM.
Conclusion
In this experimental study, we have explored the nonlinear flow-induced oscillations and three-dimensional vortex dynamics of cyber-physically mounted pitching unswept and swept wings, with the pitching axis passing through the mid-chord point at the mid-span plane, and with the sweep angle varied from 0° to 25°. At a constant flow speed, a prescribed high inertia and a small structural damping, we adjusted the wing stiffness to systematically study the onset and extinction of large-amplitude flow-induced oscillations. For the current selections of the pitching axis location and the range of the sweep angle, the amplitude response revealed subcritical Hopf bifurcations for all the unswept and swept wings, with a clustering behavior of the Hopf point and a non-monotonic saddle-node point as a function of the sweep angle. The flow-induced oscillations have been correlated with the structural oscillation mode, in which the oscillations are dominated by the inertial behavior of the wing. For swept wings with high sweep angles, a hybrid oscillation mode, namely the structural-hydrodynamic mode, has been observed and characterized, in which the oscillations are regulated by both the inertial moment and the fluid moment. The onset of flow-induced oscillations (i.e. the Hopf point) has been shown to depend on the static characteristics of the wing. The non-monotonic trend of the saddle-node point with the sweep angle can be attributed to the non-monotonic power transfer between the ambient fluid and the elastic mount, which in turn depends on the amplitude and phase of the unsteady aerodynamic moment. Force and moment measurements have shown that, perhaps surprisingly, the wing sweep has a minimal effect on the aerodynamic forces, and it was therefore inferred that the wing sweep modulates the aerodynamic moment by affecting the moment arm. Phase-averaged three-dimensional flow structures measured using stereoscopic PIV have been analyzed to characterize the dynamics of the leading-edge vortex and tip vortex. Finally, by employing the Force and Moment Partitioning Method (FMPM), we have successfully correlated the complex LEV and TV dynamics with the resultant aerodynamic moment in a quantitative manner.
In addition to reporting new observations and providing physical insights into the effects of moderate wing sweep on large-amplitude aeroelastic oscillations, the present study can serve as a source of validation data for future theoretical/computational models. Furthermore, the optimal sweep angle (Λ = 10°) observed for promoting flow-induced oscillations may have engineering implications. For example, one should avoid this sweep angle in aero-structure designs to stay away from aeroelastic instabilities. On the other hand, this angle could potentially be employed for developing higher-efficiency flapping-foil energy-harvesting devices. Lastly, the use of the FMPM to analyze (especially three-dimensional) flow fields obtained from PIV experiments has shown great utility, and the results further demonstrate the powerful capability of this emerging method to provide valuable physical insights into vortex-dominated flows, paving the way for more applications of this method to data from future experimental and numerical studies.
Figure 1. (a) A schematic of the experimental setup. (b) Sketches of the unswept and swept wings used in the experiments. The pivot axes are indicated by black dashed lines. The green panels represent the volumes traversed by the laser sheet for three-dimensional phase-averaged stereoscopic PIV measurements.
Figure 2. (a) Static lift coefficient and (b) moment coefficient of the unswept and swept wings. Error bars denote standard deviations of the measurement over 20 seconds.
Figure 3. Amplitude response and static divergence angle for the unswept and swept wings. ⊲: increasing Ca, ⊳: decreasing Ca. The inset illustrates the wing geometry and the pivot axis. The colors of the wings correspond to the colors of the amplitude and divergence curves in the figure.
Figure 4. (a) Frequency response of the unswept and swept wings. (b, c) Force decomposition for the structural mode and the structural-hydrodynamic mode. (b) and (c) correspond to the filled orange triangle and the filled green diamond shown in (a), respectively. Note that t/T = 0 corresponds to θ = 0.
Figure 5. Temporal evolution of (a) the pitching angle, θ, and (b) the fluid moment, C_M, and the stiffness moment, k*θ, near the Hopf point for the Λ = 15° swept wing. The vertical gray dashed line indicates the time instant (t = 645 s) at which Ca is increased above the Hopf point. (c) Static moment coefficients of the unswept and swept wings. Inset: the predicted Hopf point based on the static stall angle and the corresponding moment, C_M,s/α_s, versus the measured Hopf point, k*_H. The black dashed line shows a 1:1 scaling.
Figure 8. (a) Moment coefficients replotted from figure 7(a) for half a pitching cycle. Three representative time instants, t_1/T = 0.14, t_2/T = 0.22 and t_3/T = 0.30, are selected for studying the evolution of the leading-edge vortex (LEV) and tip vortex (TV). (b-d) Phase-averaged three-dimensional flow structures for the Λ = 0° unswept wing, and the Λ = 10° and Λ = 20° swept wings. The flow structures are visualized with iso-Q surfaces (Q = 50 s⁻²) and colored by the non-dimensional spanwise vorticity, ω_z c/U_∞. All the flow fields are rotated by the pitching angle to keep the wing at a zero angle of attack for better visualization of the flow structures. A video capturing the three-dimensional flow structures for the entire pitching cycle can be found in the supplementary material. (e-g) Side views and front views of the corresponding three-dimensional LEV and TV geometries. Solid curves represent LEVs and dotted lines represent TVs.
Figure 9. Iso-surface plots of the three-dimensional influence potential, φ, for (a) the Λ = 0° unswept wing, (b) the Λ = 10° swept wing, and (c) the Λ = 20° swept wing. (d-f) The corresponding side views, with the wing boundaries outlined by yellow dotted lines and the pitching axes indicated by green dashed lines.
Figure 10. (a-c) Phase-averaged iso-Q surfaces (Q = 50 s⁻²) for the Λ = 0° unswept wing and the Λ = 10° and 20° swept wings, colored by the vorticity-induced moment density, −2Qφ (m² s⁻²), at t_1/T = 0.14, t_2/T = 0.22 and t_3/T = 0.30. Note that the wings and flow fields are rotated in the spanwise direction to maintain a zero angle of attack, for a better view of the flow structures. (d-f) Spanwise distributions of the vorticity-induced moment for the three wings at the three representative time instants, obtained by integrating −2Qφ at different spanwise locations.
Evaluation of process limit of detection and quantification variation of SARS-CoV-2 RT-qPCR and RT-dPCR assays for wastewater surveillance
Effective wastewater surveillance of SARS-CoV-2 RNA requires the rigorous characterization of the limit of detection resulting from the entire sampling process - the process limit of detection (PLOD). Yet to date, no studies have gone beyond quantifying the assay limit of detection (ALOD) for RT-qPCR or RT-dPCR assays. While the ALOD is the lowest number of gene copies (GC) associated with a 95% probability of detection in a single PCR reaction, the PLOD represents the sensitivity of the method after considering the efficiency of all processing steps (e.g., sample handling, concentration, nucleic acid extraction, and PCR assays) to determine the number of GC in the wastewater sample matrix with a specific probability of detection. The primary objective of this study was to estimate the PLOD resulting from the combination of primary concentration and extraction with six SARS-CoV-2 assays: five RT-qPCR assays (US CDC N1 and N2, China CDC N and ORF1ab (CCDC N and CCDC ORF1ab), and E_Sarbeco) and one RT-dPCR assay (US CDC N1 RT-dPCR), using two models (exponential survival and cumulative Gaussian). An adsorption extraction (AE) concentration method (i.e., virus adsorption on a membrane and RNA extraction from the membrane) was used to concentrate gamma-irradiated SARS-CoV-2 seeded into 36 wastewater samples. Overall, the US CDC N1 RT-dPCR and RT-qPCR assays had the lowest ALODs (< 10 GC/reaction) and PLODs (< 3,954 GC/50 mL; 95% probability of detection) regardless of the seeding level and model used. Nevertheless, consistent amplification and detection rates decreased when seeding levels were < 2.32 × 10³ GC/50 mL, even for the US CDC N1 RT-qPCR and RT-dPCR assays. Consequently, when SARS-CoV-2 RNA concentrations are expected to be low, it may be necessary to improve the positive detection rates of wastewater surveillance by analyzing additional field and RT-PCR replicates. To the best of our knowledge, this is the first study to assess the SARS-CoV-2 PLOD for wastewater, and it provides important insights on the analytical limitations for trace detection of SARS-CoV-2 RNA in wastewater.
Introduction
Wastewater surveillance is being utilized in many countries to monitor coronavirus disease 2019 (COVID-19) via severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) RNA presence and levels in community wastewater. SARS-CoV-2 RNA excreted by infected individuals is diluted by many orders of magnitude in wastewater. Therefore, to achieve trace detection and quantification of RNA, the wastewater samples require the application of optimized concentration methods (with varying primary and/or secondary concentration steps) before extraction of the RNA fragments and, finally, highly-sensitive molecular assays using reverse transcription PCR (RT-PCR) or RT quantitative PCR (RT-qPCR) or RT digital PCR (RT-dPCR) (Ahmed et al., 2020a; Medema et al., 2020a; Prado et al., 2020; Randazzo et al., 2020; Bertrand et al., 2021; Gibas et al., 2021; Navarro et al., 2021; Westhaus et al., 2021).
Given the complexities of such workflows, there are many factors that may contribute to false negative and erroneous results (wastewater negative despite the presence of COVID-19 in the relevant population), which are thoroughly discussed in several review articles (Ahmed et al., 2020b; Medema et al., 2020b; Michael-Kordatou et al., 2020; Ahmed et al., 2021a; Bivins et al., 2021a). Interpreting discordant results is further complicated by the many RT-PCR assays in use. Multiple assays have been developed targeting various nucleic acid sequences of SARS-CoV-2 specific to the nucleocapsid (N), envelope protein (E), RNA dependent RNA polymerase (RdRp), open reading frame one (ORF1), membrane protein (M), and surface protein (S) genes (Li et al., 2020). While these assays were developed for clinical testing of nasopharyngeal swab samples, they are also being used for wastewater surveillance (Bertrand et al., 2021; Bivins et al., 2021b; Gibas et al., 2021; Navarro et al., 2021; Westhaus et al., 2021).
Several studies have examined the analytical sensitivity, as measured by the assay limit of detection (ALOD), of various RT-qPCR assays primarily for clinical testing. For example, Iglói et al. (2020) compared 13 commercially-available real-time RT-PCR assays using SARS-CoV-2 cell-cultured virus stock. Analytical sensitivity of assays in the different kits varied between 3.3 to 330 RNA gene copies (GC)/reaction. Muenchhof et al. (2020) compared 11 different RT-qPCR assays used in seven diagnostic laboratories in Germany using an RNA sample extracted from one SARS-CoV-2-positive stool sample. Serially-diluted RNA sub-samples were shared with participating laboratories to determine sensitivity. Most RT-qPCR assays for SARS-CoV-2 examined in their study successfully detected ~5 GC/reaction, reflecting a high sensitivity. A reduced sensitivity was noted for the original RdRp assay from Charité Institute of Virology (Charité), which may have impacted the confirmation of some COVID-19 cases in the early weeks of the pandemic. A study by Vogels et al. (2020) compared the performance of nine primer-probe combinations targeting several genes (i.e., E, N, ORF1, RdRp) recommended by the World Health Organization (e.g., those developed by the China CDC, US CDC, Charité (Corman et al., 2020), and Hong Kong University). This comparison was performed with standard reference materials and clinical samples (e.g., nasopharyngeal swabs, saliva, urine, and rectal swabs) seeded with the reference material. The authors demonstrated that at low viral concentrations (1 to 10 GC/µL), not all assays yielded positive results; thus, suggesting that some assays may be more prone to false-negative errors than others (Vogels et al., 2020). Most notably, the RdRp reverse primer had mismatches with the reference material that were attributed to evolution of the virus, causing low analytical sensitivity.
While the ALOD is a useful assessment of the analytical sensitivity of SARS-CoV-2 RT-qPCR assays (reported ALOD values during wastewater surveillance have ranged from 1 to 100 GC/reaction; Gerrity et al., 2021; Randazzo et al., 2020; Chavarria-Miró et al., 2021), for wastewater surveillance the analytical sensitivity of a method must also account for the efficiency of the various processing steps, including primary concentration, loss through nucleic acid extraction, and inhibition of reverse transcription or PCR amplification. Together, the RT-qPCR/dPCR ALOD and the process recovery efficiency (loss of target through all sample processing steps) determine the process limit of detection (PLOD), which has not yet been well characterized for any method used to concentrate SARS-CoV-2 RNA from wastewater. Frequently, assumed process recoveries (e.g., 100%) and empirically determined ALODs are combined to produce idealized estimates of the PLOD, but empirical determination of the PLOD itself is vital for a robust characterization of the surveillance method.
A recent study evaluated four SARS-CoV-2 RT-qPCR detection assays (US CDC N1, China CDC N (CCDC N), N_Sarbeco and E_Sarbeco) for the accurate and reliable quantification of SARS-CoV-2 in wastewater (Zhang et al., 2022). The authors recommended the CCDC N assay for its high sensitivity and reproducibility when analysing plasmid control material, plasmid-seeded wastewater, and real wastewater samples. While the US CDC N1 assay demonstrated high sensitivity in their ambient wastewater samples, it showed poor reproducibility and linearity at low concentrations in their plasmid-seeded samples (Zhang et al., 2022). However, the generalizability of these findings is constrained by several limitations. The RT-qPCR cycling parameters were the same for all assays, and deviations from the optimized assay-specific cycling parameters may have affected the sensitivity of one assay relative to another. Furthermore, Zhang et al. (2022) seeded plasmid control materials, rather than SARS-CoV-2 virions, into RNA extracted from wastewater to determine the limit of detection in seeded samples. The use of plasmid control materials (double-stranded DNA) for SARS-CoV-2 (a single-stranded RNA virus) has well-documented limitations, including heterogeneity in PCR efficiencies and non-linearity in RT-qPCR experiments.
The primary objective of this study was to evaluate the PLODs of five RT-qPCR assays and one RT-dPCR assay for the detection of SARS-CoV-2 RNA in wastewater, inclusive of the efficiency of the processing workflow. This was achieved by seeding a dilution series of known concentrations of gamma-irradiated SARS-CoV-2 virions into wastewater, followed by primary concentration, nucleic acid extraction, and RT-qPCR/RT-dPCR analysis using each assay. In addition to determining the PLOD, the quantitative data from the seeding experiments were used to assess the variation in measured SARS-CoV-2 RNA copy numbers along the dilution gradient. In conjunction with these experiments, the effect of variable SARS-CoV-2 seeding levels on the recovery efficiency measured by each assay was examined using 36 different wastewater samples.
SARS-CoV-2 seeding materials
Gamma-irradiated SARS-CoV-2 hCoV-19/Australia/VIC01/2020 was provided by the Australian Centre for Disease Preparedness (ACDP), CSIRO. Gamma radiation was administered at a dose of 50 kilogray (5 Mrad) using an MDS Nordion irradiator. Gamma irradiation was necessary to mitigate the risk of infection associated with handling infectious SARS-CoV-2 in the biosecurity containment level 2 (BC2) laboratory where this study was conducted. The gamma-irradiated SARS-CoV-2 stock was stored at −80 °C for twelve weeks before use in this study. Immediately prior to the seeding experiments, the mean concentration and standard deviation (4.60 × 10^6 ± 2.50 × 10^5 GC/µL) of the SARS-CoV-2 stock were determined directly from aliquots (n = 3) of the stock suspension using the US CDC N1 RT-dPCR assay as described in the following section.
Sources of wastewater samples
Wastewater samples used in this study were from the Queensland Health Wastewater Surveillance program (https://www.qld.gov.au/health/conditions/health-alerts/coronavirus-covid-19/current-status/wastewater). In total, 36 wastewater samples were selected that had been collected between 30/08/2021 and 01/09/2021 from 36 wastewater treatment plants (WWTPs) across Queensland, Australia. At each WWTP, untreated wastewater samples ranging from 500 mL to 1 L in volume were collected as time-based composites using an automated sampler operating in time-proportional mode (taking subsamples every 15 min for 24 h) (Ahmed et al., 2020a). These composite wastewater samples had previously been screened for SARS-CoV-2 RNA by RT-qPCR in conjunction with the surveillance program and confirmed to be negative using the US CDC N1 and N2 assays.
Wastewater seeding experiments
To determine the ability of each RT-qPCR assay and the US CDC N1 RT-dPCR assay to detect SARS-CoV-2 RNA in wastewater, known concentrations of gamma-irradiated SARS-CoV-2 were prepared by serially diluting the stock suspension in DNase- and RNase-free water, and these serial dilutions were seeded into 50-mL wastewater samples. Final SARS-CoV-2 seeding levels followed a serial dilution in 10-fold decrements from 2.32 × 10^5 to 2.32 × 10^2 GC/50 mL, yielding four unique titers of SARS-CoV-2 RNA.
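As an illustration of the seeding arithmetic, the following Python sketch computes the fold-dilutions of the measured stock (4.60 × 10^6 GC/µL) needed to deliver each target level; the 1-µL seed volume is a hypothetical assumption, as the exact pipetting scheme is not specified here:

```python
import math

# Sketch of the seeding arithmetic: fold-dilutions of the measured stock
# (4.60e6 GC/uL by US CDC N1 RT-dPCR) needed so that a hypothetical 1-uL
# seed volume delivers each target level into a 50-mL wastewater sample.

STOCK_GC_PER_UL = 4.60e6
TARGETS_GC_PER_50_ML = [2.32e5, 2.32e4, 2.32e3, 2.32e2]

for target in TARGETS_GC_PER_50_ML:
    fold = STOCK_GC_PER_UL / target   # dilution so 1 uL carries `target` GC
    steps = math.log10(fold)          # equivalent number of ten-fold steps
    print(f"{target:.2e} GC/50 mL: dilute 1:{fold:,.0f} (~{steps:.1f} ten-fold steps)")
```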
Virus concentration
Viruses were concentrated from the SARS-CoV-2-seeded wastewater samples using the adsorption-extraction (AE) method, which has been commonly used to concentrate SARS-CoV-2 RNA from wastewater (Ahmed et al., 2020a; Jafferali et al., 2021; Juel et al., 2021; Sapula et al., 2021). The AE method began with the addition of dissolved MgCl2 to each sample to achieve a final concentration of 25 mM. After amendment with MgCl2, wastewater samples were immediately filtered through a 0.45-µm pore-size, 47-mm diameter electronegative HA membrane (HAWP04700; Merck Millipore Ltd, Sydney, Australia) using a magnetic filter funnel (Pall Corporation) and filter flask (Merck Millipore Ltd.) (Ahmed et al., 2020a). Following filtration, using aseptic technique, the membrane was immediately removed, rolled, and inserted into a 5-mL bead-beating tube (Qiagen, Valencia, CA) for nucleic acid extraction.
Nucleic acid extraction
Immediately after virus concentration, nucleic acid was extracted directly from the HA membranes using the RNeasy PowerWater Kit (Cat. No. 14700-50-NF; Qiagen, Valencia, CA). Prior to homogenization, 990 µL of buffer PM1 and 10 µL of β-mercaptoethanol (Sigma-Aldrich; M6250-10 mL) were added to each bead-beating tube. The bead-beating tubes were then homogenized using a Precellys 24 tissue homogenizer (Bertin Technologies, Montigny-le-Bretonneux, FR) set for 3 × 15 s at 10,000 rpm with 10-s intervals. After homogenization, the tubes were centrifuged at 4,000 × g for 5 min to pellet the filter debris and beads. Sample lysate supernatant (600–800 µL) was then used to extract nucleic acid following the manufacturer's protocol, with two modifications: (i) the DNase I digestion step was omitted so that total nucleic acid (i.e., both RNA and DNA) was isolated; and (ii) nucleic acid was eluted in 200 µL of DNase- and RNase-free water instead of 100 µL. Nucleic acid purity was verified by measuring the 260/280 absorbance ratio using a DeNovix spectrophotometer/fluorometer (Wilmington, DE, USA).
Inhibition assessment
After homogenization and before completing the rest of the nucleic acid extraction, a known quantity (1.5 × 10^4 GC) of murine hepatitis virus (MHV) was seeded into each lysate and pellet as an inhibition process control. The same quantity of MHV suspension was also added to a distilled-water extraction control (the same volume as the lysate) and subjected to extraction. The presence of PCR inhibition in nucleic acid samples extracted from wastewater was assessed using an MHV RT-qPCR assay (Besselsen et al., 2002). The reference quantification cycle (Cq) values obtained for MHV seeded into distilled water were compared with the Cq values for MHV seeded into wastewater lysate to assess potential RT-qPCR inhibition. If the Cq value of a sample differed from the reference Cq value of the distilled-water control by more than 2 cycles, the sample was considered inhibited (Ahmed et al., 2018; Ahmed et al., 2020c). In addition to the extraction control, all samples were analyzed alongside three PCR negative controls.
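The inhibition criterion can be expressed as a one-line check; the following Python sketch uses hypothetical Cq values:

```python
# The inhibition rule above as code: a sample is flagged as inhibited when
# its MHV Cq exceeds the distilled-water reference Cq by more than 2 cycles.

def is_inhibited(sample_cq: float, reference_cq: float,
                 threshold: float = 2.0) -> bool:
    return (sample_cq - reference_cq) > threshold

# Hypothetical Cq values:
print(is_inhibited(31.7, 30.1))   # False (delta Cq = 1.6)
print(is_inhibited(33.4, 30.1))   # True  (delta Cq = 3.3)
```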
RT-qPCR and RT-dPCR analysis
Previously published RT-qPCR assays were used for MHV (Besselsen et al., 2002) and for SARS-CoV-2 detection/quantification (US CDC, 2020; China CDC, 2020; Corman et al., 2020) (Supplementary Table ST1). For the MHV assay, positive control material in the form of gBlocks gene fragments was purchased from Integrated DNA Technologies (Coralville, IA, US). Gamma-irradiated SARS-CoV-2, described above, was used as the RT-qPCR standard for the SARS-CoV-2 US CDC N1, US CDC N2, CCDC N, CCDC ORF1ab, and E_Sarbeco assays. Standard curve dilutions ranged from 6 × 10^5 to 0.6 GC/reaction. Primer and probe sequences, reaction concentrations, and thermal cycling conditions are listed in Supplementary Table ST1.
RT-qPCR analyses were performed in 20-µL reaction mixtures using TaqMan Fast Virus 1-Step Master Mix (Applied Biosystems, California, USA). Each mixture contained 5 µL of Supermix and 5 µL of template RNA; the forward primer/reverse primer/probe concentrations were 300/300/400 nM for MHV, 500/500/125 nM for US CDC N1 and N2, 400/400/250 nM for CCDC N, 300/300/300 nM for CCDC ORF1ab, and 400/400/200 nM for E_Sarbeco. The RT-qPCR experiments were performed on a Bio-Rad CFX96 thermal cycler (Bio-Rad Laboratories, Richmond, CA, USA) using manual settings for threshold and baseline.
For digital RT-PCR (RT-dPCR), the US CDC N1 assay was performed in 40-µL reaction mixtures using the QIAcuity One-Step Viral RT-PCR Kit (Cat No. 1123145; Qiagen) and 26K 24-well Nanoplates (Cat No. 250001; Qiagen). These are microfluidic dPCR plates that allow the processing of up to 24 samples with up to 26,000 partitions per well; the PCR reaction occurs in each partition, and the partition volume is 0.91 nL. Each US CDC N1 RT-dPCR mixture contained 10 µL of Supermix, 800 nM of forward primer, 800 nM of reverse primer, 200 nM of probe, 0.4 µL of reverse transcriptase, and 10 µL of template RNA. Two RT-dPCR replicates were analyzed for each sample. The 40-µL RT-dPCR reactions were prepared in a 96-well pre-plate and then transferred into the 26K 24-well Nanoplate. The Nanoplate was then loaded onto the QIAcuity dPCR 5-plex platform (Qiagen) and subjected to a workflow that included: (i) a priming and rolling step to generate and isolate the chamber partitions; (ii) an amplification step using the thermal cycling protocol; and (iii) a final imaging step in the FAM channel. Each RT-dPCR experiment was performed with duplicate RT-dPCR no-template and positive (gamma-irradiated SARS-CoV-2 RNA) controls. Data were analyzed using the QIAcuity Suite Software V1.1.3.193 (Qiagen, Germany), and quantities were exported as GC/µL of reaction. The RT-dPCR assays were performed using automatic settings for threshold and baseline. MIQE and dMIQE checklists are provided in Supplementary Tables ST4 and ST5, respectively.
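For readers unfamiliar with dPCR quantification, the following Python sketch shows the standard Poisson conversion from partition counts to concentration that instrument software of this kind performs internally; it uses the 0.91-nL partition volume stated above with hypothetical partition counts, and is not the vendor's actual implementation:

```python
import math

# Standard dPCR Poisson quantification from partition counts, using the
# 0.91-nL partition volume stated above. Partition counts are hypothetical;
# this sketches the calculation, not the vendor software's implementation.

def dpcr_gc_per_ul(positive: int, total: int,
                   partition_nl: float = 0.91) -> float:
    """Mean target concentration in GC/uL of reaction."""
    if positive >= total:
        raise ValueError("saturated plate: all partitions positive")
    lam = -math.log(1.0 - positive / total)  # mean copies per partition
    return lam / (partition_nl * 1e-3)       # convert nL to uL

print(round(dpcr_gc_per_ul(1250, 21000), 1))  # ~67.4 GC/uL of reaction
```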
RT-qPCR and RT-dPCR ALODs
To determine the RT-qPCR and RT-dPCR assay limits of detection (ALODs), gamma-irradiated SARS-CoV-2 was serially diluted (6 × 10^5 to 0.6 GC/reaction) and analyzed by RT-qPCR and RT-dPCR, with 15 replicates at each dilution. The 95% ALOD was defined by fitting an exponential survival model to the proportion of PCR replicates positive at each dilution step (Verbyla et al., 2016).
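A minimal Python sketch of this fitting procedure is shown below; the replicate counts are hypothetical, not the study's raw data:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Fit the exponential survival model p(c) = 1 - exp(-r * c) to replicate
# detection data by maximum likelihood and report the 95% ALOD as
# -ln(0.05)/r. Replicate counts below are hypothetical.

conc = np.array([6e5, 6e4, 6e3, 600.0, 60.0, 6.0, 0.6])  # GC/reaction
npos = np.array([15, 15, 15, 15, 15, 13, 4])             # positive replicates
ntot = np.full(len(npos), 15)                            # 15 replicates/dilution

def neg_log_lik(log_r: float) -> float:
    r = np.exp(log_r)
    p = np.clip(1.0 - np.exp(-r * conc), 1e-12, 1 - 1e-12)
    return -np.sum(npos * np.log(p) + (ntot - npos) * np.log(1.0 - p))

fit = minimize_scalar(neg_log_lik, bounds=(-10.0, 5.0), method="bounded")
r_hat = np.exp(fit.x)
print(f"r = {r_hat:.3f}, 95% ALOD = {-np.log(0.05) / r_hat:.1f} GC/reaction")
```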
Quality control
To minimize RT-qPCR and RT-dPCR contamination, nucleic acid extraction and RT-qPCR/dPCR setup were performed in separate laboratories. A sample negative control was included during the concentration process, and an extraction negative control was included during nucleic acid extraction to account for any contamination during extraction. All sample and extraction negative controls were negative for the analyzed targets.
Data analysis
For RT-qPCR and RT-dPCR, the ALOD is defined as the minimum GC number with a 95% probability of detection, determined as previously described (Verbyla et al., 2016). For RT-qPCR, samples were considered positive (SARS-CoV-2 detected) if amplification was observed in at least one of the three replicates within 45 cycles, and quantifiable if amplification was observed in all three replicates at concentrations above the ALOD. For RT-dPCR, samples were considered positive if there was at least one positive partition after merging the partitions of the two replicate wells, and quantifiable if the concentration was above the ALOD and the average number of partitions exceeded 11,000 per sample well.
For RT-qPCR and RT-dPCR, the PLOD is defined as the minimum GC number with a 95% probability of detection, incorporating the loss of SARS-CoV-2 through sample concentration and RNA extraction, and was determined as previously described (Stokdyk et al., 2016; Verbyla et al., 2016). The PLOD was estimated for each assay using two probability models that predict the proportion of positive amplifications within 45 cycles: the exponential survival model and the beta-Poisson model (Verbyla et al., 2016). For comparison, the cumulative Gaussian model (Stokdyk et al., 2016) was also used to estimate the 50% and 95% detection probabilities. Maximum likelihood estimates of the model parameters were obtained by finding the parameter values that minimized the deviance, based on the positive and total numbers of replicates analyzed at the different concentrations of seeded SARS-CoV-2. Since the exponential model is a special case of the beta-Poisson model (Haas et al., 2014), a chi-squared test was performed to determine whether the fit provided by the beta-Poisson model was worth the extra degree of freedom.
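The model comparison can be sketched as follows: both models are fitted by maximum likelihood over log-transformed parameters and compared with a 1-degree-of-freedom likelihood-ratio chi-squared test. The detection counts are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

# Fit the exponential survival and (approximate) beta-Poisson models to
# seeded-sample detection data, then test whether the extra beta-Poisson
# parameter is justified. Counts are hypothetical: 27 PCR replicates per
# seeding level.

seed = np.array([2.32e5, 2.32e4, 2.32e3, 2.32e2])  # GC/50 mL
npos = np.array([27, 27, 17, 6])
ntot = np.array([27, 27, 27, 27])

def nll(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.sum(npos * np.log(p) + (ntot - npos) * np.log(1.0 - p))

def nll_exp(theta):                 # p(c) = 1 - exp(-r c), r = e^theta
    return nll(1.0 - np.exp(-np.exp(theta[0]) * seed))

def nll_bp(theta):                  # p(c) = 1 - (1 + c/beta)^(-alpha)
    alpha, beta = np.exp(theta)
    return nll(1.0 - (1.0 + seed / beta) ** (-alpha))

fit_exp = minimize(nll_exp, x0=[-8.0], method="Nelder-Mead")
fit_bp = minimize(nll_bp, x0=[0.0, 8.0], method="Nelder-Mead")

lr = 2.0 * (fit_exp.fun - fit_bp.fun)          # deviance difference
print(f"LR = {lr:.2f}, p = {chi2.sf(lr, df=1):.3f}")

r_hat = np.exp(fit_exp.x[0])
print(f"exponential 95% PLOD = {-np.log(0.05) / r_hat:,.0f} GC/50 mL")
```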
For the seeding levels that yielded a 100% detection rate, the variation in the estimated SARS-CoV-2 RNA GC number for each assay was assessed via the coefficient of variation (CV). The SARS-CoV-2 recovery efficiency for all RT-qPCR and RT-dPCR assays was calculated from the quantified GC as follows: Recovery efficiency (%) = (GC recovered in concentrated wastewater / GC seeded) × 100. At each concentration step with a 100% detection rate, differences in recovery efficiency between assays were assessed by the Kruskal-Wallis H test with Dunn's post hoc test (Kruskal and Wallis, 1952; Dunn, 1964). Differences in recovery efficiency between concentration steps for each assay were assessed by the Mann-Whitney U test (Mann and Whitney, 1947). Statistical significance was defined as p < 0.05.
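A compact Python sketch of the recovery-efficiency calculation and the nonparametric tests is shown below; the recovered GC values are hypothetical:

```python
import numpy as np
from scipy.stats import kruskal, mannwhitneyu

# Recovery efficiency and the nonparametric tests described above, with
# hypothetical per-sample GC quantities recovered from samples seeded at
# 2.32e4 GC/50 mL.

SEEDED_GC = 2.32e4
recovered = {
    "US CDC N1": np.array([2100.0, 1500.0, 2800.0, 1900.0, 1200.0, 2500.0]),
    "CCDC N":    np.array([700.0, 500.0, 900.0, 600.0, 450.0, 800.0]),
    "E_Sarbeco": np.array([5200.0, 6100.0, 4300.0, 5600.0, 4800.0, 6500.0]),
}

recovery = {a: 100.0 * gc / SEEDED_GC for a, gc in recovered.items()}
for assay, rec in recovery.items():
    cv = 100.0 * rec.std(ddof=1) / rec.mean()
    print(f"{assay}: mean recovery {rec.mean():.1f}%, CV {cv:.0f}%")

# Kruskal-Wallis H test across assays at one seeding level:
h, p = kruskal(*recovery.values())
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}")

# Mann-Whitney U test between two seeding levels for one assay
# (the second set of recoveries is also hypothetical):
u, p = mannwhitneyu(recovery["US CDC N1"],
                    np.array([11.2, 8.4, 14.1, 9.8, 7.6, 12.3]))
print(f"Mann-Whitney U = {u:.1f}, p = {p:.4f}")
```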
Assay performance and relevant QA/QC
A 260/280 nm absorbance ratio >1.80 for RNA extracted from all wastewater samples was considered acceptable RNA quality (Supplementary Table ST2) (Sambrook et al., 1989). The RT-qPCR standard curves prepared from gamma-irradiated SARS-CoV-2 had a linear dynamic range of quantification from 6 × 10^5 to 6 GC/reaction (1.2 × 10^5 to 1.2 GC/µL). The slopes of the standard curves ranged from −3.31 (CCDC N) to −3.48 (E_Sarbeco) (Table 1). The amplification efficiencies (94.0 to 100%) and y-intercepts (36.3 for US CDC N1 to 39.8 for E_Sarbeco) were within the ranges prescribed by the MIQE guidelines (Bustin et al., 2009). The correlation coefficients (r^2) ranged from 0.98 (CCDC N) to 0.99 (E_Sarbeco). The ALODs for the RT-qPCR assays were between 9.50 and 48.1 GC/reaction, lowest for the US CDC N1 assay and greatest for the E_Sarbeco assay (Table 1). The RT-dPCR ALOD was 3.30 GC/reaction for the US CDC N1 assay. All method, extraction, and RT-qPCR/RT-dPCR negative controls were negative. All positive controls and standard curves amplified in each PCR run. For the US CDC N1 RT-dPCR, the number of partitions ranged from 11,522 to 25,417, with a mean of 21,148 and an SD of 3,437. PCR inhibition was not identified in any RNA samples based on the seeded GC of MHV (all within 2 Cq of the reference Cq value) (Supplementary Table ST3).
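The reported amplification efficiencies follow from the standard-curve slopes via the standard relation E = 10^(−1/slope) − 1; a quick check of the two extreme slopes reported above:

```python
# Amplification efficiency from a standard-curve slope: E = 10**(-1/slope) - 1.

def amplification_efficiency(slope: float) -> float:
    return 10.0 ** (-1.0 / slope) - 1.0

for assay, slope in [("CCDC N", -3.31), ("E_Sarbeco", -3.48)]:
    print(f"{assay}: slope {slope} -> {100 * amplification_efficiency(slope):.1f}%")
# CCDC N: ~100.5%; E_Sarbeco: ~93.8% -- consistent with the 94.0-100% range
```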
Table 1. RT-qPCR performance characteristics and assay limit of detection (ALOD).

Overall, the US CDC N1 RT-qPCR and US CDC N1 RT-dPCR assays outperformed the other assays. The US CDC N2 and E_Sarbeco assays were the least sensitive, which was most evident at the lower seeding dilutions. For all six RT-qPCR and RT-dPCR assays, the exponential survival model effectively estimated the probabilities of SARS-CoV-2 detection considering the entire methodological process (Table 3) (Verbyla et al., 2016). Any improvements in fit provided by the beta-Poisson model were not worth the extra degree of freedom, as indicated by chi-squared tests (p-values were all >0.05). Among the RT-qPCR assays, and considering the definition of the PLOD as the concentration with a 95% probability of detection, the lowest PLOD value was 3,954 GC/50 mL for the US CDC N1 assay, followed by 6,651 GC/50 mL for CCDC N. The PLOD values of US CDC N2 and E_Sarbeco were much greater than those of the other assays. The US CDC N1 RT-dPCR exhibited the lowest limits of detection, ranging from 33.4 (5% probability of detection) to 1,952 (95% probability of detection) GC/50 mL. We also determined the 50% and 95% probabilities of detection of SARS-CoV-2 RNA for all six RT-qPCR and RT-dPCR assays using the cumulative Gaussian model (Table 3). Among the RT-qPCR assays, the lowest 95% PLOD value estimated by the cumulative Gaussian model was again that of the US CDC N1 assay (3,621 GC/50 mL), followed by CCDC ORF1ab (3,948 GC/50 mL); the PLOD values of US CDC N2 and E_Sarbeco were much greater than those of the other assays. The US CDC N1 RT-dPCR again exhibited the lowest limits of detection, from 448 (50% probability of detection) to 2,970 (95% probability of detection) GC/50 mL.
Variation in quantification
For all RT-qPCR and RT-dPCR assays, SARS-CoV-2 RNA was only quantifiable at seeding levels ≥2.32 × 10^4 GC/50 mL of wastewater, as shown in Table 4. The US CDC N2 RT-qPCR assay demonstrated the largest variation among quantifiable samples, with CVs of 89% and 147% at 2.32 × 10^5 and 2.32 × 10^4 GC/50 mL, respectively. As shown in Fig. 1, the CVs of the CCDC N (50%, 52%), CCDC ORF1ab (51%, 40%), and E_Sarbeco (44%, 59%) RT-qPCR assays were similar to those of the US CDC N1 RT-dPCR assay (52%, 54%) at the two higher seeding levels. Besides the US CDC N2 RT-qPCR assay, the greatest increase in CV between the two higher seeding levels was observed for the US CDC N1 RT-qPCR assay (47% to 76%). Interestingly, for the CCDC ORF1ab RT-qPCR assay the CV decreased from 51% to 40%, suggesting improved quantitative precision as the seeding level decreased; for all other assays, precision decreased with decreasing seeding level, as expected. No assay yielded quantitative results when wastewater was seeded with 2.32 × 10^3 or 2.32 × 10^2 GC/50 mL.
Recovery efficiency
As summarized in Table 5, the mean recoveries for the RT-qPCR and RT-dPCR assays ranged from 3.63% (CCDC N) to 41.2% (E_Sarbeco) at 2.32 × 10^5 GC/50 mL and from 2.77% (CCDC N) to 23.8% (E_Sarbeco) at 2.32 × 10^4 GC/50 mL. At both of these higher seeding levels, the CCDC N assay demonstrated the lowest mean recovery and the E_Sarbeco assay the highest.

Table 2. Proportion of samples and replicates positive for SARS-CoV-2 RNA in wastewater seeded at four concentrations using five RT-qPCR assays and one RT-dPCR assay.

Table 3. The probability of detecting SARS-CoV-2 RNA as determined using wastewater samples seeded with SARS-CoV-2, concentrated with the adsorption-extraction method, and assayed using five RT-qPCR assays and one RT-dPCR assay. The % probability of detection was estimated using two probability models: the exponential survival and cumulative Gaussian models. The process limit of detection (PLOD) was defined as the concentration associated with a 95% probability of detection.

As shown in Fig. 2, the largest variation in recovery efficiency was found for the US CDC N2 RT-qPCR, with CVs of 85.1% at 2.32 × 10^5 and 130% at 2.32 × 10^4 GC/50 mL. While mean recoveries did vary, statistically significant differences were only observed between the E_Sarbeco RT-qPCR assay and the US CDC N1 (p = 0.028), US CDC N2 (p = 0.024), CCDC N (p < 0.0001), and RT-dPCR N1 (p = 0.0003) assays at the higher seeding level, and between the E_Sarbeco RT-qPCR assay and the US CDC N2 (p = 0.002), CCDC N (p < 0.0001), CCDC ORF1ab (p = 0.026), and RT-dPCR N1 (p = 0.0003) assays at the lower seeding level. Mean recovery efficiency increased significantly with increasing seeding level for the US CDC N2 (p = 0.011), CCDC ORF1ab (p = 0.008), and E_Sarbeco (p = 0.040) RT-qPCR assays. The recovery efficiencies of the US CDC N1 RT-qPCR, CCDC N, and US CDC N1 RT-dPCR assays remained similar between the two seeding levels. Aside from the E_Sarbeco RT-qPCR assay, mean recovery efficiencies were most often below 10% across all assays and seeding levels.
Discussion
Many wastewater SARS-CoV-2 surveillance studies have provided ALOD values by serially diluting standard materials and assaying the dilution series with various RT-qPCR assays (Gerrity et al., 2021; Randazzo et al., 2020; Chavarria-Miró et al., 2021). Assuming a Poisson model for the distribution of target GC into PCR reactions, the theoretical 95% probability of detection corresponds to approximately 3 gene copies per PCR reaction (−ln(1 − 0.95)) (Bustin et al., 2009). However, the models and methods used to determine ALODs, and their estimated values, vary widely between studies, even among wastewater surveillance studies for SARS-CoV-2 RNA. For example, during wastewater surveillance in Virginia using droplet dPCR (ddPCR), Gonzalez et al. (2020) determined ALODs using a 60% probability of detection (N1 = 14.6 GC/reaction), whereas in northern Indiana, Bivins et al. (2021b), using the same ddPCR platform, estimated the ALOD using a 95% probability of detection (N1 = 3.3 GC/reaction). ALODs reported in the SARS-CoV-2 wastewater surveillance literature have ranged from 1 GC/reaction (Gerrity et al., 2021) to 50 GC/reaction (Randazzo et al., 2020) and as high as 100 GC/reaction (Chavarria-Miró et al., 2021). However, it should be noted that the control materials and statistical methods used to estimate the ALODs are not consistent from one study to another.
In the current study, we determined the ALOD for each RT-qPCR/dPCR assay using an exponential survival model (Verbyla et al., 2016). The exponential survival model deviates from the Poisson model by incorporating a probability, r, that describes the chance of "survival" of a target copy through the entire analytical workflow to successful detection (the Poisson model assumes r = 1.0), so the estimated ALOD increases above the Poisson value by the inverse of r (−ln(1 − 0.95)/r). The observed ALODs of the RT-qPCR assays suggest that the survival probabilities deviate from 1, with the US CDC N1 assay demonstrating the highest survival probability (3/9.5 = 0.32) and E_Sarbeco the lowest (3/48.1 = 0.06). Conversely, the US CDC N1 RT-dPCR assay's survival probability of 0.91 (3/3.3) was the closest to 1. These results suggest that, for the RT-qPCR assays, there are sources of error that reduce the detection probability even when a copy of the target has a high probability of having been added to the reaction well (according to the Poisson distribution). These sources of error could include inhibition associated with the matrix or inefficiency of the PCR reaction in amplifying the control material. In the current study, RNA extracted from gamma-irradiated SARS-CoV-2 was used as the control material, so inhibition is expected to be minimal. Since the ALODs were measured over two days with minimal freeze-thawing, degradation of RNA during this time frame is also expected to be minimal.
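The survival probabilities quoted above follow directly from the fitted ALODs; a short worked computation:

```python
import math

# Under the exponential survival model, ALOD95 = -ln(0.05)/r, so the
# implied per-copy survival probability is r = -ln(1 - 0.95)/ALOD95.

poisson_95 = -math.log(1.0 - 0.95)   # ~2.996 GC/reaction
for assay, alod95 in [("US CDC N1 RT-qPCR", 9.5),
                      ("E_Sarbeco RT-qPCR", 48.1),
                      ("US CDC N1 RT-dPCR", 3.3)]:
    print(f"{assay}: r = {poisson_95 / alod95:.2f}")
# -> 0.32, 0.06 and 0.91, matching the values quoted above
```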
While the ALOD provides information on the lowest number of GC that can be reliably detected by the RT-qPCR assays when analyzing pure culture, plasmid, or other control materials, it does not incorporate the loss of target genomes in the sample matrix during primary and secondary concentration and nucleic acid extraction. For clinical samples, the results obtained with pure reverse-transcribed RNA transcript standards and with seeded samples differed for the CCDC N and CCDC ORF1ab assays (Vogels et al., 2020), indicating that comparisons of RT-qPCR assays should be performed in relevant matrices, such as clinical and wastewater samples. The PLOD is the matrix-relevant limit of detection incorporating the inefficiencies of the entire workflow. Determination of the PLOD is not common practice in the research literature, especially for wastewater samples, but has been conducted for microbial source tracking marker genes, pathogens, and indicator viruses in surface and drinking water samples (Stokdyk et al., 2016; Symonds et al., 2016; Staley et al., 2012; Verbyla et al., 2016; Ahmed et al., 2018). In this study, we determined PLOD values for SARS-CoV-2 RNA by seeding known concentrations of SARS-CoV-2 into wastewater samples and using five different RT-qPCR assays and one RT-dPCR assay. To the best of our knowledge, this is the first study to assess the SARS-CoV-2 PLOD for wastewater, and it provides insights into the analytical limitations for trace detection of SARS-CoV-2 in wastewater.
For PLOD determination, we used 36 different wastewater samples collected from 36 WWTPs, representing varying wastewater characteristics. At seeding levels of 2.32 × 10^5 and 2.32 × 10^4 GC/50 mL, all 18 wastewater samples and their PCR replicates were positive by each RT-qPCR and RT-dPCR assay, suggesting that at these seeding levels detection of SARS-CoV-2 in wastewater is relatively straightforward using the combination of adsorption extraction and the RNeasy PowerWater Kit. However, at seeding levels of 2.32 × 10^3 and 2.32 × 10^2 GC/50 mL, detection rates decreased, with the lowest detection rates observed at 2.32 × 10^2 GC/50 mL; at these two seeding levels, consistent amplification was not observed for any RT-qPCR or RT-dPCR assay. Interestingly, the US CDC N1 assay outperformed the other assays in detection sensitivity. The superior performance of the US CDC N1 assay compared to other assays has been reported in the research literature (Ahmed et al., 2022; Chavarria-Miró et al., 2021; Feng et al., 2021; Pecson et al., 2021; Pérez-Cataluña et al., 2021). Our findings also corroborate a recent study suggesting that US CDC N1 is suitable for screening SARS-CoV-2 in wastewater where COVID-19 prevalence is low (Zhang et al., 2022). The performance of the US CDC N1 RT-qPCR and RT-dPCR assays was similar; however, RT-dPCR yielded more positive replicates than RT-qPCR. The increased sensitivity of RT-dPCR compared to RT-qPCR for the detection of SARS-CoV-2 RNA in wastewater has also been reported elsewhere (Ahmed et al., 2022; Ciesielski et al., 2021; Graham et al., 2020). These findings clearly suggest that sub-sampling error occurs when the seeding level of SARS-CoV-2 RNA is <2.32 × 10^3 GC/50 mL of wastewater, which could introduce false-negative errors in RNA detection (Taylor et al., 2019). Furthermore, factors such as stochastic amplification, measurement uncertainty, variability in virus concentration and RNA extraction methods, inefficient RT-qPCR, and stochastic levels of inhibitors could affect the PLOD and the reproducibility of results.
If the objective of SARS-CoV-2 wastewater surveillance is early detection of COVID-19 cases in a community, then the factors mentioned above and their associated variabilities should be carefully considered by analytical laboratories and public health units. Every amplification, whether or not it is reproducible between replicates, and even with a Cq value >40, should be considered a potential positive (given appropriate negative results in all negative controls) and reported to the relevant agencies. For a virus such as SARS-CoV-2, which is highly contagious, it may be better to be overly cautious in order to manage outbreaks effectively. The results of this study also highlight the importance of modelling detection limits in PCR-based methods as probabilities rather than fixed limits. This approach has been described previously for determining the ALOD (Verbyla et al., 2016; Forootan et al., 2017), but we have now applied it to determine the PLOD. Considering the loss of SARS-CoV-2 RNA throughout the sample concentration and analysis process, and the observation that the survival of the target during the PCR process is frequently less than 100%, a modelling approach can help determine a laboratory's probability of detecting the target when concentrations are very low. For example, in the present study, we found that SARS-CoV-2 was detected using the US CDC N1 RT-qPCR assay with a probability of 50% when the concentration was 915 GC/50 mL of wastewater. Therefore, at a concentration of 915 GC/50 mL, processing 50-mL samples in triplicate would increase the probability of detection in at least one of the field triplicates to 87.5% (1 − 0.5^3 = 0.875). From a public health perspective, it is important to be able to detect the virus at concentrations below the 95% PLOD; however, laboratories may not be able to simply increase the volume of sample concentrated (e.g., due to filter clogging or an increasing likelihood of inhibition).
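The replicate arithmetic generalizes to any number of field replicates; a minimal sketch:

```python
# Probability that at least one of n field replicates is positive when a
# single sample is detected with probability p: 1 - (1 - p)**n.

def p_at_least_one(p_single: float, n_replicates: int) -> float:
    return 1.0 - (1.0 - p_single) ** n_replicates

# At 915 GC/50 mL, the US CDC N1 RT-qPCR detection probability is ~50%:
for n in (1, 2, 3):
    print(f"n = {n}: {p_at_least_one(0.50, n):.3f}")
# n = 3 gives 0.875, as stated above
```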
Thus, another way to improve the positive predictive value of wastewater surveillance would be to analyze field replicates, which would increase the probability of amplification in at least one replicate. Developing a probability-of-detection model would allow a wastewater surveillance team to weigh the costs and benefits of adding field replicates to the standard operating procedure. Characterizing the sources of variation in SARS-CoV-2 RNA measurements in wastewater and fully reporting the associated uncertainty are also important components of effective wastewater surveillance for public health applications (McClary-Gutierrez et al., 2021a; McClary-Gutierrez et al., 2021b). The acceptable variation in PCR-based quantification systems has been formalized as the assay limit of quantification (ALOQ); however, the allowable variation is debated, with proposed CV thresholds ranging from 25 to 35% (Klymus et al., 2020; Kralik and Ricchi, 2017; Forootan et al., 2017). For wastewater surveillance of SARS-CoV-2 RNA, the intrinsic and workflow-derived variability remains largely uncharacterized due to limited method replication, and the relationship between an ALOQ derived via control materials and idealized matrices and that applicable to actual wastewater samples remains uncertain (McClary-Gutierrez et al., 2021a; Medema et al., 2020b).
In the current study, we characterized the variation through the entire sampling workflow by seeding SARS-CoV-2 into wastewater from different WWTPs, with replication of both filters and RT-qPCR/dPCR reactions. For all assays and measurement platforms, SARS-CoV-2 RNA was quantifiable only when seeding levels were ≥2.32 × 10^4 GC/50 mL of wastewater. The US CDC N2 RT-qPCR assay demonstrated the greatest variation, with CVs of 89% and 147% at 2.32 × 10^5 and 2.32 × 10^4 GC/50 mL of wastewater, respectively. The remaining RT-qPCR assays demonstrated CVs ranging from 40 to 60% across both quantifiable seeding levels, comparable with the US CDC N1 RT-dPCR assay. The CVs of the US CDC N1 RT-dPCR at the two seeding levels in the current study (54% and 52%) were greater than those reported for filter replicates during wastewater surveillance in Wisconsin, USA (24% to 29%), but it is unclear whether the results are comparable, since increased variation would be expected at lower SARS-CoV-2 RNA concentrations (Feng et al., 2021). Furthermore, differences in wastewater characteristics and in the analysis of exogenous versus endogenous SARS-CoV-2 introduce heterogeneity between studies.
During an inter-laboratory comparison of 36 sampling methods, recovery-corrected SARS-CoV-2 RNA 10th and 90th percentile measurements spanned 2.3 log10 GC across all methods that yielded quantifiable results (Pecson et al., 2021). A smaller methods comparison in Canada found that, across laboratories and methods, seeded SARS-CoV-2 and human coronavirus strain 229E concentrations were within 1 log10 GC of one another (Chik et al., 2021). A wastewater surveillance interlaboratory study conducted at three WWTPs in Utah reported 10th to 90th percentile measurements spanning roughly 0.6 log10 GC (Weidhaas et al., 2021). In the current study, we observed 10th to 90th percentiles spanning 0.47 to 0.99 log10 GC at the 2.32 × 10^5 GC/50 mL seeding level and 0.42 to 2.33 log10 GC at the 2.32 × 10^4 GC/50 mL seeding level. While the variation in SARS-CoV-2 RNA measurements in the current study appears to be in reasonable agreement with those previously reported, direct comparisons are precluded by uncertainty regarding the similarity of the endogenous and seeded SARS-CoV-2 RNA concentrations in the wastewaters used in these studies. Nonetheless, our robust study design, which included replication of both filters and RT-qPCR/RT-dPCR reactions and wastewater samples from multiple WWTPs, indicates that, after accounting for process inefficiencies, the variation in measured SARS-CoV-2 RNA may exceed the thresholds typically considered acceptable for limits of quantification.
Since we seeded known concentrations of SARS-CoV-2 into wastewater samples, we were able to calculate the full process recovery efficiency (the loss of SARS-CoV-2 through the concentration and extraction methods used in this study) for the different assays and platforms. We calculated recovery efficiencies at the 2.32 × 10^5 and 2.32 × 10^4 GC/50 mL seeding levels because all wastewater samples and associated RT-qPCR replicates were quantifiable by all RT-qPCR assays at these levels. The mean recovery efficiencies for all assays were greater at the 2.32 × 10^5 seeding level than at 2.32 × 10^4, suggesting that the concentration of SARS-CoV-2 present in wastewater influences method recovery: the greater the concentration, the greater the recovery and downstream detection rates. Similar results have been reported in a recent study, where greater recovery was observed for the bacteriophage phi 6 surrogate at the highest seeding level compared to the lowest (Sangsanont et al., 2022). This is not unexpected considering we used wastewater samples from different WWTPs with variable suspended solids (TSS = 670 to 825 mg/L), while most recovery assessment studies have used a single or a limited number of bulk wastewater samples (Ahmed et al., 2020d). Feng et al. (2021) observed a significant difference in the recovery of BCoV between WWTPs, with some WWTPs having more consistent recovery rates (0.21% to 3.0%) than others (0.89% to 28.0%).
However, relatively consistent recoveries were obtained for SARS-CoV-2 using the US CDC N1 (both RT-qPCR and RT-dPCR), CCDC N, and CCDC ORF1ab assays. The variation in recovery efficiency was greater for the US CDC N2 and E_Sarbeco assays, suggesting that these assays alone may not be sensitive enough to detect trace concentrations of SARS-CoV-2 in wastewater and should be used in combination with other assays. This is most likely due to the higher ALODs (i.e., poorer analytical sensitivity) of these two assays compared to the others. However, other factors, such as wastewater characteristics, RT-qPCR efficiencies, y-intercepts, and standard curve materials, may also introduce variability between assays. Based on the recovery data obtained in this study, we recommend recovery assessment via the most consistent assay, if possible using a dPCR platform. The recovery efficiencies presented in this study should be interpreted with care because measuring the actual concentration of SARS-CoV-2 in seeding stock is not straightforward (Kantor et al., 2021). Importantly, the findings of the current study are based on seeding wastewater with gamma-irradiated SARS-CoV-2. The behavior of an exogenous control, such as the one used in the current study, compared to endogenous SARS-CoV-2 shed into wastewater by infected individuals remains uncharacterized.
Conclusions
• Of the six assays evaluated, the US CDC N1 RT-dPCR assay, followed by its RT-qPCR counterpart, was the most sensitive regardless of the statistical model or seeding concentration used to determine the ALOD and the PLOD associated with the AE concentration method.
• The US CDC N2 and E_Sarbeco assays were the least sensitive, especially with decreasing seeding concentrations, when evaluated alone (ALOD) and as part of the AE concentration process (PLOD).
• Trends in SARS-CoV-2 RNA recovery efficiency mirrored the analytical sensitivities, with recovery efficiencies being least variable for US CDC N1 RT-dPCR and RT-qPCR and most variable for US CDC N2 and E_Sarbeco RT-qPCR.
• The greater the SARS-CoV-2 RNA concentration in a wastewater sample, the greater the recovery and downstream detection probability. When SARS-CoV-2 RNA seeding levels were <2.32 × 10^3 GC/50 mL, inconsistent amplification was observed and detection rates decreased for all assays.
• Thus, when SARS-CoV-2 RNA concentrations are expected to be low in wastewater, it may be necessary to improve the positive predictive value of wastewater surveillance by analyzing additional field and RT-PCR replicates.
• Comparing the behavior of endogenous SARS-CoV-2 RNA to that of various exogenous controls, such as the seeded SARS-CoV-2 in the current study, remains a critical research need for wastewater surveillance.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Vineland adaptive behavior scales to identify neurodevelopmental problems in children with Congenital Hyperinsulinism (CHI)
Background: Congenital Hyperinsulinism (CHI) is a disease of severe hypoglycaemia caused by excess insulin secretion and associated with adverse neurodevelopment in a third of children. The Vineland Adaptive Behavior Scales Second Edition (VABS-II) is a parent report measure of adaptive functioning that could be used as a developmental screening tool in patients with CHI. We have investigated the performance of VABS-II as a screening tool to identify developmental delay in a relatively large cohort of children with CHI. VABS-II questionnaires testing the communication, daily living skills, social skills, motor skills and behaviour domains were completed by parents of 64 children with CHI, presenting both in the early neonatal period (Early-CHI, n = 48) and later in infancy (Late-CHI, n = 16). Individual and adaptive composite (Total) domain scores were converted to standard deviation scores (SDS). VABS-II scores were tested for correlation with objective developmental assessment reported separately by developmental paediatricians and clinical and educational psychologists. VABS-II scores were also investigated for correlation with the timing of hypoglycaemia, gender and phenotype of CHI.

Results: Median (range) total VABS-II SDS was low in CHI [-0.48 (-3.60, 4.00)], with scores < -2.0 SDS in 9 (12%) children. VABS-II Total scores correctly identified developmental delay diagnosed by objective assessment in the majority [odds ratio (OR) (95% confidence interval, CI) 0.52 (0.38, 0.73), p < 0.001], with 95% specificity [area under curve (CI) 0.80 (0.68, 0.90), p < 0.001] for a cut-off < -2.0 SDS, although with low sensitivity (26%). VABS-II Total scores were inversely correlated (adjusted R^2 = 0.19, p = 0.001) with age at presentation (p = 0.024) and male gender (p = 0.036), males having lower scores than females in those with Late-CHI [-1.40 (-3.60, 0.87) v 0.20 (-1.07, 1.27), p = 0.014]. The presence of a genetic mutation representing severe CHI also predicted lower scores (R^2 = 0.19, p = 0.039).

Conclusions: The parent report VABS-II is a reliable and specific tool to identify developmental delay in CHI patients. Male gender, later age at presentation and severity of disease are independent risk factors for lower VABS-II scores.
Keywords: Glucose, Insulin, Vineland, Development, Cognitive assessment, Neurodevelopment, Developmental delay

Background

Congenital Hyperinsulinism (CHI) is a significant disorder of hypoglycaemia caused by excessive and unregulated insulin secretion [1,2]. CHI usually presents early in the neonatal period (Early-CHI), but later presentation (Late-CHI) is also well recognised [3-5]. A significant proportion of children with CHI have adverse neurodevelopmental outcomes in spite of improvements in medical care [3,4,6]. Identifying neurodevelopmental outcomes is a priority in follow-up care, as children with developmental needs may require additional support for physical and learning disabilities. The Vineland Adaptive Behavior Scales II© (VABS-II; Pearson Education Incorporated, San Antonio, Texas) parent report questionnaire is a tool that has been standardised for factors including gender, race, age and parental education to identify children with developmental delay in the domains of communication, daily living skills, social skills, motor skills and behaviour. VABS-II is useful as an adaptive functioning inventory that can be completed at home without time-consuming hospital visits and assessments. VABS-II has been used in a few children with CHI [7], but its reliability as a general screening tool to assess developmental delay in this population has not been evaluated. VABS-II could be a credible tool to screen for adverse neurodevelopment in children with CHI, particularly at a younger age, before formal, time-consuming cognitive testing is feasible. In this study, we have investigated the utility of VABS-II questionnaires as a parent report screening tool to identify developmental abnormalities in a relatively large population of children with CHI.
Aims
We aimed to (1) investigate the performance of VABS-II in identifying developmental delay in CHI, and (2) identify patient factors correlating with VABS-II scores.
Methods
Parents of a cohort of children with CHI (n = 64), presenting consecutively between 2013 and 2015 to a specialist CHI treatment centre, completed the VABS-II questionnaire following consent. The diagnosis and treatment of CHI were based on established criteria and clinical practice [1,2], and medical and surgical treatment was individualised for each child. Patients' characteristics and clinical outcome data were obtained from a patient database. CHI was considered early (Early-CHI) if hypoglycaemia presented in the first month of life; children who presented with hypoglycaemia after one month of age had Late-CHI, and in such children neonatal records did not provide evidence of persistent hypoglycaemia. Following hospital discharge, children with Early- and Late-CHI were assessed in the outpatient department by a clinical team comprising clinicians, specialist nurse practitioners, dieticians, speech and language therapists and one clinical psychologist. VABS-II was discussed with parents as a routine developmental screening tool after the age of 1 year.
VABS-II is a validated measure of intellectual and developmental functioning and has been used in children with neonatal conditions [8], neurological problems [9] and genetic problems [10]. It has also been used in a small cohort of children with CHI, but its reliability as a screening tool has not been assessed [7]. Although VABS-II can be applied from birth, milder forms of developmental delay may not be apparent until an older age, when clear progress in several developmental domains would be expected. Therefore, the minimum age for using the VABS-II was set at 18 months. No upper limit was specified; however, as scores for motor skills in children > 6 years are estimates, analyses of VABS-II scores were run both with and without children > 6 years.
The VABS-II questionnaire was posted to parents by the clinical psychologist (JN), who was trained and accredited to use and interpret the VABS-II; where necessary, she contacted parents by telephone to discuss queries about VABS-II responses. Completed questionnaires were returned to her for analysis in each of the domains of Communication, Daily Living Skills, Social Skills and Motor Skills [http://www.pearsonclinical.com/]. Domain scores were then compounded to derive the Adaptive Behaviour Composite (Total) score. For each VABS-II domain, scores were converted to standard deviation scores (SDS) based on a mean (SD) of 15 (3). Total VABS-II scores were converted to Total VABS-II SDS (VABS-II Total) based on a mean (SD) of 100 (15) (total scores are not additive). For each VABS-II domain and for composite scores, SDS < -2.0 was indicative of significant developmental delay. The Behaviour component of VABS-II was scored independently for internalising, externalising and total maladaptive behaviour, with raw scores inversely correlated with behaviour outcomes: high scores corresponded to poor behaviour outcomes, and for total Behaviour, scores ≥ 20 were considered unsatisfactory.
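For illustration, the standardisation and cut-off described above can be written as a short Python sketch (the raw scores below are hypothetical):

```python
# Sketch of the score standardisation described above: domain scores are
# referenced to mean 15 (SD 3) and the composite to mean 100 (SD 15);
# SDS < -2.0 flags significant developmental delay. Raw scores below are
# hypothetical.

def to_sds(score: float, mean: float, sd: float) -> float:
    return (score - mean) / sd

def significant_delay(sds: float, cutoff: float = -2.0) -> bool:
    return sds < cutoff

domain_sds = to_sds(8, mean=15, sd=3)        # hypothetical domain score
total_sds = to_sds(68, mean=100, sd=15)      # hypothetical composite score
print(f"domain SDS {domain_sds:.2f}, delay: {significant_delay(domain_sds)}")
print(f"total SDS {total_sds:.2f}, delay: {significant_delay(total_sds)}")
```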
VABS-II was assessed for test-retest variability by comparing the initial report with a second report in a volunteer group (n = 7) after at least 1 year; this repeat assessment was performed to investigate whether VABS-II results varied with the timing of the test in a way that could affect interpretation. VABS-II was also administered to a group of children with idiopathic ketotic hypoglycaemia (IKH) and normal neurodevelopment (n = 9) to assess test performance in an alternative condition of hypoglycaemia without significant adverse neurodevelopment.
VABS-II scores were compared with objective developmental assessments performed within 6 months of reporting. These assessments were performed by developmental paediatricians, clinical psychologists and educational psychologists who were unaware of the VABS-II scores; likewise, the parents of children with CHI were unaware of the VABS-II scores and report until after the objective developmental assessment had been performed. This testing was not centralised to the CHI centre; instead, objective developmental assessment relied on methods specific to the local health authority and remained blinded to the VABS-II results. Derogation to local services meant that uniformity of formal testing was not maintained, although the flexible choice of developmental assessments allowed testing of children of all abilities. The following developmental assessments were utilised: Wechsler Preschool and Primary Scale of Intelligence for Children - UK 4th Edition (WPPSI-IV), Wechsler Intelligence Scales for Children, 4th edition (WISC-IV UK), Movement Assessment Battery for Children - UK Second Edition (MAS-2), Bayley Scales of Infant and Toddler Development - Third Edition (Bayley-III) and Griffiths Developmental Scales. Objective assessment reports were available for 15 children, 6 from our centre and 9 from elsewhere; for the rest, information describing cognitive and developmental outcomes was obtained from patient clinical correspondence and from reports by community paediatricians and school assessments by educational psychologists. Formal or informal developmental testing was performed independently of VABS-II testing in all children in the cohort; the study design therefore did not control for the severity of neurodevelopmental outcome. As these tests varied in their reporting styles, no attempt was made to achieve uniformity of output, except for recording the presence or absence of delay in one or more domains of childhood development in the following categories: gross motor, fine motor, social and adaptive, communication and language. Brain neuroimaging was not performed routinely but was reserved for clinical need.
VABS-II scores were also investigated for correlations with the timing of hypoglycaemia presentation, gender and the phenotype of CHI, which included focal (a solitary hyperfunctioning lesion in the pancreas), diffuse (hyperfunction of all islets in the pancreas) and transient (resolving hypoglycaemia not requiring surgical treatment or long-term medical therapy) forms, as well as treatment response. Transient and persistent CHI were defined as per previous descriptions [3,11], with persistent forms indicating a requirement for medication or pancreatic surgery. Genetic mutation status was determined by testing for known genes associated with CHI using standard methods, as previously described [11]. Mutation status was positive if any pathogenic mutation was present, in heterozygous or homozygous form, regardless of the mode of inheritance. CHI gene mutation status was used as a proxy for greater severity, as known genetic forms have a greater requirement for medical or surgical therapy and are less likely to achieve disease resolution [11]. However, it is accepted that severity can vary between individuals with the same genotype, between heterozygous, homozygous and compound heterozygous mutations, and between K-ATP channel genes and non-K-ATP channel genes; it is also possible that children without genetic mutations may have severe disease. While genetic mutation status is not an ideal severity marker, other markers such as glucose infusion rates were not obtainable in patients with mild forms of CHI or those presenting late. We also utilised other severity markers, such as response to diazoxide, transient or persistent CHI and requirement for surgery, although we accept that such markers are not validated, may be non-concordant, may represent a disproportionately severe end of the disease spectrum and may introduce bias in statistical correlations.
VABS-II was also tested in 9 children with IKH (age range 3.00 to 5.40 years) to assess performance in children with hypoglycaemia not due to CHI. These children presented with ketotic hypoglycaemia in 2014-2015 and underwent investigations excluding known causes of hypoglycaemia, including CHI. The children recruited to the study did not have formal developmental assessment; however, they were reviewed by the clinical psychologist, who confirmed normal neurodevelopment. Their hypoglycaemia was not treated with regular medication or food supplements; instead, emergency hypoglycaemia prevention protocols incorporating additional carbohydrate intake during illness episodes were adopted.
Statistical analysis was performed with SPSS IBM© version 23.0 (IBM, New York, USA). VABS-II SDS between groups were compared by non-parametric tests; for repeat samples, paired tests with unequal variances were used. Probability of group membership was assessed by odds ratio, while sensitivity and specificity were assessed by receiver operating characteristic (ROC) curve analysis. The study was supported by the North West Research Ethics Committee, Project Reference Number 07/H1010/88.

Results

VABS-II scores were in the normal population range [-0.33 (-1.73, 1.13)] for the 9 children with IKH, in keeping with prior normal objective developmental assessment [Fig. 1]. VABS-II was repeated in 7 children with CHI (age range 3.00 to 9.30 years); individual domain and total scores remained similar [paired samples test, p = 0.18 to p = 0.95], suggesting that VABS-II was stable on repetition, without significant deviation with advancing age.
VABS-II scores in CHI correlate with developmental delay
VABS-II scores were correlated with developmental delay (involvement of at least one developmental domain) identified by objective developmental assessment [Table 1]. The correlation was also sustained when involvement of one or more developmental domains was regarded as a yes/no binary variable [odds ratio, OR (confidence interval, CI) 0.28 (0.13, 0.61), p = 0.001]. As motor skills scores were estimated in the group of children > 6 years old (n = 14), the analysis was re-run in the subgroup of children aged ≤ 6 years [Fig. 3].
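As an illustration of the screening metrics reported in this study, the following Python sketch computes sensitivity, specificity and a 2 × 2 odds ratio at the SDS < -2.0 cut-off; the counts are hypothetical, chosen only to reproduce the approximate 26% sensitivity and 95% specificity quoted in the abstract, and the cross-product odds ratio differs in construction from the regression-based OR reported above:

```python
# Hypothetical 2 x 2 screening table at the SDS < -2.0 cut-off; counts are
# chosen to illustrate ~26% sensitivity and ~95% specificity, not the
# study's raw data.
tp, fn = 6, 17   # delayed children: screen positive / screen negative
fp, tn = 2, 39   # non-delayed children: screen positive / screen negative

sensitivity = tp / (tp + fn)            # 6/23  ~ 0.26
specificity = tn / (tn + fp)            # 39/41 ~ 0.95
odds_ratio = (tp * tn) / (fp * fn)      # cross-product odds ratio
print(f"sensitivity {sensitivity:.2f}, specificity {specificity:.2f}, "
      f"OR {odds_ratio:.1f}")
```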
(2) Linking VABS-II scores to CHI severity: VABS-II scores were analysed for correlation with disease severity. The phenotypic characteristics describing the severity of CHI in our cohort are provided in the Appendix. Carriage of CHI gene mutations, serving as a possible proxy for greater severity, correlated with lower VABS-II Total scores in analysis of covariance with age at presentation as the covariate [adjusted R^2 = 0.19, p = 0.039], with a modest effect size of 10%. In keeping with mutation status as a marker of severity, responsiveness to diazoxide was also positively correlated with VABS-II scores, i.e., unresponsiveness to diazoxide was associated with lower VABS-II Total scores (p = 0.019); however, no significant effects were noted for transient or permanent CHI (p = 0.413), focal CHI (p = 0.742) or pancreatic surgery (p = 0.132).
Discussion
In this study, we assessed the reliability of VABS-II as a screening tool for developmental delay in children with CHI. VABS-II and maladaptive behaviour scores each correlated with developmental delay, and VABS-II had high specificity, indicating that positive screens accurately identify children who warrant formal developmental assessment. Male gender and a late age of hypoglycaemia presentation were risk factors for lower VABS-II scores. Diazoxide unresponsiveness and carriage of CHI genetic mutations, proxy markers of CHI severity, were also associated with lower VABS-II scores.
Although VABS-II is applied widely in children with different conditions, its utility and accuracy in children with CHI have not been investigated previously. Our study reviewed the performance of VABS-II in a relatively large cohort of children with the rare disease CHI over a two-year period and correlated parent reports with objective clinical developmental assessment. Further, our study examined VABS-II in IKH, where hypoglycaemia is not usually associated with significant brain injury; as expected, VABS-II scores showed normal variation in IKH individuals. Our study also found consistency on repeating VABS-II later in life, thereby excluding age-dependent bias. The strength of the association of VABS-II with developmental delay, and the observation that a third of children have abnormal neurodevelopment [3], make a compelling case for using VABS-II routinely in children with CHI whenever there is clinical concern about neurodevelopment.
Although VABS-II demonstrated strong correlation with developmental delay and had high specificity, the formal assessment methods used to diagnose developmental delay were not uniform. As cognitive testing using inventories such as WISC-IV UK can be time-consuming, requires trained psychologists and may be unsuitable for younger children, flexibility in objective developmental assessment was inevitable. Nonetheless, the non-uniformity of formal developmental testing remains a weakness, which may account for the skewed sensitivity values of VABS-II; it does, however, reflect the variability that exists within clinical practice.
The study design did not specify formal developmental testing for all patients, so it is possible that more patients with severe neurodevelopmental delay were tested by formal methods than those with mild delay. However, of those tested formally in our cohort, nearly half were tested routinely; in these patients, testing was not biased by the severity of adverse neurodevelopment. It is therefore possible that the sensitivity of VABS-II for milder adverse neurodevelopment is higher than that observed. An important corollary of our observations is the need for a more rigorous test of the performance of VABS-II in detecting milder forms of developmental delay. We recognise that mild abnormalities due to early-life hypoglycaemia may not be reversed by early detection through VABS-II; however, it is possible that early therapeutic interventions and adaptations in home and learning environments could be beneficial. The identification of relatively subtle neurodevelopmental abnormalities and of the impact of interventions on quality of life would be a logical follow-on study.
In this study, brain imaging was not performed routinely; instead, brain magnetic resonance imaging was prioritised by clinical need, as in a previous observational study [3]. While the absence of brain imaging could be construed as a weakness in the study design, the value of routine brain imaging, a resource-intensive investigation, has not been substantiated for neurodevelopmental screening in CHI other than to identify the topography of lesions [12].
VABS-II had high specificity but low sensitivity for developmental delay. Therefore, in the context of screening, VABS-II may not be relied upon as a primary tool for developmental referral in CHI. However, in clinical practice, developmental concerns are routinely discussed in outpatient follow-up, which would obviate the requirement for VABS-II to be a highly sensitive instrument. Clinical suspicion of delayed development could be construed as a first-line screening test, followed by VABS-II as a second-step screening tool, whose high specificity should then trigger referral for formal developmental assessment.
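For intuition about this two-step approach, a minimal sketch of the serial-screening arithmetic follows: when referral requires both clinical suspicion and a positive VABS-II, sensitivities multiply while a false positive must slip through both tests. The sensitivity and specificity values below are hypothetical and are not estimates from this study.

```python
# Illustrative two-stage (serial) screening arithmetic, assuming the two
# tests are conditionally independent. All values are hypothetical.
sens_clinical, spec_clinical = 0.90, 0.60   # first line: clinical suspicion
sens_vabs, spec_vabs = 0.55, 0.95           # second line: VABS-II

# Referral only if BOTH are positive: sensitivity falls (both must detect),
# but specificity rises (a false positive must pass both tests).
sens_serial = sens_clinical * sens_vabs                      # ~0.50
spec_serial = 1 - (1 - spec_clinical) * (1 - spec_vabs)      # 0.98

print(f"combined sensitivity: {sens_serial:.2f}")
print(f"combined specificity: {spec_serial:.2f}")
```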
It is not clear why males with CHI have lower VABS-II scores than females. Males were more frequent in our cohort; however, gender was controlled as a variable at several levels of the analysis. Our study raises the interesting question of whether gender could be a predictor of intellectual disability independent of hypoglycaemic brain injury. This question cannot be answered within the remit of our study design and needs to be examined in larger cohorts. Recent observations suggest modulation of the sensitivity of the arcuate nuclei to hypoglycaemia by the suprachiasmatic nuclei in male rats [13]; it remains to be seen whether similar mechanisms apply to male brains in children with CHI.
Our study shows that children presenting later with CHI have lower VABS-II scores, correlating with adverse developmental outcomes. This observation remains even after adjustment for male gender, another risk factor for lower VABS-II scores. CHI usually presents in the neonatal period; it is possible that patients with Late CHI had neonatal hypoglycaemia but that the diagnosis was achieved later. However, examination of neonatal records makes this less likely, although not impossible. A previous study associated later presentation with long-term neurological disabilities [4], suggesting that recurrent and unrecognised hypoglycaemia impacts developmental outcomes. In our cohort, it is likely that recurrent hypoglycaemia was missed prior to the eventual diagnosis of CHI. Such hypoglycaemic episodes could be responsible for the greater severity of adverse neurodevelopment in those with Late CHI. The inverse correlation with age at presentation indicates the need for early recognition and treatment of hypoglycaemia in early life [1,14]. As expected, diazoxide unresponsiveness and carriage of genetic mutations, proxy markers of the severity of CHI, were associated with lower VABS-II scores. It therefore follows that treatment response and gene mutation testing in the initial phase of clinical management may provide prognostic markers to determine neurodevelopmental trajectories in CHI. It is well recognised that patients undergoing pancreatectomy for CHI have a high frequency of neurobehavioural deficits [15]. Our study adds further evidence to the impact of severity phenotyping on long-term outcomes.
Conclusions
We have evaluated the performance of the Vineland Adaptive Behavior Scales, 2nd edition (VABS-II) in children with CHI and noted lower scores correlating with the presence of developmental delay with high specificity. Male gender, late age at presentation and severity of CHI are risk factors for adverse outcomes. VABS-II can be reliably used in the neurodevelopmental follow-up of CHI patients to trigger formal developmental assessment.
Covariates of success in quitting smoking: a systematic review of studies from 2008 to 2021 conducted to inform the statistical analyses of quitting outcomes of a hospital-based tobacco dependence treatment service in the United Kingdom
Background Smoking cessation interventions are being introduced into routine secondary care in the United Kingdom (UK), but there are person and setting-related factors that could moderate their success in quitting smoking. This review was conducted as part of an evaluation of the QUIT hospital-based tobacco dependence treatment service (https://sybics-quit.co.uk). The aim of the review was to identify a comprehensive set of variables associated with quitting success among tobacco smokers contacting secondary healthcare services in the UK who are offered support to quit smoking and subsequently set a quit date. The results would then be used to inform the development of a statistical analysis plan to investigate quitting outcomes. Methods Systematic literature review of five electronic databases. Studies eligible for inclusion investigated quitting success in one of three contexts: (a) the general population in the UK; (b) people with a mental health condition; (c) quit attempts initiated within a secondary care setting. The outcome measures were parameters from statistical analysis showing the effects of covariates on quitting success with a statistically significant (i.e., p-value <0.05) association. Results The review identified 29 relevant studies and 14 covariates of quitting success, which we grouped into four categories: demographics (age; sex; ethnicity; socio-economic conditions; relationship status, cohabitation and social network), individual health status and healthcare setting (physical health, mental health), tobacco smoking variables (current tobacco consumption, smoking history, nicotine dependence; motivation to quit; quitting history), and intervention characteristics (reduction in amount smoked prior to quitting, the nature of behavioural support, tobacco dependence treatment duration, pharmacological aids). Conclusions In total, 14 data fields were identified that should be considered for inclusion in datasets and statistical analysis plans for evaluating the quitting outcomes of smoking cessation interventions initiated in secondary care contexts in the UK. PROSPERO registration CRD42021254551 (13/05/2021)
Introduction
Stop smoking interventions are increasingly being incorporated as a systematic and opt-out component of secondary healthcare services in the United Kingdom's (UK's) National Health Service (NHS), driven by a commitment to do so in the NHS's Long Term Plan [1][2][3]. The general specification of the service pathway in acute inpatient settings is: (i) on admission, determine if the patient smokes; (ii) provide advice and treatment to support patient smokers not to smoke whilst in hospital; (iii) provide follow-up support after discharge from hospital to support the patient to quit smoking completely. This service pathway is based on the "Ottawa Model", following the early implementation of a hospital-based tobacco dependence treatment service in Ottawa, Canada 4, and subsequent implementation in the UK by the CURE service in Greater Manchester 5. An evaluation framework for hospital-based smoking cessation services in the UK was developed by consensus among UK stakeholders in acute and mental health NHS hospital Trusts 6, and provides a guide to the key data fields to collect for service monitoring and evaluation. However, there is no specific guidance on what data fields might be important when undertaking "deep dives" into the data to investigate factors that might influence quitting success, which in this review we generically group under the term 'covariates' of quitting success. Without a comprehensive list of potentially influential covariates, there is a risk that important data fields might be omitted from the routine collection of service data or from statistical analyses that aim to investigate quitting outcomes.
The current best evidence on the covariates of tobacco smoking quit success comes from a systematic review by Vangeli et al. 7, which examined worldwide evidence among the adult general population. The evidence presented by Vangeli et al. highlighted decreased quit success among smokers with higher nicotine dependence, smokers who smoked more cigarettes each day, smokers who had made a previously unsuccessful quit attempt, and smokers who had not previously gone without smoking for a week or more. Older age and higher socio-economic status or income were also found by the review to be associated with higher quit success. However, there could also be factors specific to patient health, healthcare setting, and the features of smoking cessation interventions initiated in secondary care settings that Vangeli et al.'s review of factors in the general population did not include. For example, in the British Thoracic Society's national audits of smoking and smoking cessation intervention activities in acute NHS hospital Trusts [8][9][10], the key characteristics that were used to describe variation in whether current smokers received care for their tobacco dependence were gender, age, consultant speciality, and the patients' route of contact with the secondary care service (elective/emergency). This review was designed to support the evaluation of smoking cessation services in secondary care settings in the UK by identifying covariates worth considering in plans for the statistical analysis of quit success following contact with a hospital-based stop smoking advisor. The review was instigated by the need to identify key variables to include in the statistical analysis of quitting outcomes as part of an evaluation of the QUIT hospital-based tobacco dependence treatment service (https://sybics-quit.co.uk). The review was based on the question: 'What patient-, service- and setting-related factors influence the success of a quit attempt, including when initiated in a secondary care setting?' The populations of most interest were the UK and Canada, given that the Canadian Ottawa model is the exemplar for UK services. The review question and population restrictions aimed to capture covariates of quitting success relevant to the UK general population, relevant to people with a mental health condition in any setting and in any country, and relevant to care for tobacco dependence initiated within a secondary acute or mental health service in any country. Within each study identified, the sign of the statistical coefficient for each variable investigated was taken as a measure of the direction of its association with quitting success, and the statistical significance of that coefficient at the 95% level was used to indicate whether the association was potentially identified by chance or not.
Patient and Public Involvement
Patients and the public were not involved in this review.
Study design
We undertook a systematic review of studies that used a statistical model to explore which covariates are associated with quitting success. We followed a systematic review approach, but the review did make compromises, as it was conducted as part of the process of the evaluation of a particular service and needed to fit into the time and resources available. These compromises were guided by the rapid review approach recommended by Tricco et al. 11,12: searching more than one database in one iteration, published literature, searches
limited by date and language, research scope specified by two researchers and a health librarian, and study selection and data abstraction by one reviewer and one verifier. Quality appraisal of studies was based on whether the reporting of statistical analysis was sufficient to provide estimates of the coefficient for each variable investigated and its statistical significance at the 95% level. The review approach taken thus aimed to produce a synthesis of available knowledge that was sufficient to meet the review's aim more quickly, ensuring logistical feasibility alongside restricted timelines, while minimising risk of bias 11,13. The protocol was registered on PROSPERO (CRD42021254551) on 13th May 2021. Reporting follows PRISMA principles (http://www.prisma-statement.org/) (see Extended data 14).
Definition of covariates, effect size, and statistical significance
We defined a covariate of quitting success (that we term a 'factor') as any independent variable that can strengthen, diminish, negate, or otherwise alter the association between independent and dependent variables (in this study, the dependent variables quantify success in quitting smoking) 15.
As the dependent variable is binary (i.e., quit achieved or not by a particular time after initiating the quit attempt), we assumed that the most common statistical analysis conducted would be a form of logistic regression with effect sizes presented as odds-ratios (ORs) or unconverted beta coefficients. For descriptive purposes, when discussing effect sizes we use the following terminology, whereby the binary 'outcome' is quitting success 16. In keeping with the review's aim to identify a list of potentially important covariates of quitting success, we focused on identifying which covariates have been estimated to have a statistically significant relationship with quitting success (with statistical significance defined as p < 0.05) rather than focussing on effect size magnitude. We define 'no relationship' as meaning that a covariate did not have a statistically significant relationship with quit success (i.e., p ≥ 0.05). We did not consider whether a relationship is causal or not, as we were interested only in association. If a study presented both univariate and multivariate analyses, we based the identification of important covariates on the multivariate analysis, as this adjusts for the associations of other variables with quitting success.
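To make this convention concrete, the following is a minimal sketch of fitting a logistic regression of quit success on covariates and converting beta coefficients to odds-ratios; the data, variable names and coefficient values are invented for illustration and do not come from any study in this review.

```python
# Minimal sketch: logistic regression of quit success on covariates, with
# beta coefficients converted to odds-ratios (ORs). All data are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.integers(18, 75, n),
    "nicotine_dependence": rng.integers(0, 11, n),  # hypothetical 0-10 score
})
# Simulate a binary quit outcome whose log-odds fall with dependence.
log_odds = -0.5 + 0.02 * df["age"] - 0.15 * df["nicotine_dependence"]
df["quit"] = (rng.random(n) < 1 / (1 + np.exp(-log_odds))).astype(int)

X = sm.add_constant(df[["age", "nicotine_dependence"]])
fit = sm.Logit(df["quit"], X).fit(disp=0)

# exp(beta) is the OR; OR > 1 means higher odds of quitting, OR < 1 lower,
# and p < 0.05 is the significance threshold used in this review.
print(np.exp(fit.params))
print(fit.pvalues)
```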
Eligibility criteria
Inclusion was restricted to studies published in peer-reviewed journals, in English, and dating from 2008, the year of the National Institute for Health and Care Excellence Guidance PH10 (for England and Wales), in which Recommendation 8 stated that smoking cessation advice and support should be available in secondary care settings for everyone who smokes. Reviews were not included, but we checked their references for any relevant studies. We included studies that presented statistical estimates of the effects of covariates on the success of a quit attempt.
We searched for studies statistically assessing quit attempts in three contexts: (a) among the general population, instigated in any setting within the UK; (b) among people with a mental health condition, instigated in any setting and in any country; (c) initiated within a secondary acute or mental health service in any country. The scope of (a) was limited to the UK for relevance and feasibility, given the large number of studies worldwide.
Information sources
Searches were conducted in April 2021. A focused search strategy combining free-text terms with subject headings (e.g., MeSH) was run and translated for optimal effectiveness across the following databases: MEDLINE (including In-Process and Epub ahead of print); EMBASE; PsycINFO (all via Ovid); CINAHL (via EBSCO) and the Cochrane Library.
Search process
The search strategy was constructed around the facets of: Smoking cessation AND quitting success AND (UK OR mental health OR hospital setting). Due to the time-constrained nature of this review, searches prioritised specificity over sensitivity, but to mitigate the risk of missing relevant papers the strategy was validated against six studies already known by the authors to be potentially relevant: Le Grande et al. 17, Lubitz et al. 18, Ussher et al. 19, Smit et al. 20, Vangeli et al. 7, and Zhou et al. 21. All six studies were retrieved by the search (see the Extended data 14). Database search results were extracted directly to reference management software.
Study selection
Screening for studies relevant to each of our three contexts (a-c) was performed simultaneously, with included studies marked for relevance to each. During the data extraction process, we began to develop an organisational framework by categorising studies according to our three contexts, the covariates investigated and their effects on quit success. The organisational framework was then revised as results synthesis progressed. Covariates were grouped according to our final organisational framework.
Results
From 2,499 retrieved records, 29 studies were included in the synthesis (Figure 1), representing 21 studies relating to the UK general population context, six studies relating to mental health in the UK or Canada, and two studies relating to secondary care in the UK or Canada. A list of excluded studies with reasons is available in Table S1 in Supplement 2 of the Extended data 14.
Description of included studies
The characteristics of the included studies and participants' characteristics are summarised in Table S2 and Table S3 in Supplement 2 of the Extended data 14. Most studies had prospective, cross-sectional or retrospective designs; three studies were randomised controlled trials (RCTs).
Methodological differences between studies
Methodological differences are reported in Table S4 in Supplement 2 of the Extended data 14. Smoking cessation was assessed in a variety of different ways across studies. The time horizon for reporting smoking abstinence following a quit attempt ranged from 2 weeks to 1 year. Abstinence was assessed as both point-prevalent and continuous, both by self-report (most frequently used for continuous abstinence) and validated by expired air carbon monoxide (CO; most frequently used to verify 7-day or 2-week point-prevalent abstinence, at ≤10 or ≤8 ppm). If a study conducted separate analyses for different durations of abstinence following a quit attempt, we reported the findings from each analysis independently. All studies reported odds-ratios from a logistic regression, and two studies reported beta coefficients.
In terms of sample, the majority of UK studies were of the general population (15 studies) or community smoking cessation services (four studies), with three studies examining samples with specific characteristics (i.e., pregnant women, people aged 25-59 years, and English residents of Bangladeshi origin; see Table S3 in Supplement 2 of the Extended data 14). Mental health population studies were from Canada and sampled from people attending community mental health services (four studies) or from the general population (two studies). The two secondary care studies recruited participants from a Canadian hospital-based smoking cessation clinic or a UK cardiac rehabilitation setting.
Covariates of success in quitting tobacco smoking
Figure 2 summarises the covariates that had a statistically significant relationship with quit attempt success. Table S5 in Supplement 2 of the Extended data 14 summarises the relationships between covariates and quit success. Table S6 in Supplement 2 of the Extended data 14 provides a full description of the size and direction of covariate effects and the corresponding statistical significance.
Demographics.
Overall, 16 studies included demographic covariates; the factors related to quit outcome were age, sex, ethnicity, socioeconomic characteristics, and the smoker's relationship status, cohabitation and social network situation (Table S5 and Table S6 in Supplement 2 of the Extended data 14).
Age. Five studies showed higher odds of quit success with increasing age [22][23][24][25][26]. Six analyses reported in five papers found no relationship between age and quit success in the UK general population [27][28][29][30][31], two studies found no relationship for age in people with mental health conditions 32,33, and two studies found no relationship in a secondary care setting 34,35.
Sex. There were inconsistent findings for sex: in the UK general population, three studies reported higher odds of quitting success for males 22,24,29 and two studies reported higher odds of quitting success for females 25,27. Two studies in an outpatient setting (cardiology and mental health services) found higher odds of quitting success in males 34,36. Six studies found no relationship between sex and quitting success in the UK general population 23,26,28,30,31,37, and two studies found no relationship in people with mental health conditions 32,33.
Ethnicity. One study reported higher odds of quitting success for Black ethnicity vs. White British ethnicity 24. One study reported no relationship between ethnicity and quitting success in the UK general population 30.
Socioeconomic characteristics.
Definitions of socioeconomic characteristics varied across the studies identified. Higher odds of quitting success were reported for people: with higher social grades 24,[28][29][30]38; living in less deprived areas 26; with higher income 37,39; with higher occupational grades 22,39; with more education 27,39; who paid for prescriptions vs. were exempt 22,23; with a higher reading level 30; whose mothers worked in higher grade occupations during their childhood 39; and who did not live in social housing 40. In the UK general population, one study reported no relationship between quitting success and the geographic Index of Multiple Deprivation (IMD) score for the location of the smoking cessation service 22, five studies reported no relationship between quitting success and education 25,27,30,37,39, one study for prescription exemption status 25, and one study for employment status 25. In a secondary care setting, two studies reported no relationship between quitting success and the employment status of patients 34,35.
Relationship status, cohabitation and social network. A study in the UK general population found higher odds of quitting success for people who were single, divorced or separated vs. married or living with a partner 25. However, a study of patients in care for cardiac rehabilitation found higher odds of quitting success for people who were married vs. single 35. In the UK general population, studies reported finding no relationship between quitting success and marital status 30, cohabitation status 39, or number of household smokers 24,25. One study of people with severe and persistent mental illness reported higher odds of quitting success for people with more social support for quitting from family/friends 32.
Health and healthcare setting. There were eight studies that investigated the association between quitting success and the smoker's health or the healthcare setting in which the quit attempt was instigated; five reported covariates that had statistically significant relationships to quitting success (Table S5 and Table S6 in Supplement 2 of the Extended data 14): level of cardiovascular risk; number of comorbidities; having a mental health diagnosis; having a history of depression; having a history of substance abuse.
Physical health. One study in an outpatient setting reported higher odds of quitting success for patients with low (vs. moderate or high) cardiovascular risk and patients with fewer comorbidities 35. However, no relationship was found between quitting success and moderate (vs. high) cardiovascular risk 35.
Another study found no relationship between quitting success and the number of comorbidities that a patient had 34. One study reported no relationship between quitting success and the clinical setting in which the patient was located at the time they were referred to stop smoking support (cardiology services/clinics vs. respirology services/clinics vs. other hospital services/clinics) 34.
Mental health. Lower odds of quitting success were reported for people with: a primary diagnosis of anxiety disorder vs. no disorder 33; recurrent, current or recent depression vs. no history of depression 41; a history of opiate abuse vs. a history of alcohol abuse 33; and a history of alcohol abuse, opiate abuse and marijuana abuse vs. no history of substance abuse 42. No relationship with quitting success was reported in three studies that investigated primary mental health diagnosis 32,36,42, two studies of PHQ-9 score 32,43, one study of having a history of substance abuse 32, one study of HADS anxiety score and HADS depression score 35, and one study of history of psychiatric disorder and history of co-occurring substance use and psychiatric disorder 34.
Tobacco smoking variables. There were 17 studies in this category; 14 reported factors significantly related to quitting success (Table S5 and Table S6 in Supplement 2 of the Extended data 14): daily cigarette consumption, carbon monoxide (CO) level at baseline, level of nicotine dependence, the most difficult situation not to smoke, determination/motivation to quit, and the history of previous attempts to quit smoking.
Current and previous cigarette consumption. Higher odds of pregnant women quitting smoking successfully were reported among women with lower pre-pregnancy cigarette consumption 39. No relationship between quitting success and daily cigarette consumption prior to quitting was identified in one study in the UK general population 29, two studies of people with a mental health condition 32,33 and one study in a secondary care setting 34. No relationship between quitting success and the age at which someone started to smoke regularly (age at smoking initiation) was reported by one study in the UK general population 30, two studies in people with a mental health condition 32,33, and one study in a secondary care setting 34.
Carbon monoxide (CO) level. The single study to find a relationship between quitting success and CO level prior to quitting was of a tailored smoking cessation programme for individuals with substance use disorders and mental illness; lower CO levels when the quit attempt began were associated with higher odds of quitting success 42. No relationship between quitting success and CO level was found by one study in people with a mental health condition 33, and one study in a secondary care setting 34.
Level of nicotine dependence. The 11 studies which identified statistically significant associations between quitting success and nicotine dependence prior to the quit attempt found mixed results: higher odds of quitting in smokers with lower nicotine dependence were found by nine studies in the UK general population 22,[25][26][27][28][29][30]37,44 and two studies of smoking cessation delivered in an outpatient setting 32,33. No relationship between quitting success and nicotine dependence was found by one study in the UK general population 24, two studies in people with a mental health condition 36,42, and one study in a secondary care setting 34. One study in the UK general population found higher odds of quitting success in smokers whose most difficult situation not to smoke was when feeling the urge to smoke, but the same study found no relationship with quitting success for when socialising, first thing in the morning, when angry or frustrated, when relaxing, and for 'any other reason' 30. One study found no relationship between quitting success and the reported enjoyment of smoking 28.
Motivation to quit. Two studies in the UK general population found higher odds of quitting successfully for smokers who reported a determination to quit 24 or being motivated to quit 37. No relationships between quitting success and reported readiness to quit were found in one study in the UK general population 30, one study in people with a mental health condition 32, and one study in a secondary care setting 34. One UK general population study found no relationship between quitting success and the reported reasons for quitting, main advantage of quitting, or main disadvantage of quitting 30.
Quitting characteristics. In terms of previous quit attempts, three studies in the UK general population 27,29,30 and one study in a mental health setting 33 found higher odds of quitting successfully among smokers who had made more previous quit attempts or had previously been abstinent for longer periods. Specifically, higher odds of quitting successfully were found among those who had previously quit smoking for 3 months or more 30, made ≥2 quit attempts in the past 6 months 29, and had a longer duration of abstinence at the last attempt to quit 27,33. Three studies in the UK general population reported no relationship between quitting success and the number or duration of previous quit attempts 25,29,45, as did one study in people with a mental health condition 32, and one study in an outpatient setting 34. One study in a UK general population reported no relationship between success in the current quit attempt and the time since the start of the last unsuccessful quit attempt 29.
Intervention characteristics. There were 21 studies that investigated the influence on quitting success of characteristics of the attempt to quit smoking; 17 studies reported factors significantly related to the success of quit attempts (Table S5 and Table S6 in Supplement 2 of the Extended data 14). Factors related to the behaviour and choices of the individual smokers were whether smokers reduced or temporarily abstained from smoking before making a quit attempt, and various descriptors of the nature of support for the quit attempt. Pharmacological characteristics of the quit attempt were the type of pharmacological aid used, whether this was used alongside behavioural support, and the degree of compliance of the smoker making the quit attempt with the recommended guidelines for use of the pharmacotherapy chosen.
Reduction in amount smoked and/or temporary abstinence before quitting. Two studies found higher odds of quitting successfully for smokers who reduced the amount they smoked before attempting to quit smoking 29,46, including if this was with the support of pharmacotherapy 46. One study found no relationship between quitting success and whether the quit attempt was spontaneous, i.e. initiated as soon as the decision to quit had been made (compared with not making a spontaneous quit attempt) 29, and one study found no relationship between quitting success and whether the smoker reduced the amount smoked prior to quitting (compared with quitting without first reducing the amount smoked) 27.
Behavioural support type, setting and mode of contact. For the UK general population, higher odds of quitting were found for smokers who used a smoking cessation clinic and websites (compared with no support) 40,47, for smokers who used pharmacotherapy alongside help from a health professional or specialist smoking cessation advisor (compared with no support) 47, and for smokers who received support in specialist clinics 22,45, in the community (compared with other settings) 25,26, and with group support (compared with one-to-one or other support) 22,23.
Lower odds of quitting were reported for smokers who used drop-in support (compared with one-to-one support) 45, and telephone support (compared with no support) 40. Other studies found no relationships between quitting success and the receipt of in-person behavioural support 40, the use of self-help materials 40, having one-to-one support 48, the setting of support for smoking cessation 22,23,26,45, having group therapy, or receiving support from a doctor or other health professional 47.
Tobacco dependence treatment duration and number of contacts. Higher odds of quitting success were associated with the number of contacts that a smoker had with a stop smoking advisor in the UK general population 24, and in studies of people with a mental health condition 33,36,42. Other studies found no relationship between quitting success and treatment duration or number of contacts 22,32,34.
Pharmacological aids. In the UK general population, higher odds of quitting success were found for smokers who used NRT (compared with no NRT/no cessation aids) 22,40,45, combination NRT (compared with single NRT) 31, varenicline (compared with no varenicline, no medication, or NRT) 22,26,40,45, bupropion (compared with no medication and NRT) 22,25, and for the use of any pharmacotherapy in general 47,49. There were also higher odds of quitting success with the use of e-cigarettes (compared with no e-cigarettes, no cessation aid, and NRT) 37,40,48. There was also evidence in the UK general population of higher odds of quitting successfully when smokers have greater compliance with the recommended guidelines for pharmacotherapy use 24.
One study in the UK general population found lower odds of quitting successfully for smokers who bought NRT over the counter (compared with no cessation aids) 49. Other studies in the UK general population found no relationships between quitting success and the use of prescription NRT 40, NRT bought over the counter 40, bupropion 40,45, or e-cigarette use 37. For people with a mental health condition, no relationship with quitting success was found for the use of pharmacotherapy 32,33,36, or the number of weeks of NRT, varenicline and bupropion use 33.
Discussion
The review has identified a list of covariates worth considering in plans for the statistical analysis of quitting success following a smoking cessation intervention initiated in a secondary care setting in the UK. The findings support and supplement previous reviews that have investigated covariates of quitting success, and add to the evaluation framework for hospital-based smoking cessation services in the UK 6 by highlighting the data fields important to consider in "deep dives" into service data to investigate the reasons for variation in quitting outcomes.
This review formed part of the larger evaluation of the QUIT hospital-based tobacco dependence treatment service in South Yorkshire and Bassetlaw, England (https://sybics-quit.co.uk), and supported the development of the statistical analysis plan for the evaluation. The service pathway being implemented for inpatient settings in England identifies people who smoke who are admitted to hospital, after which they receive an assessment by an in-house tobacco dependency advisor and are started on NRT 50. They are discharged from hospital with a two-week supply of NRT and have their care transferred from the hospital-based team to local community stop smoking services. The service pathway is still being implemented and has experienced a wide range of implementation barriers 2,51,52. These barriers will affect quit success as they determine who has contact with the new hospital-based service and the effectiveness of the service in leading patients who smoke to quit smoking. Some of these factors could be identified from the general implementation science literature rather than the specific smoking cessation literature, which was the focus of this review. Another part of the evaluation of the QUIT service has conducted interviews, workshops and surveys with patients and hospital staff to understand the wider determinants of quit success beyond those identified by this review.
Strengths and limitations
The strengths of this review lie in the rapid but systematic review approach taken 11,12 and in the design of the research question and population restrictions to be specific to smoking cessation interventions initiated in a secondary care setting in the UK. The limitations lie in the compromises made as part of the review approach: for example, our focus only on studies published in English, the absence of grey literature searches, and the limited critical appraisal of included studies. The review only included studies from the UK and Canada, which was intended to limit the influence of variation in service delivery internationally, while noting that our interest was specific to the UK. Whilst this restriction increased relevance, only two studies were identified from a secondary healthcare setting. It is possible that expanding the search worldwide would have identified more covariates specific to understanding the influence of health and the healthcare setting on quitting success. However, healthcare systems differ widely worldwide, and our decisions to limit the scope of this review are in line with recommended best practice for rapid reviews 11,12.
Informing real-world data collection: supporting clinical care and public health policy
Improvement of smoking cessation interventions embedded into NHS secondary care services requires the use of real-world data for service monitoring and ongoing evaluation. There will be incremental improvement in services over time, including attempts to address factors observed to influence the success of quit attempts. This review provides a starting point for understanding what data fields might be important to collect to ensure that sufficient information is available to guide activities aimed at service improvement. The NICE real-world evidence framework 53 encourages service evaluators to identify the data fields needed through a systematic, transparent and reproducible search. The current review of the covariates of quitting success is part of that systematic approach and could aid the planning of data fields to be collected.
Evidence-based care: trial-based and real-world evidence
When conducting an evaluation of intervention efficacy or comparative effectiveness, be it based on a randomised or non-randomised study design (noting that service evaluations are not permitted to randomise patients to treatment assignment), developing a statistical analysis plan is an important step towards reducing potential bias in the evidence base 53. Service evaluations and associated real-world evidence are often dependent on the real-world data available, hence the importance of considering which covariates to collect data on. For a statistical analysis plan, the interest is usually in adjusting estimates of service outcomes for the influence of confounding variables, but investigations can become more complex by situating covariates within a causal framework for evaluating service outcomes, for example using directed acyclic graphs 53. The list of covariates identified in the current review could aid the development of a range of plans for statistical analysis to inform the evidence base, focussed either on association or causality depending on the intention of the analysis and the required evidence base.
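As an illustration of the causal-framework step mentioned above, one way to encode assumed relationships between covariates, service contact and quitting is a directed acyclic graph, from which an adjustment set can be read off. The graph below is a deliberately simplified, hypothetical example, not the evaluation's actual model.

```python
# Hypothetical, simplified DAG relating covariates to quit success, to show
# how a causal framework can guide covariate adjustment. Edges are assumptions.
import networkx as nx

dag = nx.DiGraph([
    ("socioeconomic_status", "nicotine_dependence"),
    ("socioeconomic_status", "quit_success"),
    ("nicotine_dependence", "service_contact"),
    ("nicotine_dependence", "quit_success"),
    ("service_contact", "quit_success"),
])
assert nx.is_directed_acyclic_graph(dag)

# In this toy graph, confounders of the service_contact -> quit_success
# effect are the common direct causes of both exposure and outcome.
confounders = set(dag.predecessors("service_contact")) & set(
    dag.predecessors("quit_success"))
print(confounders)  # {'nicotine_dependence'}
```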
Understanding service complexity: informing adaptive logic models
There is increasing recognition in the real-world implementation and evaluation of healthcare interventions of the complexity of even seemingly "simple" treatments. Healthcare has been described as a complex adaptive system, which requires understanding of multiple elements and the way in which they interact in order to lead to transformation 54. In common with many evaluations, evaluations of tobacco dependence treatment services in the UK draw on a theory of change approach in order to aid understanding of implementation and the effects of the tobacco dependence treatment service on outcomes for smoking and health 55. The data fields identified during this review help to inform the development of service logic models 56, which act as a visual summary of the complexity by which the intervention produces outcomes. These models can help to build our conceptualisation and understanding of hypothesised causal links underpinning quitting smoking 57.
Conclusion
In total, 14 covariates were identified as having a statistically significant association with success in quitting smoking, and are therefore worth considering in plans for the statistical analysis of quit success following contact with a smoking cessation intervention initiated within secondary healthcare services in the UK. These covariates also indicate the data fields it might be important to collect as part of the ongoing monitoring and evaluation of such services.
Extended data
Open Science Framework: Supplementary information for "Covariates of success in quitting smoking: a systematic review of studies from 2008 to 2021 conducted to inform the statistical analyses of quitting outcomes of a hospital-based tobacco dependence treatment service in the United Kingdom". https://doi.org/10.17605/OSF.IO/UW8DZ 14.
This project contains the following extended data:
- Supplement 1: search strategies in full
- Supplement 2: results tables
- Supplementary Table S1: Studies excluded at full text screening
- Supplementary Table S2: Characteristics of included studies
- Supplementary Table S3: Participant baseline characteristics of included studies
- Supplementary Table S4: Outcome measurement and analyses in included studies
- Supplementary Table S5: Relationships between covariates and quit success
- Supplementary Table S6: Size and direction of covariate effects and the corresponding statistical significance

Reviewer Report
The authors categorized covariables into three categories to explore the determinants of quitting smoking. The background and rationale of the study were clear and comprehensive. In addition, the authors reported the methodology of the study in detail. The discussion and results were satisfactory and give readers insight into the subject. I have a few comments: "Disagreements were resolved through discussion, with no need to involve a third reviewer." Why did the authors not involve a third reviewer?
The numbers in Figure 1 show 95 full texts screened, of which 66 were excluded, so the number included in the synthesis should be 29. However, the number given by the authors in the figure is 30 studies included in the synthesis, while their categorisation by general population, mental health and secondary care is 21+6+2=29.
○ "Age.All studies showed higher odds of quit success with increasing age.Six analyses reported in five papers found no relationship between age and quit success in the UK general population, two studies found no relationship for age in people with mental health conditions, and two studies found no relationship in a secondary care setting." I did not understand how all studies showed a high odd of quitting when the authors mentioned studies showing no relationship.Does it mean that despite heigh odds the risk association was not significant?It needs clarification.
○ The authors should identify the limitations of the study.
Are the rationale for, and objectives of, the Systematic Review clearly stated? Yes
Are sufficient details of the methods and analysis provided to allow replication by others? Yes
Is the statistical analysis and its interpretation appropriate? Not applicable
Are the conclusions drawn adequately supported by the results presented in the review? Yes
Competing Interests: No competing interests were disclosed.
Reviewer Expertise: community medicine and public health, epidemiology of HIV/AIDS and STIs
I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard; however, I have significant reservations, as outlined above.
Reviewer Report 27 June 2023
https://doi.org/10.3310/nihropenres.14562.r29469
Whilst the information provided in this systematic review is academically sound and very useful, an additional study could be undertaken to look at the wider determinants of quit success beyond those identified using the search strategy for this systematic review.
Are the rationale for, and objectives of, the Systematic Review clearly stated? Yes
Are sufficient details of the methods and analysis provided to allow replication by others? Yes
Is the statistical analysis and its interpretation appropriate? I cannot comment. A qualified statistician is required.
Are the conclusions drawn adequately supported by the results presented in the review? Partly
Competing Interests: No competing interests were disclosed.
Reviewer Expertise: Tobacco control policy and implementation of tobacco dependency services
I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.
Figure 1. PRISMA flow diagram of study inclusion.
Figure 2. List of covariates found to have a statistically significant association with quitting success in at least one study. Table S6 in Supplement 2 of the Extended data 14 provides a full description of the size and direction of covariate effects and the corresponding statistical significance.
Titles and abstracts were screened by one of three reviewers (EH, MF or SB); 70% of abstracts were checked by another reviewer (EH or MF). Full texts were assessed for inclusion by one reviewer and checked by another reviewer (EH or MF). Disagreements were resolved through discussion, with no need to involve a third reviewer.

Data extraction and synthesis
EH and MF designed and tested a spreadsheet for data extraction. Data were extracted and charted by EH and checked in regular meetings with MF and DG. The following data items were extracted: reference information (first author and date), study type, country, setting (e.g., hospital type/department/ward), participant baseline characteristics (e.g., age, sex, socio-economic status, reason for admission, cigarettes/day smoked, number of previous quit attempts, nicotine dependence), and measure of quit success (point prevalence abstinence or continuous abstinence, at any time point but recorded separately per time-point). Relevant characteristics of the analysis were noted: for example, method of data collection, sample size, time horizon, cessation time-point, measure of abstinence, whether ORs and model coefficients were captured, the model type, and whether a univariate or multivariate model was used. Detailed statistical results were also extracted: the whole model, where reported, including intercept and other coefficients, dependent and independent variables, any reported p-values, and goodness of fit statistics, if reported.

Data availability: Open Science Framework: Supplementary information for "Covariates of success in quitting smoking: a systematic review of studies from 2008 to 2021 conducted to inform the statistical analyses of quitting outcomes of a hospital-based tobacco dependence treatment service in the United Kingdom". https://doi.org/10.17605/OSF.IO/UW8DZ 14. Data are available under the terms of the Creative Commons Attribution 4.0 International license (CC-BY 4.0).
ApoE polymorphisms in narcolepsy
Background Narcolepsy is a common neuropsychiatric disorder characterized by increased daytime sleepiness, cataplexy and hypnagogic hallucinations. Deficiency of the hypocretin neurotransmitter system was shown to be involved in the pathogenesis of narcolepsy in animals and men. There are several hints that neurodegeneration of hypocretin producing neurons in the hypothalamus is the pathological correlate of narcolepsy. The ApoE4 allele is a major contributing factor to early-onset neuronal degeneration in Alzheimer disease and other neurodegenerative diseases as well. Methods To clarify whether the ApoE4 phenotype predisposes to narcolepsy or associates with an earlier disease onset, we have genotyped the ApoE gene in 103 patients with narcolepsy and 101 healthy controls. Results The frequency of the E4 allele of the ApoE gene was 11% in the patient and 15% in the control groups. Furthermore, the mean age of onset did not differ between the ApoE4+ and ApoE4- patient groups. Conclusion Our results exclude the ApoE4 allele as a major risk factor for narcolepsy.
Background
Narcolepsy is a frequent, debilitating neuropsychiatric disorder characterized by increased daytime sleepiness, cataplectic episodes and hypnopompic and hypnagogic hallucinations. The occurrence of narcolepsy is sporadic; however, a proportion of cases is familial with an autosomal-dominant type of inheritance. In contrast to the normal population with an HLA-DR2 allele frequency of ∼30%, over 90% of narcoleptics type HLA-DR2 + and HLA-DQB1*0602 + [1,2]. The biological significance of this association remains elusive, implicating autoimmune aspects in the etiology [3]. In two animal models the involvement of the hypocretin (orexin) neurotransmitter system was demonstrated. Murine narcolepsy induced by knocking out the hypocretin gene shows symptoms corresponding to human narcolepsy [4]. Doberman pinscher and Labrador breeds with autosomal recessively inherited narcolepsy each share a splice-site mutation in the hypocretin-receptor 2 gene [5]. Although hypocretin levels in the CSF of most narcoleptics are decreased or not detectable [6], no causative mutations in either hypocretin receptor gene have been found in humans. A single patient with atypical early-onset narcolepsy carries a dominant signal peptide mutation in the preprohypocretin gene [7]. Furthermore, a rare sequence variant in the 5'UTR of the preprohypocretin gene has been shown to be a risk factor for narcolepsy [8]. Recent reports describe a nearly complete loss of hypocretinergic neurons in the brains of narcoleptic patients, as well as scar tissue in areas normally occupied by the hypocretin-producing cells [7,9].
Among several neurodegenerative diseases, the E4 allele of the ApoE gene has been recognized as a predisposing genetic risk factor, mainly influencing the age of manifestation of Alzheimer's disease. The ApoE protein is a component of VLDL particles and chylomicrons, and its primary role is lipid transport [10,11]. The pathophysiological effect of ApoE4 in neurodegeneration has not yet been clarified and may possibly involve diminished neuroprotection against amyloid depositions, reactive oxygen species or excitotoxins [12]. We have tested the hypothesis of the involvement of the E4 allele of ApoE in the etiology of narcolepsy.
Methods
Patients were recruited from the University Hospital in Mainz and St. Josef Hospital in Bochum, Germany. All but two patients suffered from cataplexies. All patients fulfilled the diagnostic criteria of the Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM IV) and the International Classification of Sleep Disease of the American Sleep Disorders Association for narcolepsy. For further details see Gencik et al. 2000 [8]. The control group was composed of 101 neurologically investigated healthy individuals. All participants gave written, informed consent.
ApoE genotyping was performed as described [13]. The HLA-DR2 status of the patients was determined previously: 94 patients typed HLA-DR2 + and 9 patients HLA-DR2 - [8]. Genotype and allele frequencies were compared with the χ2 test. The age of onset was known in 60 patients with E4 - genotypes and 13 patients with E4 + genotypes; these data were compared by the Mann-Whitney test.
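For concreteness, a minimal sketch of the two comparisons described follows, using SciPy. The contingency counts are placeholders chosen only to be consistent with the reported allele frequencies (11% vs. 15% E4), and the onset ages are invented, so neither reproduces the study's actual data.

```python
# Sketch of the two statistical comparisons described above. The counts and
# ages are placeholders, not the study's data.
import numpy as np
from scipy.stats import chi2_contingency, mannwhitneyu

# Chi-squared test on allele counts (rows: patients/controls; columns: E2/E3/E4;
# two alleles per person, so 103 patients -> 206 alleles, 101 controls -> 202).
allele_counts = np.array([
    [18, 165, 23],   # narcolepsy patients (E4: 23/206 ~ 11%)
    [16, 155, 31],   # healthy controls    (E4: 31/202 ~ 15%)
])
chi2, p_allele, dof, expected = chi2_contingency(allele_counts)
print(f"chi2 = {chi2:.2f}, p = {p_allele:.3f}")

# Mann-Whitney U test comparing age of onset between E4- and E4+ patients.
onset_non_e4 = np.array([12, 15, 18, 20, 22, 25, 30])  # placeholder ages
onset_e4 = np.array([14, 19, 21, 24, 28])
u, p_onset = mannwhitneyu(onset_non_e4, onset_e4)
print(f"U = {u:.1f}, p = {p_onset:.3f}")
```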
Results and discussion
Until now, only a few factors have been recognized to predispose to narcolepsy: on the one hand, the major association with the HLA-DR2 allele; on the other, specific TNFα alleles [14] as well as the 3250T allele of the preprohypocretin gene [8] as minor contributors to the etiology. No exogenous risk factors for narcolepsy have been recognized so far.
Recently, a novel neurotransmitter system was shown to be involved in narcolepsy in the canine disease model and in the orexin knock-out mouse. Autopsy reports of narcoleptic dogs and patients with narcolepsy pointed to possible neurodegenerative processes in areas with hypocretin-producing neurons. Taken together, the pathophysiology of narcolepsy seems to involve an autoimmune-driven neurodegeneration of as yet unknown cause [3,15].
In order to specify the role of ApoE isoforms in narcolepsy, we determined the allele and genotype frequencies of the E2, E3 and E4 alleles in patients with narcolepsy and healthy controls. Allelic and genotypic frequencies are shown in Table 1. No statistically significant differences were detected between narcoleptics, the DR2 subgroups and the controls. Although not significant, a trend towards an increased E3 frequency was seen among the DR2 + subgroup of narcoleptics. Furthermore, the exact age of onset could be determined in 73 narcoleptics: 60 patients had a non-E4 phenotype and 13 patients had an E4 phenotype. The manifestation ages were 19.6 ± 9.9 years (mean ± SD) and 21.4 ± 8.6 years for the non-E4 group and E4 group, respectively. The mean difference of 1.8 years was not statistically significant (p = 0.44).
Conclusion
The presented results indicate that the E4 isoform of the ApoE protein, which is an important risk factor for complex traits such as Alzheimer's disease, Parkinsonism and other neurodegenerative disorders, is not involved in the pathophysiological processes of narcolepsy.
Declaration of competing interests
None declared.
Exploring the Sustainability of Upcycled Foods: An Analysis of Consumer Behavior in Taiwan
Given the urgent climate change and food security challenges, upcycled food products are crucial for sustainable food production and waste management. This study investigates Taiwanese consumer behavior towards upcycled foods using the value–attitude–behavior (VAB) theory, focusing on “product knowledge”, “green perceived quality”, and “price sensitivity”. Of the 335 distributed surveys, 320 valid responses (95.5% effectiveness) were analyzed. The results indicated that eco-conscious values strongly influenced consumer attitudes and anticipated guilt (β = 0.647, p < 0.001; β = 0.691, p < 0.001), shaping behavioral intentions (β = 0.290, p < 0.001). Attitudes significantly correlated with intentions, validating the VAB framework. However, anticipated guilt showed a minimal impact (β = 0.029, p = 0.629), revealing complex consumer emotions. Green perceived quality and product knowledge were the key decision-making factors (β = 0.193, p < 0.001; β = 0.146, p < 0.001). Surprisingly, price sensitivity positively influences intentions (β = 0.764, p < 0.001), suggesting the consumer prioritization of quality and environmental values over price. These insights inform strategies for businesses to enhance consumer engagement and sustainability alignment, advancing progress towards Sustainable Development Goals (SDGs).
Introduction
Amidst the pressing global challenges of climate change and food security, greenhouse gas (GHG) emissions stemming from food production and waste management have emerged as critical issues that require immediate attention. Garske et al. [1] and Rodriguez Garcia and Raghavan [2] highlighted the substantial impact of this dilemma on climate change and food security.
According to the Food and Agriculture Organization of the United Nations (FAO) [3], approximately one-third of the world's food supply is wasted annually during production and consumption, resulting in 8-10% of total greenhouse gas emissions, while an estimated 800 million people worldwide grapple with hunger. In the pursuit of the Sustainable Development Goals (SDGs), particularly "ending hunger" (SDG2) and achieving "responsible consumption and production" (SDG12), the concept of upcycling and repurposing food has emerged as a pivotal element in realizing these aspirations [4,5].
The Upcycled Food Association (UFA) [6] defines upcycled food as products primarily utilizing ingredients that would otherwise go to waste, boasting a transparent supply chain and positive environmental impacts. A growing body of research has investigated the demand for upcycled foods and their ecological and societal implications. Coderoni and Perito [7], Thorsen et al. [8], and Grasso et al. [9] indicated that consumers favor these products because of the dual benefits of waste reduction and the preservation of nutritional content. Nogueira et al. [10] highlighted the nutritional benefits of upcycled foods, which can enrich the diets of low-income households.
In the realm of fostering a circular economy, Peschel and Aschemann-Witzel [11] highlight the positive economic role of repurposing solid waste in agricultural food sectors. For instance, in 2023, Salt and Straw, an American ice cream brand, collaborated with multiple organizations to release five upcycled ice cream flavors, estimated to save 38,000 pounds (approximately 17,000 kg) of food waste annually. Companies such as Renewal Mill have committed to reusing byproducts from plant milk production, creating flour and baking products to curb the environmental impacts of food waste. With increasing environmental consciousness among consumers, their inclination towards upcycled products has grown, as supported by Perito et al. [12], Bhatt et al. [13], and Asioli and Grasso [14]. Research by Goodman-Smith et al. [15] reinforces that transforming food into new products effectively combats waste while enhancing consumer acceptance of upcycled foods.
Despite the evident advantages of upcycled foods, the academic exploration of consumer purchasing intentions remains limited. Existing research on upcycled foods primarily focuses on consumer perceptions, acceptance, and purchasing behavior. Goodman-Smith et al. [15] found that consumers generally hold positive attitudes towards upcycled foods, but their understanding of the specific benefits and manufacturing processes is limited. Bhatt et al. [16] emphasized the importance of upcycled food labeling, while Stelick et al. [17] highlighted the impact of sustainability information and nutritional content on consumer purchase intentions.
However, these studies are predominantly based on Western markets, with limited research on consumer behavior in other regions, particularly Asia.Additionally, research on the role of price sensitivity and product knowledge in purchasing decisions related to upcycled foods remains scarce.
This study aims to address these research gaps by investigating Taiwanese consumers' attitudes, purchase intentions, price sensitivity, and product knowledge of upcycled foods, providing new perspectives and contributions to the existing literature.
The value-attitude-behavior (VAB) model, introduced by Homer and Kahle [18], serves as a common framework for understanding consumer behavior. The VAB model posits that values influence behavior through attitudes, a concept with enduring relevance and practical application. Prior studies by Issock et al. [19], Kim and Hall [20], Cheung and To [21], Ma and Chang [22], and Kim et al. [23] efficiently utilized this model to probe sustainable consumption behaviors, highlighting its predictive capabilities for consumer behavior. This study delves into its empirical application in sustainable consumption, focusing on upcycled food. The foundational notion that abstract values can shape individual actions via attitude formation, as articulated by Homer and Kahle [18], resonates with contemporary research by scholars such as Issock et al. [19], Kim and Hall [20], Chang et al. [24], Lee et al. [25], and Wang et al. [26], exemplifying the versatility of the VAB model across various consumption contexts.
Scholarly investigations by Szakos et al. [27] emphasize the dual dimensions of emotion and cognition in attitude formation, a concept bolstered by Habib et al.'s [28] findings. Furthermore, research by Lu et al. [29] illuminates the pivotal role of anticipated guilt, a significant emotional state, as a mediating factor between values and behavioral intentions. Complementing this, Haws et al.'s [30] seminal work on delineating green consumption values provides a robust theoretical underpinning for analyzing green consumption behavior. Building on this foundation, hypotheses H1a and H1b provide insights into how consumers' attitudes towards upcycled food and anticipated guilt stem from their green consumption values.
In examining how attitudes towards upcycled foods and anticipated guilt shape behavioral intentions, this study referenced the findings of Deci [31], Lu et al. [29], and Zeynalova and Namazova [32], underscoring the pivotal role of emotions and intrinsic motivation in behavioral intentions. Prior research has established the association between green values, motivations for buying green products, attitudes towards such products, and the resulting willingness to purchase green products [33]. Studies indicate that anticipated guilt positively impacts the intention to engage in low-carbon consumption behaviors [34]. Thus, Hypotheses H2a and H2b aim to elucidate the positive relationships between these variables.
The pivotal role of consumer knowledge in shaping behavior, as underscored by Philippe and Ngobo [35], emphasizes the significance of product knowledge in consumer decision-making processes. Building on this foundational understanding and incorporating insights from subsequent studies by Peng et al. [36] and Ayub and Kusumadewi [37], this study posits Hypothesis H3, which elucidates the positive impact of product knowledge on behavioral intentions towards upcycled food products.
In examining green perceived quality, this study draws upon Zeithaml's [38] definition of perceived quality and its implications for consumer behavior [39], inspired by the findings of Riva et al. [40]. This exploration underscores the affirmative association between perceived green quality and behavioral intentions, with research indicating a direct link between perceived quality and the behavioral intentions of the millennial generation [41]. Thus, H4 sought to elucidate this relationship.
Finally, insights from Ogiemwonyi [42], Solomon and Panda [43], and Grasso and Asioli [44] highlight the critical role of price sensitivity in consumer decision-making processes. Research posits that individuals with lower price sensitivity exhibit a greater propensity to purchase green products as their environmental consciousness increases, in contrast to their more price-sensitive counterparts [45]. Leveraging these findings, this study advances H5 by examining the negative influence of price sensitivity on behavioral intentions.
This study examines Taiwanese consumers' intentions to purchase upcycled food products using the VAB model as a conceptual framework, integrating "product knowledge", "green perceived quality", and "price sensitivity" as focal variables. We propose the following hypotheses:
H1a. Consumers' green consumption values positively influence their attitudes towards upcycled food.
H1b. Consumers' green consumption values positively influence anticipated guilt.
H2a. Consumers' attitudes towards upcycled foods positively influence their behavioral intentions.
H2b. Consumers' anticipated guilt positively influences their behavioral intentions.
H3. Product knowledge about upcycled food positively influences behavioral intentions.
H4. The green perceived quality of upcycled food positively impacts behavioral intentions.
H5. The price sensitivity of consumers negatively affects their behavioral intentions.
Through a comprehensive analysis of the impact of these factors on consumer purchasing intentions, strategic recommendations are proposed to foster the market expansion of upcycled foods, thereby aiding in climate change mitigation and enhancing food security.
Research Framework
By synthesizing the literature discussed above, this study focused on the VAB model by integrating three research variables: "Product Knowledge", "Green Perceived Quality", and "Price Sensitivity", as depicted in Figure 1.
Questionnaire Development
The questionnaire comprised eight parts. The first segment focuses on product knowledge, derived from Sun and Wang [46], and comprises three questions. The next section addresses green consumption values, adapting the study by Do Paco et al. [47] into six questions. The third part explores attitudes towards upcycled food, drawing on modifications of the research by Kamalanon et al. [48], with four questions. The subsequent section covers anticipated guilt, based on modifications of the study by Attiq et al. [49], with four questions. The following sections cover green perceived quality, price sensitivity, and behavioral intentions, adapting the studies by Riva et al. [40], Ogiemwonyi [42], and Rausch and Kopplin [50]. The eighth part gathered demographic data; all construct items used a 7-point Likert scale.
To ensure the clarity and accuracy of the questions and prevent misinterpretation, an expert validity review was conducted. Nine experts, including educational scholars and food industry professionals, each with more than a decade of experience, were invited to review and modify the questionnaire for precision and appropriateness. Their input was consolidated to refine the questionnaire. A pilot test with 64 questionnaires validated the reliability of the items through item and reliability analysis.
Following this rigorous process, the final questionnaire design was established, and responses were scrutinized to eliminate incomplete or inconsistent data. Cronbach's alpha values for each construct ranged from 0.843 to 0.925, indicating strong reliability.
Sample and Data Collection
In light of technological advancements and the prevalence of the Internet, researchers have shifted towards online questionnaire dissemination for data collection. While online surveys may exhibit lower response rates, strategies such as advance notifications and concise surveys can improve participation. Online questionnaires also offer several advantages in terms of data integrity and resource efficiency. This study employed convenience sampling, distributed questionnaires through various online platforms, and emphasized the participants' privacy and anonymity. Statistical analyses were performed using structural equation modeling, with a sample size of 320 effective questionnaires collected from 335 distributed questionnaires, meeting the criteria for robust analysis.
Sampling and Data Acquisition
As digital advancements and the ubiquity of the Internet reshape our communication methods, an increasing number of social science researchers are transitioning from traditional paper-based surveys to the virtual dissemination of questionnaires via online platforms and social media networks. This shift not only facilitates research data collection but also aligns with contemporary communication trends. Sammut et al. [51] highlight that despite the generally lower response rates associated with online questionnaires, proactive measures, such as preemptive email notifications or the development of concise, 10-minute surveys, can significantly enhance participation rates. Furthermore, digital questionnaires offer several advantages over their paper counterparts, including improved data integrity, resource efficiency, and the ability to garner more thorough responses.
In this digital arena, researchers have leveraged various dissemination channels, including social networks such as Facebook, Instagram, Line, and personal communities, to circulate their questionnaires. Adhering to ethical research standards, this study transparently communicated its objectives to the participants and guaranteed anonymity through the survey's online portal, thus ensuring a comfortable environment for respondents free from privacy concerns.
Methods of Data Analysis
This study employed quantitative research methods and utilized IBM SPSS Statistics 27 and AMOS 28 for the data analysis. Statistical techniques included descriptive statistics, reliability and validity analyses, and SEM using maximum likelihood estimation to explore causal relationships and model fit. These methods were used to validate the research hypotheses outlined in this study.
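For readers who wish to reproduce this style of analysis outside SPSS/AMOS, the structural part of the model can be written in lavaan-like syntax. The sketch below is a minimal illustration, assuming the open-source Python package semopy; the file name and the construct column names (GCV, ATT, AG, PK, GPQ, PS, BI) are hypothetical stand-ins, not artifacts of this study, and only the structural paths are estimated rather than the full measurement model.

```python
import pandas as pd
from semopy import Model, calc_stats  # assumed third-party SEM package

# Hypothetical input: one column of composite scores per construct.
data = pd.read_csv("survey_scores.csv")  # columns: GCV, ATT, AG, PK, GPQ, PS, BI

# Structural paths mirroring H1a/H1b (values -> attitude/guilt) and
# H2a, H2b, H3, H4, H5 (predictors of behavioral intention).
desc = """
ATT ~ GCV
AG ~ GCV
BI ~ ATT + AG + PK + GPQ + PS
"""

model = Model(desc)
model.fit(data)            # maximum likelihood estimation by default
print(model.inspect())     # path coefficients, standard errors, p-values
print(calc_stats(model))   # fit indices such as chi-square, RMSEA, CFI, AGFI
```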
Demographic Analysis
Given the specific needs of this study and the prerequisites for hypothesis testing, we employed SEM to analyze the collected data. Wu [52] posited that the optimal sample size for SEM is contingent upon the ratio to the number of items, recommending a range of 10:1 to 15:1. Consequently, with the 31 items presented in this investigation, the ideal sample size was projected to be between 310 and 465 respondents. Between February and June 2024, 335 responses were gathered using the official questionnaire. After excluding 15 invalid submissions, 320 valid questionnaires were obtained. Table 1 presents the demographic characteristics of the sample population.
Measurement Model: Reliability and Validity
This study employed a two-stage analysis method, with the first stage being confirmatory factor analysis (CFA) and the second stage being the analysis of the overall model fit. CFA is part of the SEM analysis used to assess the relationship between observed variables and latent factors, namely, whether the latent variables can truly be represented by the observed variables. Generally, CFA can be used to evaluate psychological measurements, construct validity, test method effectiveness, and examine model group invariance. As this research incorporated questionnaires developed by other researchers, it was necessary to use CFA to verify whether the measurement tool was appropriate for the study population.
This study consists of seven dimensions: "Product Knowledge", "Green Consumption Values", "Attitudes towards Upcycled Food", "Anticipated Guilt", "Green Perceived Quality", "Price Sensitivity", and "Behavioral Intentions". Confirmatory factor analysis was conducted individually for each dimension. First, items with a factor loading of less than 0.4 were eliminated, and the analysis was repeated to assess the root mean square error of approximation (RMSEA) of the sub-dimensions. An RMSEA greater than 0.08 indicates that a sub-dimension does not meet the fit criteria; in such cases, items were deleted according to the modification index (MI) values and the model was repeatedly modified until the RMSEA of the dimension fell below 0.08 or the sub-dimension became a saturated model.
After confirming the dimensions of the scale, the composite reliability (CR) and convergent validity of each dimension were tested. The CR value represents the combined reliability of all measurement variables and is a ratio ranging from zero to one. The higher the CR value, the higher the proportion of true variance out of the total variance, which indicates higher internal consistency. Fornell and Larcker [53] suggested that the CR value of latent variables should be greater than 0.60. The convergent validity of latent variables is best represented by the average variance extracted (AVE). Both Fornell and Larcker [53] and Bagozzi and Yi [54] recommend that the AVE of latent variables should preferably exceed 0.50. In this study, the CR values of the scale dimensions ranged from 0.906 to 0.947, indicating that the scale had good internal consistency. The AVE values ranged from 0.668 to 0.816, exceeding the recommended value of 0.50, indicating that the scale had good convergent validity. The standardized regression weights of all items ranged from 0.645 to 0.922, and the t-values were greater than 1.96; therefore, all were significant. The factor loadings, dimension CR values, and AVE values are presented in Table 2. These results show that the dimensions of the questionnaire met the requirements of convergent validity; hence, the measurement model had good internal quality.
Discriminant validity analysis in SEM involves measuring two different concepts and conducting a relevance analysis of the results; a very low degree of correlation indicates discriminant validity between the two concepts. According to Hair et al. [55], the correlation coefficient between two different concepts should be less than the square root of the AVE for each concept. Table 3 presents a comparison of all construct correlation coefficients and the square roots of the AVE in this study. The square roots of the AVE for each construct were greater than the correlation coefficients between the constructs, meeting the standard recommended by Hair et al. [55], which shows discriminant validity among the constructs. Based on these evaluation results, the measurement model used in this study had good internal and external quality.
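To make the reliability and validity computations concrete, the following minimal Python sketch derives CR and AVE from standardized factor loadings and applies the Fornell-Larcker criterion; the loadings and the correlation value shown are hypothetical illustrations, not the study's data.

```python
import numpy as np

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    where each error variance is 1 - loading^2 for standardized loadings."""
    loadings = np.asarray(loadings)
    errors = 1.0 - loadings ** 2
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + errors.sum())

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    loadings = np.asarray(loadings)
    return np.mean(loadings ** 2)

# Hypothetical standardized loadings for one construct.
pk_loadings = [0.82, 0.88, 0.79]
print(f"CR  = {composite_reliability(pk_loadings):.3f}")       # should exceed 0.60
print(f"AVE = {average_variance_extracted(pk_loadings):.3f}")  # should exceed 0.50

# Fornell-Larcker check: sqrt(AVE) of each construct must exceed its
# correlation with every other construct (hypothetical values below).
ave = {"PK": 0.70, "GCV": 0.68}
corr_pk_gcv = 0.41
ok = all(np.sqrt(v) > corr_pk_gcv for v in ave.values())
print("Discriminant validity holds:", ok)
```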
Model Fit Test
Table 4 presents the results of the fit index analysis. This study employed the maximum likelihood (ML) estimation method to construct a structural model to test the hypothesized relationships of the proposed model. The relevant indices are as shown in Table 4: the chi-square to degrees of freedom ratio (χ²/df) = 3.145, root mean square residual (RMR) = 0.043, root mean square error of approximation (RMSEA) = 0.079, adjusted goodness of fit index (AGFI) = 0.814, normed fit index (NFI) = 0.904, comparative fit index (CFI) = 0.911, and incremental fit index (IFI) = 0.912, all of which met the standards, indicating that the overall model of this study demonstrated a good fit.
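As a small illustration, the reported indices can be checked against commonly cited cutoff conventions (the exact cutoffs vary across sources and are an assumption here, e.g. χ²/df < 5, RMR < 0.05, RMSEA < 0.08, AGFI > 0.80, and NFI/CFI/IFI > 0.90):

```python
# Reported fit indices from Table 4.
fit = {"chi2/df": 3.145, "RMR": 0.043, "RMSEA": 0.079,
       "AGFI": 0.814, "NFI": 0.904, "CFI": 0.911, "IFI": 0.912}

# Conventional cutoffs (assumed here; sources differ on exact values).
cutoffs = {"chi2/df": ("<", 5.0), "RMR": ("<", 0.05), "RMSEA": ("<", 0.08),
           "AGFI": (">", 0.80), "NFI": (">", 0.90),
           "CFI": (">", 0.90), "IFI": (">", 0.90)}

for index, value in fit.items():
    op, bound = cutoffs[index]
    passed = value < bound if op == "<" else value > bound
    print(f"{index}: {value} {'meets' if passed else 'fails'} {op} {bound}")
```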
Overall Model Path Analysis
This study employed SEM to examine the relationships between the variables and conducted a detailed analysis within the proposed theoretical framework. The structural model analysis diagram is shown in Figure 2.
H1a and H1b indicate that green consumption values have a significant positive impact on attitudes towards upcycled food (β = 0.647, p < 0.001) and anticipated guilt (β = 0.691, p < 0.001), confirming the role of individual values in shaping attitudes, as posited by VAB theory. H2a, H3 and H4 show that attitudes towards upcycled food (β = 0.290, p < 0.001), product knowledge (β = 0.146, p < 0.001), and green perceived quality (β = 0.193, p < 0.001) significantly influence behavioral intentions, highlighting the multifaceted motivational basis for consuming upcycled food products.
However, H2b shows that anticipated guilt has a positive but not significant effect on behavioral intentions (β = 0.029, p = 0.629); thus, this hypothesis is not supported. This suggests that more research is needed to understand the role and impact of this emotional factor in decision making.
H5 shows that price sensitivity significantly and positively affects consumers' behavioral intentions (β = 0.764, p < 0.001), contrary to the original hypothesis of a negative effect; thus, this hypothesis is also not supported. This indicates that consumers may weigh price differently than expected, with price sensitivity promoting rather than deterring purchase intentions.
In summary, H1a, H1b, H2a, H3, and H4 are supported and significant, while H2b and H5 are not. Table 5 presents the path analysis and hypothesis testing results of this study.
Discussion
This study explored the factors influencing Taiwanese consumers' decisions to purchase upcycled food through VAB theory. The findings provide substantive insights into consumer behavior dynamics through structural equation modeling analysis. First, it was discerned that consumers' green consumption values notably and positively influenced their attitudes towards upcycled food, alongside an augmented anticipation of guilt. This observation suggests that values underpinned by environmental and social responsibilities not only foster a positive disposition towards sustainable food options but also amplify the guilt anticipated when contemplating non-eco-friendly choices. These insights are consistent with the findings of Lu et al. [29] and Roh et al. [56], who similarly underscore the pivotal role of values in steering consumer behavior towards environmentally sustainable choices.
Moreover, this study corroborates the notion that a favorable attitude towards innovative and sustainable food solutions markedly influences behavioral intentions towards such products, echoing the findings of Chatterjee et al. [33] and Jung et al. [57] in the realms of green products and sustainable fashion. This indicates that positive perceptions can significantly incentivize consumers to prefer upcycled food products. In contrast, anticipated guilt did not exhibit a statistically significant impact on behavioral intentions, diverging from the inference drawn by Jiang et al. [34] regarding its positive correlation with the intention to adopt low-carbon consumption behaviors. This discrepancy could stem from heterogeneity in individual sensitivity towards anticipated guilt and the perceived ramifications of certain actions, suggesting that not all individuals respond uniformly to emotional cues. Previous investigations by Yang et al. [58], Chen [59], and Lu et al. [29] collectively indicate that emotional responses play a crucial role in behavior prediction. Hence, this divergence warrants a more nuanced exploration of how different psychological, cognitive, and situational factors mediate the relationship between emotional antecedents and consumer behavioral intentions.
Additionally, the significance of product knowledge for behavioral intentions was affirmed, highlighting that a deeper understanding of a product fosters positive purchase intentions and actions. This finding corroborates Ayub and Kusumadewi [37] and Liu et al. [60] and highlights the critical role of information and education in molding consumer decisions [61]. Similarly, the green perceived quality of products was found to significantly and positively impact consumer behavioral intentions, resonating with Riva et al. [40] and Vuong and Nguyen [41] and further illustrating consumers' inclination towards products that not only satisfy their environmental values but also embody their social and environmental responsibilities [62,63].
Notably, our findings on price sensitivity diverge from the extant literature, indicating a positive correlation with behavioral intentions, contrary to the anticipated negative relationship. Several studies [64,65] have suggested that while positive intentions may prevail, elevated price sensitivity can deter actual purchase behavior. However, Ogiemwonyi [42] unveiled a distinctive dichotomy within green consumer behavior, in which a subset of consumers was willing to pay for sustainable products and services. This delineates a nuanced consumer segment for whom quality supersedes price sensitivity concerning environmental goods, thus diluting the assumed negative impact of price on sustainable purchase behaviors.
Our research found that green consumption values play a significant role in Taiwanese consumers' attitudes and behavioral intentions towards upcycled foods, consistent with findings from other countries. For example, Turkish consumers are more interested in purchasing such products when they perceive them as helping to solve food waste issues [66]. However, differences in cultural background and values lead to differences between countries. For instance, Dutch and Swedish consumers place more emphasis on moral self-reward and environmental benefits [67,68], whereas in the United States, esthetic and emotional values are given more importance [69]. Additionally, studies in these countries have highlighted the influence of consumer perceptions of product quality, nutritional value, and environmental benefits on purchase intentions. These comparisons help broaden our understanding of global consumer motivations for upcycled foods and underscore the important role of cultural differences in consumer behavior.
In summary, this investigation meticulously explores the interplay between green consumption values, attitudes towards upcycled food, and behavioral intentions through the lens of VAB theory. It uncovers the multifaceted nature of emotional underpinnings, emphasizes the cruciality of informed decision making, and delineates the perceived quality of green products as a seminal influence on consumer purchase inclinations. Together, these findings not only enrich our understanding of consumer behavior in the sustainable food sector but also signal pivotal considerations for marketers and policymakers aiming to foster a more ecological consumption landscape.
Research Conclusions
This study investigated the influence of value-attitude-behavior (VAB) theory, incorporating green perceived quality, product knowledge, and price sensitivity, on Taiwanese consumers' behavioral intentions towards upcycled food products. The findings highlight the significant role of green consumption values in forming positive attitudes towards these products. Despite the expected impact, anticipated guilt was not significantly correlated with behavioral intention, indicating complex emotional influences on behavior. Green perceived quality and product knowledge emerged as significant predictors of behavioral intention, whereas price sensitivity surprisingly showed a positive influence, suggesting that consumers value intrinsic quality and ideological appeal over price.
Managerial Implications
This study provides strategic insights into enhancing consumer engagement with upcycled food products.
1. Marketing aligned with green values: emphasize the environmental benefits of upcycled foods in marketing to foster consumer acceptance.
2. Enhancing product knowledge: educate consumers through workshops and demonstrations to increase purchase intentions.
3. Boosting green quality perception: obtain environmental certifications and promote eco-friendliness to build brand trust.
4. Governmental support: implement subsidies, regulations, and promotional initiatives to create a supportive market for upcycled foods.
Research Limitations and Future Research Directions
While this study offers valuable insights, its limitations include a lack of cultural diversity and of a detailed demographic analysis. Future research should explore cross-cultural studies and demographic variables to enhance the generalizability of the findings. Additionally, alternative theoretical frameworks and variables, such as information asymmetry and neophobia, should be examined. Understanding the moderating role of emotional responses could further elucidate consumer behavior towards upcycled food.
This study only discussed Taiwanese individuals; however, the consumption of different foods is highly correlated with culture. Therefore, future research should delve deeper into cultural differences (i.e., differences between linguistic regions or countries) to examine the universality and applicability of the study results.
Convenience sampling was employed in our study. However, according to Andrade [70] and Emerson [71], convenience sampling may lack generalizability, potentially restricting the broad applicability of our findings. Future research could utilize different sampling methods (e.g., stratified sampling) to ensure the representativeness of each subgroup in the sample based on specific attributes, thereby enhancing the generalizability of the overall results.
In conclusion, this study advances the discourse on consumer behavior within the VAB framework, providing a foundation for future research and managerial practices aimed at promoting sustainable consumption patterns.
Figure 2.
Structural equation modeling diagram. Note: ** p < 0.01; *** p < 0.001.
Table 2.
Results related to factor loading, reliability, and validity.
Note 1: The values in bold font are the square roots of the AVE; non-diagonal numbers represent the correlation coefficients of each dimension. Note 2: PK = product knowledge; GCV = green consumption values; ATU & RF = attitude towards upgrading and remanufacturing food; AG = anticipated guilt; GPQ = green perceived quality; PS = price sensitivity; BI = behavioral intentions. Note 3: ** p < 0.01.
Table 4.
Analysis of fit indices.
Table 5.
Results of the path analysis and confirmation of hypotheses.
THE ROLE OF SERUM LEVEL OF TUMOR MARKER CA 125 IN DISTINGUISHING BENIGN FROM MALIGNANT OVARIAN TUMORS IN POSTMENOPAUSAL WOMEN AND CORRELATION WITH SONOGRAPHIC FINDINGS
Malignant ovarian tumors occur at all ages, including early childhood, but also advanced old age, with the total incidence dramatically increasing with age. Tumor markers for the early detection of ovarian carcinoma are used in ovarian cancer examination. The aim of the study was to examine the degree of correlation between sonographic findings and the levels of the serum tumor marker Ca 125, and to correlate preoperative sonographic findings and the serum CA 125 level with the intraoperative findings and histopathological results. The study was based on a prospective-retrospective study model involving 60 postmenopausal women diagnosed with the presence of an ovarian tumor. The following tests and examinations were performed for all patients: anamnestic analysis of the medical record, that is, the history of the disease with data on age, parity, duration of menopause, use of oral contraceptives and symptomatology; sonography of the small pelvis; and the laboratory parameter Ca 125, with a reference range up to 35 U/ml. Laparotomy was used as the operative procedure in all patients. All material obtained operatively underwent histopathological treatment. The group of patients with malignant tumors showed considerably higher average CA 125 values, with high statistical significance. Among subjects with benign tumors, the dominant tumor structure was cystic, as opposed to the mixed-type structure of malignant tumors. In this sense, the parameter of tumor structure is a significant factor in distinguishing between benign and malignant ovarian tumors. Tumor location is, with high statistical significance, more often bilateral in subjects with histopathologically proven malignant tumors, while it is predominantly unilateral in benign tumors. Acta Medica Medianae 2018;57(2):53-59.
Introduction
Malignant ovarian tumors occur at all ages, including early childhood, but also advanced old age, with the total incidence dramatically increasing with age. The risk of ovarian carcinoma formation increases after the age of 40, with the incidence peak between 50 and 55 years of age. Ovarian carcinoma is in fifth place, immediately behind breast, lung, rectosigmoid colon and lymphoma cancer, with the highest mortality rate among the gynecologic cancers (1,2).
The survival rate for ovarian cancer depends on the stage at which the disease is detected. In this respect, survival is 93% in stage I, 70% in stage II, 37% in stage III and 20% in stage IV. Three-quarters of newly discovered ovarian carcinomas are in stage III and IV, where the five-year survival rate is below 50%. This underscores how important the early detection and treatment of ovarian cancer are.
The development of ovarian cancers is influenced by many factors: first of all, hereditary and environmental factors, prior pregnancies, breast feeding, oral contraceptives, infertility, hormone replacement therapy, oncogenic viruses, etc.
It is thought that malignant tumors can develop in several ways. The most likely path of development is by means of a benign epithelial neoplasm developed from a serous inclusion cyst, itself formed by invagination of the serosa, followed by malignant transformation of the epithelium of the benign cyst. Alternatively, the epithelium in a serous inclusion cyst can turn malignant without a previous benign phase, or ovarian serous cells may develop into an epithelial malignancy de novo without the formation of a serous inclusion cyst. For a long time, it has been widely accepted that, in time, benign epithelial ovarian cysts may turn malignant (3).
Precisely because of this complex and insufficiently clarified pathogenesis and unknown etiology, malignant ovarian tumors cannot be prevented. Only early diagnosis in asymptomatic women and a modern therapeutic approach have proven effective against this serious illness.
Tumor markers for early detection of ovarian carcinoma are used in ovarian tumor examination.
In the past 20 years, various methods for ovarian cancer detection (Papanicolaou test, peritoneal cytology examination, etc.) have been applied, but have not proven to be good enough. The latest methods of immunoscintigraphy, nuclear magnetic resonance and computed tomography may detect small cancers, but their invasiveness and price prevent them from becoming the main screening tests. The methods examined today as screening methods are bimanual gynecological examination, ultrasound and tumor markers.
Bimanual examination has its advantages: it is relatively easy, it can fit into the already existing cervical screening program, and it does not require special equipment, so the costs are low. However, neither the sensitivity nor the specificity of this test has been established so far.
The majority of studies looking into the value of ultrasound in the diagnosis of ovarian tumors included women with symptoms who were about to undergo laparotomy due to suspected ovarian masses. They confirmed the concurrence between ultrasonography and operative findings regarding the size, position and characteristics of the ovarian tumor. Many researchers have tried to use ultrasound to characterize benign and malignant tumors. However, a criterion with 100% specificity for malignant ovarian tumors has not been described so far. With all this in mind, Sassone and associates published a study in 1991 presenting a scoring system for an objective description of pelvic disease based on the transvaginal ultrasonographic characterization of the alteration (4). The proposed scoring system, used both for ovaries and extrauterine pelvic masses of unknown origin, is based on the following four criteria: (1) the structure of the interior tumor wall (1-4 points), (2) the thickness of the tumor wall (1-3 points), (3) the presence and thickness of septa (1-3 points), and (4) echogenicity (1-5 points). In order to obtain the scoring threshold that best separates malignant ovaries from the rest, the sensitivity and specificity for each score from 5 to 13 were calculated and plotted as a curve, the shape of which showed that a total score of 9 points best distinguishes between benign and malignant lesions.
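As an illustration of the scoring logic described above, the following sketch sums the four criteria and applies the 9-point threshold; the function and the example values are ours, intended only to restate the published rule.

```python
def sassone_score(inner_wall, wall_thickness, septa, echogenicity):
    """Sum the four transvaginal-ultrasound criteria of the Sassone system.

    inner_wall:      1-4 points (structure of the interior tumor wall)
    wall_thickness:  1-3 points
    septa:           1-3 points (presence and thickness of septa)
    echogenicity:    1-5 points
    """
    ranges = {"inner_wall": (inner_wall, 1, 4),
              "wall_thickness": (wall_thickness, 1, 3),
              "septa": (septa, 1, 3),
              "echogenicity": (echogenicity, 1, 5)}
    for name, (value, lo, hi) in ranges.items():
        if not lo <= value <= hi:
            raise ValueError(f"{name} must be between {lo} and {hi}")
    return inner_wall + wall_thickness + septa + echogenicity

# Hypothetical example: an irregular, septated, echogenic mass.
score = sassone_score(inner_wall=3, wall_thickness=2, septa=3, echogenicity=4)
print(score, "-> suspicious for malignancy" if score >= 9 else "-> likely benign")
```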
The markers primarily used are serum levels of alpha-fetoprotein (AFP), CA 125, lactate dehydrogenase (LDH), CEA and inhibin B. If there is suspicion that the tumor is hormonally active and produces certain hormones, hormone analyses are performed and serve as tumor markers; the most common are β-HCG, estradiol and testosterone. In addition, tumor markers are used to monitor the effect of therapy, as well as for the early detection of the recurrence of certain ovarian tumors (5).
CA 125 is an antigenic determinant of a high-molecular-weight glycoprotein recognized by the monoclonal antibody OC125. CA 125 is a highly sensitive, but not specific, marker for tumors of ovarian epithelial cells. It may show elevated values in many intraperitoneal processes such as endometriosis, pregnancy, inflammatory disease of the small pelvis, Crohn's disease or other malignant abdominal tumors.
The Ca 125 antigen may be detected in serum using radioimmunoassay, and serum levels are higher than 35 U/ml in over 80% of women with ovarian cancer (5). Bast and associates (6) also showed that only 1% of healthy women had serum Ca 125 levels higher than 35 U/ml. Elevated levels may, however, be related to benign gynecological pathology. However, the incidence of these benign conditions in postmenopausal women, the group that is most at risk of developing ovarian cancer, is low. A more detailed analysis, moreover, shows dependence on the stage of the disease. Disease spread outside the ovary is associated with elevated serum levels of CA 125 in over 90% of cases. Where the tumor is restricted to ovarian tissue, serum CA 125 levels are increased in only 50% of cases (6).
Given that a high degree of specificity is required for a prospective screening program for ovarian cancer, and given the link between CA 125 and non-malignant pathology, the positive predictive value of elevated serum CA 125 for this disease is considered too small to use CA 125 alone as a screening test. The specificity could be sufficient if CA 125 were combined with ultrasonography. Such a screening program has been used in large centers worldwide, including the Royal London Hospital since 1985 (6).
Aims
The aim of this study was to examine the degree of correlation between the ultrasound findings and the level of the serum tumor marker Ca 125, and the correlation of preoperative ultrasonography and the serum CA 125 level with the intraoperative findings and histopathological results.
Material and methods
The study was based on a prospective-retrospective study model involving 60 postmenopausal women diagnosed with the presence of an ovarian tumor.
The research was carried out in the following institutions: Clinic for Gynecology and Obstetrics at the Clinical Center Niš, Women's Health Service of the Health Center Niš, Clinic for Gynecology and Obstetrics of the Clinical Center Kragujevac, and Clinic for Gynecology and Obstetrics "Narodni front" from Belgrade.
All patients underwent the following tests and examinations:
• anamnestic health card analysis and history of the disease, with data on age, parity, duration of menopause, use of oral contraceptives and symptomatology;
• ultrasonography of the small pelvis;
• laboratory parameters: Ca 125, with a reference range up to 35 U/ml.
Laparotomy was used as the operative procedure in all patients. All material obtained operatively underwent histopathological treatment.
Standard descriptive statistical methods were used in the statistical data processing (mean value, standard deviation, representation in percentages, degrees of freedom). The type of distribution was assessed using the Kolmogorov-Smirnov test. The significance of differences was assessed using the parametric t-test and the nonparametric χ² test, at a standard significance level.
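For illustration, the same testing workflow can be reproduced with open-source tools; the sketch below uses SciPy with synthetic, hypothetical measurements, since the study's raw data are not available.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical CA 125 values (U/ml) for the two groups, for illustration only.
benign = rng.normal(loc=40, scale=15, size=45)
malignant = rng.normal(loc=550, scale=200, size=15)

# Kolmogorov-Smirnov test against a fitted normal distribution.
ks = stats.kstest(benign, "norm", args=(benign.mean(), benign.std(ddof=1)))
print("KS test:", ks)

# Parametric t-test comparing group means (Welch's variant shown here).
print("t-test:", stats.ttest_ind(benign, malignant, equal_var=False))

# Chi-square test on a categorical finding, e.g. unilateral vs bilateral location.
location = np.array([[40, 5],    # benign: unilateral, bilateral (hypothetical)
                     [3, 12]])   # malignant: unilateral, bilateral (hypothetical)
chi2, p, dof, _ = stats.chi2_contingency(location)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}, df = {dof}")
```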
Results
The reference ranges of all laboratories where the levels of the serum Ca 125 tumor marker were tested are up to 35 U/ml. The average Ca 125 value in the group of benign ovarian tumors was 39.67 U/ml, while in the group of women with malignant tumors it was 556.6 U/ml, a difference in values of outstandingly high statistical significance.
In over one half of the subjects with benign tumors, Ca 125 levels were within the reference range of up to 35 U/ml. High marker values (over 100 U/ml) were determined in 4 women from this category, in whom the presence of endometrial tumors was found in histopathological preparations after laparotomy. Simultaneously, similar marker values were found in 9 of the 15 subjects in the malignant tumor category, as shown in Tables 1 and 2.
Tumor location was predominantly bilateral in malignant tumors and unilateral in benign ones, a difference of high statistical significance, as shown in Table 3. The size of benign tumors was 7 cm on average, while the malignant tumors in the examined group of patients measured over 9 cm, which is of high statistical significance, as shown in Tables 4 and 5.
The wall was significantly thicker in benign changes: in 39 subjects it measured 3 mm or more. In contrast, in malignant tumors, the wall thickness in 13 of the 15 subjects was 2 mm or less, which is of high statistical significance, as shown in Table 6. There were also significant differences in the appearance of the wall of the change: while in benign ovarian tumors the interior of the wall was smooth, in malignant tumors it was uneven, with numerous excrescences, as shown in Table 7.
Discussion
A large number of previously conducted studies (7-9) showed that the Ca 125 tumor marker is not specific enough for differentiating benign from malignant ovarian tumors. Van Calster and associates point out in their paper that serum Ca 125 values are more often false positive in premenopausal women compared to postmenopausal women, but that in both groups the ultrasonographic classification of changes in the ovary is a far more reliable criterion for distinguishing between benign and malignant tumors (7). The average Ca 125 values in the examined population were considerably higher in the group of patients with malignant tumors (556.6 U/ml versus 39.6 U/ml in benign tumors); hence, there is high statistical significance. The results of this study are complementary to the results of the study led by Dr. Edward E. Partridge, University of Alabama at Birmingham (10). That study showed that Ca 125 values of more than 35 U/ml could be considered suspicious (indicative of tumor), and that values greater than 65 U/ml, in combination with transvaginal ultrasound examination, were a reliable predictor of tumor malignancy in asymptomatic menopausal women.
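The thresholds reported by that study can be restated as a short decision rule; the sketch below is purely illustrative of the published cutoffs, the function name is ours, and it is not a clinical decision tool.

```python
def ca125_assessment(ca125_u_ml: float, tvus_suspicious: bool) -> str:
    """Classify a postmenopausal CA 125 result using the thresholds reported
    by Partridge et al.: > 35 U/ml is suspicious, and > 65 U/ml combined with
    a suspicious transvaginal ultrasound is a reliable predictor of malignancy.
    Illustrative only; not a clinical decision tool."""
    if ca125_u_ml > 65 and tvus_suspicious:
        return "high suspicion of malignancy"
    if ca125_u_ml > 35:
        return "suspicious; further work-up indicated"
    return "within reference range"

print(ca125_assessment(556.6, tvus_suspicious=True))   # high suspicion
print(ca125_assessment(39.7, tvus_suspicious=False))   # suspicious
```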
Differentiation between benign and malignant ovarian tumors is most important for both the patient and the physician. In most institutions, the operative procedure (laparoscopy or laparotomy) depends on the assessment of the malignancy of the change, but malignancy can be safely excluded only by histopathological confirmation (11,12). Therefore, many prognostic models for the differentiation of malignant from nonmalignant ovarian tumors, including Doppler criteria, have been published so far and show significant validity (12-15).
In ultrasound diagnostics and in the estimation of the malignant potential of ovarian tumors, many scoring systems have been created that have not found broad application in clinical practice (10,16). However, all data suggest that, in terms of location, malignant tumor processes are most often bilateral, in contrast to the dominant one-sidedness of benign conditions. It is that assumption that was proven in the presented study. In addition, the size of benign tumors is up to 70 mm, while the malignant tumors in the examined group of patients were over 90 mm.
Benign ovarian tumors are primarily cystic in structure, in contrast to the mixed-type structure of malignant tumors. The thickness of the ovarian tumor wall is considerably greater in benign tumors, which speaks in favor of their cystic and clearly delimited structure. There are also significant differences in the appearance of the wall of the tumor alteration: while the interior of the wall is smooth in benign tumors, it is uneven and has numerous excrescences in malignant tumors. Papillary projection is a significant ultrasound sign of tumor malignancy, and the degree of malignancy is proportional to the number of these papillary formations (16). Granberg et al. showed that the risk of malignancy is 3-6 times higher in unilateral cysts with papillary formations compared to unilateral cysts without these formations, which makes the conservative tracking of these cysts unacceptable (16).
In addition to the ultrasonographic morphological image of the tumor, other factors such as family history, the presence of free fluid in the pouch of Douglas and the presence of subjective symptoms should be taken into account when deciding on the optimal treatment. When it comes to benign alterations in the conducted study, the presence of free fluid in the pouch of Douglas was determined only sporadically, while in malignant ovarian conditions this is an almost regular clinical and ultrasound finding.
Conclusion
Average Ca 125 values were far higher in the group of patients with malignant tumors, with high statistical significance. Among subjects with benign tumors, the dominant tumor structure was cystic, as opposed to the mixed-type structure of the malignant ones. In that sense, the parameter of tumor structure is a significant factor in distinguishing between benign and malignant ovarian tumors.
Tumor location is, with high statistical significance, more often bilateral in subjects with histopathologically proven malignant tumors, while it is predominantly unilateral in benign tumors.
The size of benign tumors was around 70 mm on average, while the malignant tumors in the examined group of patients were over 90 mm. Based on this, tumor size is a reliable factor in distinguishing between benign and malignant ovarian tumors. The thickness of the wall of benign tumors is greater than that of malignant ones, and this is a parameter of high statistical significance.
The presence of free fluid in the pouch of Douglas is rare in benign ovarian tumors and, as a rule, where found, it is associated with rupture of the tumor (cyst) wall and most often amounts to less than 50 ml; in malignant ovarian tumors, the presence of free fluid is quite frequent, the quantity is many times higher, and it often fills the entire volume of the pouch of Douglas.
The obtained results clearly demonstrate that detailed ultrasonography of the small pelvis and adnexa and the level of the serum tumor marker Ca 125 are reliable parameters for the differentiation of benign from malignant ovarian tumors in postmenopausal women.
Table 1.
Distribution of values of the CA 125 tumor marker
Table 2.
A correlation between the levels of the tumor marker Ca 125 measured in benign and malignant tumors, with standard deviations in both categories
Table 3.
A correlation between the location of benign and malignant tumors, with standard deviations in both categories
Table 4.
The size of the tumor in the examined group of women
Table 5.
A correlation between the size of benign and malignant tumors, with standard deviations in both categories
Table 6.
A correlation between the wall thickness of benign and malignant tumors, with standard deviations in both categories
Table 7.
A correlation between the wall structure of benign and malignant tumors, with standard deviations in both categories
The GeoWeb and Everyday Life: An Analysis of Spatial Tactics and Volunteered Geographic Information
In this paper, we discuss GeoWeb technologies, specifically those created via volunteered geographic information (VGI), as a means of analyzing the political contours of mapmaking. Our paper is structured around two case studies of VGI projects that allow for consideration of the political efficacy (and potential drawbacks) of these geospatial technologies. We use de Certeau's constructs of strategies and tactics as a conceptual framing, which allows for a political reading of geographic data couched in the context of everyday life, as well as opening up inquiry into the politics of making, accessing and interpreting spatial data. We conclude by suggesting provocations for future research on the GeoWeb and VGI at the intersection of geography and information science.
Introduction
The term GeoWeb refers broadly to a set of geospatial technologies and geographic information available on the Web (Herring, 1994), such as Google Earth and MapQuest, where location-based tools, geospatial data and content can be generated and shared by anyone with an Internet connection (Roche, et al., 2011). The study of the GeoWeb and volunteered geographic information (VGI) matters for a range of academic inquiries: by opening platforms to an effectively unlimited user base, the GeoWeb invites questions of political participation, labor ethics, privacy concerns and archival logistics (Elwood, et al., 2013). Drawing together online technologies and geography, user-generated content provides insight into processes of place-making, of ascribing both social and spatial relationships to a given set of physical coordinates (Massey, 2005). Moreover, questions of citizen participation in the production of data have taken on an urgency and import in the wake of 2013's disclosures about mass government surveillance in the U.S. and abroad (Greenwald, 2013; Internet Monitor, 2013). In this sense, GeoWeb projects present a convergence of technological, cultural and political questions surrounding maps, online platforms and relationships to space.
We proceed by offering a brief introduction to the GeoWeb as a set of participatory mapping platforms, followed by an analysis of case studies that illustrate the political contours of VGI participation. First, we examine Hollaback!, which cartographically documents women's experiences of street harassment. Our reading of Hollaback! points to a longstanding tension in cultural geography, where characterizations of maps as objective and impartial representations of space have been fundamentally challenged by critical theory and post-colonial studies (see Crampton, 2001; Kitchin and Dodge, 2006; Monmonier, 2010). Hollaback!'s mission is predicated on subjective experiences of space, which, rather than undermining the legitimacy of maps, points to their potential as a communicative medium of socio-political change. Our second case study considers competing maps depicting protest actions of the political movement Occupy. In setting up a comparison between maps that are similar in form but radically opposed in ideology, we further develop questions of legitimacy and uncertainty in VGI practice (see Elwood, et al., 2013). Throughout our analysis, we return to de Certeau's (1984) construct of tactics (introduced in depth below) as a lens for unpacking the political stakes of creating, adapting and interpreting maps in everyday life. We conclude by specifying a research program from the information science perspective that accounts for both the political stakes of VGI and the long-term consequences of archiving and preserving the GeoWeb.
Context: GeoWeb technologies in brief
Before turning to some of the social and political issues surrounding VGI, we briefly introduce the technological components that separate GeoWeb technologies from more traditional mapmaking tools. We first discuss the differences in technical functionality between traditional mapping technology and the GeoWeb, and then link the GeoWeb and VGI to a set of broader technological shifts in user-generated content.
Whether operated by the public or private sector, most Web-based mapping applications share underlying technological infrastructures built on Asynchronous JavaScript and XML (AJAX) to provide seamless pan and zoom functionality for traversing base maps. VGI renders these mapping features publicly configurable (as opposed to technologies that were merely accessible), allowing ordinary users to alter previously safeguarded maps. For example, Google Maps originally generated its U.S. base map from the U.S. Census Bureau Topologically Integrated Geographic Encoding and Referencing (TIGER) files and satellite imagery from the U.S. Geological Survey. On one level, VGI can be seen as a destabilization of data that had previously been the domain of trained professionals. At the same time, because GeoWeb projects are typically overlaid on geospatial data produced by industry or the government, VGI maps retain a sense of credibility that might otherwise elicit significant skepticism.
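To make this overlay relationship concrete, the following minimal sketch, assuming the open-source folium package (a Python wrapper around the Leaflet mapping library), layers a single volunteered annotation over a professionally produced base map; the coordinates and popup text are hypothetical.

```python
import folium  # assumed third-party package wrapping Leaflet

# Institutional base layer: a professionally produced OpenStreetMap base map.
m = folium.Map(location=[40.7359, -73.9911], zoom_start=15,
               tiles="OpenStreetMap")

# Volunteered layer: a user-contributed annotation placed over that base map.
folium.Marker(
    location=[40.7359, -73.9911],          # hypothetical coordinates
    popup="Volunteered report: street harassment incident",
    icon=folium.Icon(color="red"),
).add_to(m)

m.save("vgi_overlay.html")  # pan-and-zoom map served via AJAX tile requests
```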
In terms of relationships between people, technology and data, VGI can be situated within a broader shift in the mechanisms of media production from the few to the many, what Jenkins (2006) has dubbed convergence culture. Convergence culture has shaken up existing socio-technical practices in the music (Baym, 2007; Sinnreich, 2010), publishing (Deuze, 2008) and television (Jenkins, 2006) industries; within information science, convergence culture can be linked to projects of folksonomies (Andreano, 2007; Schwartz, 2008) and tagging (Marshall, 2009; Naaman, et al., 2004), where traditional mechanisms of organizing and describing content are supplemented or even substituted with user-generated metadata. The increasing accessibility and popularity of VGI has had ramifications for conventional notions of mapmaking and geography, for example, the ways that configurable online maps invite questions of political subjectivity and legitimate versus illegitimate participation (see Parks, 2009). Indeed, VGI is increasingly being considered for use in more mainstream geospatial data application fields, such as emergency response and disaster management, where the ability to accumulate and assess nearly real-time geospatial data and local knowledge of place can be critical for mounting relief efforts (Graham and Zook, 2011; Zook and Graham, 2007). More broadly than crisis informatics, both academic researchers and professionals have explored how user-generated content on GeoWeb platforms could be utilized to help answer long-standing research questions requiring empirical evidence at a grand scale (Haklay, et al., 2008). Our interests in VGI are not empirical, but conceptual: how can examples of VGI help us understand processes of place-making? What are the politics of individuals (rather than institutions) making data?
To some extent, we share a conceptual interest (and structural approach) with earlier investigations of GeoWeb projects and their politics (Elwood and Leszcynski, 2011). We seek to contribute to this growing area of study through a conceptual analysis of the politics of place (and data) making via the GeoWeb: we describe two case studies of maps developed with VGI, framing their politics and processes through a de Certeauian (1984) lens of strategies and tactics, with consideration for long-term curation and preservation.
Context: Theoretical frameworks
We frame our analysis of GeoWeb technologies as tools of potential empowerment using de Certeau's (1984) description of spatial practices in urban environments. In his analysis of everyday Parisian life, de Certeau rejected panoptic, static or bird's-eye constructions of space, preferring instead to think in terms of "operations," or how people navigate cities in terms of everyday routines and habits [1]. De Certeau here advocated a transgressive, pluralistic reading of space that recognizes and then sets aside official mappings of a city in order to see the different ways that everyday people think of and utilize their neighborhoods. This tension between individual and institutional readings of space corresponds to de Certeau's division of strategies and tactics. De Certeau defined strategies as "the calculation (or manipulation) of power relationships - an effort to delimit one's own place in a world bewitched by the invisible powers of the other" [2], where tactics refer to "a calculated action determined by the absence of a proper locus" [3]. In other words, strategies are the ways institutions define, organize and name an individual's surroundings and available resources, even as individuals are able (with varying degrees of agency) to wrest control of a situation through tactics, which arise from impromptu, decentralized opportunism. These moments of temporary disruption, of deliberately ignoring authoritative instructions for the use of hegemonically controlled space and resources, are moments of individual tactics that subvert institutional strategies. Putting these concepts in the context of VGI, GeoWeb technologies allow for tactical annotations and reworkings of traditionally strategic cartographies, given the linkage between virtual and material spaces through augmented realities.
Research questions
In the context of theorizing space, de Certeau (1984) is most often invoked as a reorienting of space from above to below, from a bird's-eye view to street-level understandings of city space. In this paper, we focus specifically on street-level politics of mapping, analyzing GeoWeb tools in terms of their tactical efficacy, where maps should be thought of less as representations of space and more as relationships to space. As such, key questions guiding this work include: How do VGI projects embed tactical objectives of activist politics? And in turn, how can a tactical reading of VGI reveal processes of place-making? What are the implications of recognizing the tactical layers of VGI maps for critical GIS? For critical information theory, particularly in the context of long-term archiving?
To address these questions, we turn to a series of case studies. By considering how different GeoWeb projects have encountered and managed issues of interpersonal politics, ethics and privacy, our analysis offers a concretely grounded examination of existing technologies as a means of guiding both development and scholarship for future GeoWeb technologies.
Before continuing, we note a caveat regarding our own rhetorical argumentation; throughout our analysis, we have set up a distinction between individual and institutional cartographic projects in order to highlight differences between tactical and strategic representations of space. Yet this binary runs the risk of reductively obscuring gradations of organizational forms that generate spatial data, with volunteer-based efforts like Hollaback! on one end of the spectrum and state-driven efforts like the Census on the other, where citizen participation is more pulled than pushed. In between these two poles exist a range of GeoWeb arrangements that vary widely in their technical sophistication, organizational structure and affiliations with government and industry. We have focused on the political ideologies driving and embedded in VGI projects as indicative of their tactical efficacy, yet these factors do not exist in isolation from relationships (however indirect) to legal, corporate and state entities.
Case studies
As noted earlier, VGI represents one facet of a larger shift from institutional to individual production that is sometimes referred to as convergence culture (Jenkins, 2006). GeoWeb technologies in general and VGI in particular are frequently associated with democratic values of participation (Elwood and Leszcynski, 2011). The political paradigm undergirding this assumption could be summarized with the claim, "by allowing open access to cartographic tools, maps will be more inclusive and representative." Rather than pointing to this kind of subjectivity as inherently problematic, we argue that there are instances in which subjective, non-inclusive representations of space offer a powerful, tactical tool, yet the politics of participation need to be examined closely rather than assumed.
In defense of subjectivity: An analysis of Hollaback!
Although participatory media conveys a sense of democracy, in that anyone who has access to increasingly basic technologies can help create and improve GeoWeb efforts, scholars have noted the limits to democratic access that emerge through actual use (van Kranenburg, 2008). For example, Stephens (2013) has pointed out that OpenStreetMap in Germany reflects a user base that predominantly consists of men, resulting in landmarks dominated by conventionally masculine interests. Stephens noted that a variety of terms are approved for different kinds of bars, pubs and night clubs, but far less granularity is approved for locations related to, following Stephens' example, childcare. This is one instance in which unequal participation persists in spite of technically "open" platforms of participation, which in this case led to distorted depictions of space.
Stephens (2013) has advocated GeoWeb platforms that have full, or at least more, parity in terms of who participates in mapping projects, yet it is worth noting that there are instances in which one-sided maps of space are useful, as in the Hollaback! project [4], which enables women to report instances of street harassment. Users of the site document instances of unwanted attention, aggression or sexual assault (whether verbal or physical), combining a description of the occurrence with a geo-tagged location. The site also provides social support and resources for women who want to pursue legal action or policy changes. In terms of participation, Hollaback! is decidedly one-sided and undemocratic, factors that are necessary for meeting the site's objectives of documenting everyday incidents that are frequently ignored or written off as unimportant, and of generating a map that can be used to carve out safer routes through city space. As such, Hollaback! represents an instance in which inclusion is skewed towards a particular group [5] in order to counterbalance (or at least document) inequalities of power. The key difference between Stephens' discussion of OpenStreetMap and Hollaback! (as a feminist project) centers on recognizing when parity is desirable in creating representations of space and when it may, in fact, be harmful. The tactical efficacy of Hollaback! functions in dialogue with cartographic representations structured around patriarchal values and strategies. Stephens demonstrated that just because a platform has very low barriers to entry does not mean that participation will be democratic; Hollaback! demonstrates the advantages of VGI not in terms of an even distribution of power, but rather of deliberately creating a one-sided view of space in order to counteract existing prejudices or injustices.
Figure 1:
A screenshot of Hollaback!'s map of Los Angeles. Users can document instances of street harassment, noting the date, time and place of the incident, as well as providing details on their experiences. Hollaback! has gained an international base of users, and raises interesting questions of participation (in that it represents an instance of a one-sided depiction of space as a response to historical inequalities of power) as well as privacy.
Whose coordinates win? Competing maps of Occupy
With the Hollaback! example we outlined instances where participation yields important benefits in legibility and in documenting historically marginalized experiences of space. At the same time, it's important to acknowledge instances in which it might be preferable to be left off maps altogether. For example, we might consider different kinds of mapping practices related to protest movements like Occupy Wall Street. Begun in the late summer of 2011, the Occupy movement quickly spread around the globe as a multi-faceted, highly diffuse set of encampments that surfaced in over 324 sites in the U.S. alone (Caren and Gaby, 2011). From the protesters' perspective, mapping out where Occupy had spread offered a means of building solidarity and helping to coordinate collective action. At the same time, identical technologies (and perhaps even the very same maps) lent themselves to other kinds of monitoring - from counter-protestors, the media and local authorities. As an illustration of these tensions, the Web site OWSexposed [6] maintained a state-by-state map of U.S.-based Occupy protests, with links to news stories of crime and arrests allegedly linked to the movement. Looking only at its technical features and not its content, this map offers the same kinds of spatially organized data as Occupy's own map [7], which facilitated communication between activists. In de Certeauian terms, these maps demonstrate competing tactical cartographies - both are street-level documentations of actions and spaces, but the processes of meaning-making, of noting the placeness of these spaces, diverged critically in their political ideologies. For Occupy protestors, VGI maps offered important tools for organizing actions and facilitating participation; for OWS Exposed, the exact same technologies were useful in pointing to instances in which Occupy protestors broke the law or engaged in dangerous behaviors.
Given that the technologies and content are so similar in these competing maps, careful attention to political motivation is required to disambiguate them. The stakes of this kind of disambiguation resonate with cloaked Web sites, or sites "published by individuals or groups who conceal authorship in order to deliberately disguise a hidden political agenda" (Daniels, n.d.). For example, at first glance, the site www.teenbreaks.com appears to offer information on reproductive health, but on closer inspection, the site's creators have a decidedly anti-choice ideological stance. Similar to the comparison between Occupy and OWS Exposed, the ideological valences of online artifacts (cartographic or otherwise) may or may not be immediately legible, and may in fact be deliberately occluded. These layers of ideological visibility are apt to become more contested in the context of open participation, as in VGI artifacts.
Furthermore, Occupy protesters faced the difficulty of wanting to use GeoWeb tools to coordinate actions and document wrongful arrests, even as those same tools (of the GeoWeb as well as social media more broadly) are increasingly being used by the authorities to make arrests and infiltrate movements (see Bratich, 2011). That maps have politics is a well-established claim (Crampton, 2001); the political complexity of crowdsourced maps lies largely in the degree to which politics are articulated, coherent and transparent (Harley, 2001). Questions of legitimacy - in terms of who is allowed to access and add to a map - and legality (what are the consequences of participation) become both highly salient and deeply complex in maps created by conflicting communities, as arose around the politics of Occupy.
Discussion and conclusions
That politics are embedded in the production and use of maps has been persuasively and powerfully argued (Crampton, 2001; Monmonier, 2010). Through analysis of a series of case studies, we have identified some of the political contours stemming from participatory GeoWeb technologies, including assumptions about the capacity of crowdsourced maps to produce objective representations of space, as well as differences between legitimate versus illegitimate productions of spatial data. We conclude by developing these themes further, looking to make connections between GIS and IS as areas of scholarship that attend to information practices in everyday life.
The political potential of uncertainty
A common assumption about VGI maps is that collective participation produces objectivity; where individuals are biased, crowds are assumed to produce generalized associations by drowning out extreme perspectives with a stabilized majority. Our analysis has pointed to ways in which this presumed objectivity can not only be inaccurate (following Stephens, 2013) but moreover undesirable. For example, Hollaback! presents a deeply subjective and politicized set of spatial representations, destabilizing dominant, heteronormative understandings of city streets. These distortions have parallels in other instances of technological convergence, for example in the emergence of folksonomies, or user-generated classification schemes built through tagging on sites such as Flickr and Delicious (Chu, 2010).
Although folksonomies initially provoked excitement as a means of providing more democratic, egalitarian and (in de Certeau's (1984) terms) tactical means of classification [8], practical concerns about integrating folksonomies into existing systems quickly emerged (see Andreano, 2007; Schwartz, 2008). Part of the perceived benefit of user-generated metadata comes from the richness of individualized content revealing personal relationships to data. Yet these same idiosyncrasies present real challenges in the context of incorporating user-generated metadata into existing organizations and hierarchies. Moreover, although platforms that enable user-generated content are by definition open to public use, bias can nevertheless emerge through self-selection and larger, more systemic gaps in participation with socio-technical systems (Collier and Bear, 2012). In VGI as in folksonomies, it becomes apparent that opening up a platform for general participation does not, as a process, necessarily result in more parity in terms of content.
Rather than thinking of this unevenness as a critical failing, however, we submit that these discrepancies have useful parallels with the concept of uncertainty in critical GIS. In generating provocations for future inquiry in critical GIS, Elwood, et al. (2013) have questioned "the appropriate functionality for a platial GIS" and asked "how might uncertainty be characterized in a platial approach?" [9] Generating uncertainty is a core purpose of the examples we've considered here, in that it is arguably vital for the projects we discuss to generate multiple perspectives on a given space. In this sense, uncertainty is an objective rather than an obstacle to "accurate" depictions of space/place. To generate uncertainties of space is to admit political dialogue, to allow for the contestation of multiple meanings (and tactics).
Legitimizing place and ethics of volunteer labor
Related to our earlier discussion of uncertainty, several of our case studies raised issues of legitimacy, which hinged less on whether or not VGI was accurate (a more conventional concern in assessing the GeoWeb) and more on legitimate versus illegitimate relationships to data. For example, our analysis of competing Occupy maps demonstrated how structurally similar maps can differ radically in their political motivation. This observation points to a key tension of de Certeauian tactics and strategies, namely, what happens when tactics of the weak are subsumed into strategies of the strong? When Hollaback! users document street harassment, they contribute tactically both in the sense of providing non-dominant depictions of everyday street life and by enabling others to craft tactical routes through space. Yet these tactics have no inherent protection from reappropriation back into strategies of the dominant. For example, Hollaback! maps could easily be used by men to specifically target spaces where women might feel safe. Or, in the example of conflicting protest maps, Occupy struggled to disseminate information about upcoming actions without alerting the police or counter-protestors. In other words, as protestors produced tactical maps, they risked facilitating the development of police strategies. Whatever the circumstances of tactical production, there is no guarantee against these maps (or, for that matter, tags or metadata) strategically benefitting dominant institutions. The history of Taylorist interventions in the workplace can be read as tactics being enfolded into the strategic.
Both the possibility of subsuming tactical work back into strategic infrastructure and concerns of privacy raise important issues of labor and ethics. Internet studies scholars have noted that common online interactions (Terranova, 2000), from using Google's search engine to tagging photos on Flickr, can be read as a form of free labor performed by users for corporate gain (see also Andrejevic, 2007). GeoWeb projects are open to similar critique, in that VGI initiatives are made possible only through the contributions of unpaid users; although these users may benefit from the maps they create, so do the underlying corporate entities that stand to gain from access to the products of VGI efforts. Particularly for projects with agendas of social justice, it behooves activists to consider the labor ethics surrounding VGI not only in terms of what is technically possible but what is ethically responsible. We offer these comments not to discourage the production of VGI maps within activist initiatives, but only to advocate careful attention to the conflation of open participation with democratic representations of space, as well as to how projects can be converted from tactics to strategies.
Possibilities of VGI counter-conduct
Responses to leaked information about the National Security Agency's surveillance activities can be read as an indication that technologies for monitoring surveillance are of sustained public interest. From an activist perspective, geospatial technologies can facilitate organizing political action (Bennett and Segerberg, 2012) and also enable the many to surveil the many (Perkins and Dodge, 2009). With these issues of surveillance and control in mind, it becomes deeply important to consider how VGI participation can actually lead to an erosion of privacy for those lacking the rational-legal authority of the state (Rose-Redwood, 2006). Bossewitch and Sinnreich (2013) have argued that new models of conceptualizing surveillance are necessary, stating that "in the face of communication infrastructure's increasing scope and complexity, individuals will require simple and effective models of participation to avoid paralysis and to catalyze strategic agency" [10]. In terms of how agency can manifest on the GeoWeb, crowdsourced participation could be deployed to hide or deceive rather than earnestly report, what Bossewitch and Sinnreich called "disinformation campaigns" [11]. A few simple examples for a disruptive VGI project seeking to avoid or impede government surveillance, or to control access to its information, would be to create a series of placename cloaks to omit activists' whereabouts, or to create and operate within a new coordinate system.
Archiving activism: Metadata as tactics?
The long-term consequences of archiving the GeoWeb constitute an issue that explicitly brings together geographic production and archival practice, where IS has expertise in increasingly pressing questions of data storage, as well as policy questions. Yet although the stakes of organizing and providing access to GeoWeb data (even after removal from original Web-based mapping applications) would seem to draw together both GIS and IS, there is a notable gap in terms of how these two disciplines (and professions) tend to conceptualize metadata. Most GIS users understand the value of metadata standards in terms of describing the world as it is, collecting data on objective reality to solve immediate and temporally bound problems (Obermeyer and Pinto, 2008). For information professionals, metadata is valued as a disciplinary and professional obligation and area of expertise, but moreover as a semantic layer of subjective meaning (Berman, 1971; Drabinski, 2013; Schuurman and Leszczynski, 2006). Although wary of generalizing either field, we note that continued dialogue between these communities could have important consequences for the development of GeoWeb technologies, as well as for attending to the politics of future geographic information contributors and preservationists (Bishop, et al., 2013). GIS would do well to consider VGI projects not only as documentations of the present, but as having real and long-term archival value. At the same time, in both professional and scholarly contexts, IS should expand efforts to include GeoWeb projects as part of its archival domain, intervening in discussions of how VGI is stored, accessed and (perhaps) deleted.
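As a loose illustration of this disciplinary gap, the hypothetical record below contrasts, in Python, the objective positional fields a GIS workflow typically captures with the subjective descriptive layer an information professional might add; every field name is invented for the example.

```python
# A hypothetical archival record for one VGI contribution, contrasting the
# objective positional metadata a GIS workflow typically captures with the
# subjective, descriptive layer an information professional might add.
# Every field name below is invented for illustration.
record = {
    "gis_metadata": {            # "the world as it is"
        "lon": -118.2437,
        "lat": 34.0522,
        "datum": "WGS84",
        "captured": "2013-06-01T14:05:00Z",
    },
    "descriptive_metadata": {    # a semantic layer of subjective meaning
        "creator_role": "volunteer contributor",
        "context": "street harassment report",
        "access_policy": "restricted; contributor privacy takes precedence",
    },
}

# An archivist deciding what to preserve works across both layers at once.
for layer_name, fields in record.items():
    print(layer_name, "->", sorted(fields))
```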
Figure 2:
On the left, Occupy's map of local actions and collectives. On the right, a map from OWS Exposed that links news stories of arrests and harassment to various Occupy-related actions. Both maps relied on VGI data to link information to a set of geographic coordinates.
|
v3-fos-license
|
2019-01-30T01:11:48.325Z
|
2018-05-19T00:00:00.000
|
116853870
|
{
"extfieldsofstudy": [
"Engineering"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2076-3417/8/5/820/pdf?version=1526699807",
"pdf_hash": "96d99bd1da2b8cda58918a9cf1deb540402310f9",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41939",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"sha1": "96d99bd1da2b8cda58918a9cf1deb540402310f9",
"year": 2018
}
|
pes2o/s2orc
|
Heat Transfer Designed for Bionic Surfaces with Rib Turbulators Inspired by Alopias Branchial Arch in a Simplified Gas Turbine Transition Piece
The demand for highly efficient heat transfer has grown continuously alongside the need for energy reduction. For highly efficient power conversion, the gas turbine is an important device at present, but the design of highly efficient gas turbines is limited by the inlet temperature and the temperature resistance of the materials around the inlet. One inlet component that needs to be protected from burning out is the transition piece. A bionic thermal surface with rib turbulators is designed according to the turbulence-generating function of alopias' branchial arches and is evaluated for thermo-protection enhancement in a simplified gas turbine transition piece using computational fluid dynamics (CFD) simulation. With the given diameter (Φ = 10.26 mm) of the impinging hole, three different horizontal distances (S) from the impinging holes to the front of the first-row rib were studied in case 1: S1 = 20 mm, S2 = 40 mm, and S3 = 60 mm. The results revealed that S is not a significant influence factor on heat transfer efficiency; the cooling coefficient increases only from 0.194 to 0.198 as the distance varies from S1 to S3. In case 2, the rib turbulator width (W) and height (H) were studied in ranges from 0.5 × Φ to 1.5 × Φ. The numerical results indicated that the best size of the rib turbulators could improve the heat transfer efficiency by 32.5% compared with the smooth surface. These comparisons will benefit the structural design of heat transfer surfaces, which could be used for solving more severe problems in thermo-protection.
Introduction
The demand for highly efficient heat transfer has grown continuously alongside the need for energy reduction. For highly efficient power conversion, the gas turbine is an important device at present. In order to enhance power transformation efficiency, one commonly applied approach is to increase the inlet temperature of the gas turbine [1]. Research indicates that the power conversion ability of the turbine can be improved by 10% when the inlet temperature increases by 55 K. However, a higher gas temperature leads to larger thermal loads on the thermal surface of the transition piece, and even threatens the working lifespan and reliability of the gas turbine's hot components [2]. Therefore, cooling technology for such high-temperature components becomes a critical task.
The transition piece, an important component connecting the combustor and the turbine, has been studied by scholars for heat protection. A. Gallegos Muñoz [3] investigated, using CFD, how the geometric construction of the transition piece affected the contours of temperature and velocity in the outlet section. Structural optimization was then carried out with Genetic Algorithms (GA). The optimized results indicated that the average inlet gas temperature and the average velocity were decreased by 2.32% and about 7.73%, respectively.
Wang et al. [4,5] showed that a sheath with hundreds of small impingement cooling jets, installed over the external thermal surface of the transition piece, can enhance the convective heat transfer coefficient through strong forced convection of the coolant. Both experimental and computational studies were carried out. A 1/7th section of a circle was designed to simulate a part of the dump diffuser accommodating one and two half transition pieces. The experimental results showed that the non-sheathed case provided a 40% reduction in pressure losses compared with the sheathed case, but a 35% increase in the maximum surface temperature difference and an increase of 13-22% in other surface temperature differences, based on the temperature difference between the bulk inlet and outlet temperatures. The CFD results also indicated that adding the sheath was advisable.
Xu [6] considered how the coolant hole's inclination angle and the injection angle of the coolant flow affect the heat transfer efficiency of impingement cooling. CFD simulation models were employed, and the results indicated that heat transfer effectiveness improved with rising hole inclination and injection angle, since the thermal surface was impinged by more coolant directly. They [7] also investigated how water droplets in the jet flow promote impingement cooling efficiency, concluding that 3 × 10^-3 kg/s of droplets with diameters of 5-35 µm could enhance cooling effectiveness by 90% and reduce the surface temperature by 122 K. To further enhance impingement cooling efficiency, they [8] designed pin fins in the coolant chamber to increase turbulence, and optimized the pin fin diameter and spacing. After the pin fins were introduced, the numerical results showed that the temperature declined by 38.77 K compared with the case without pin fins. With mist injected into the cooling chamber, the area-weighted average temperature reached a lower value without excess pressure loss.
The transition pieces in the above literature have smooth thermal surfaces. However, non-smooth thermal surfaces may offer higher cooling efficiency or better convective heat transfer enhancement. How to design a surface with excellent thermal protection ability remains a challenge for researchers. After billions of years of evolution, some biological structures already have excellent properties, which can provide inspiration for thermal surface designers.
Nowadays, the excellent structures of creatures shaped by natural evolution have provided much inspiration for engineering. Analysis shows that the non-smooth structural characteristics of living organisms' surface morphology can change the flow. Cui et al. [9] considered four types of bionic surfaces for reducing pressure loss: riblet-shaped, ridge-shaped, V-shaped, and placoid-shaped. Using the Lattice Boltzmann Method (LBM), an ordering of the drag reduction coefficient (η) was obtained: η_ridge > η_V > η_p > η_rib. The results suggested that the ridge-shaped structure reduced drag significantly, and the riblet-shaped structure could strengthen turbulent flow.
Hu et al. [10] explored the heat transfer performance of the coolant stream coming out of a hollow shell designed using the bionic Barchan-dune shaped (BDS) concept. They claimed that the BDS design could make the coolant stream attach to the test surface more firmly, but with more friction loss.
Referring to the turbulence-generating characteristics of biological structures, the thermal protection of the transition piece may be addressed more efficiently. In this paper, inspired by the alopias' gill arch, a two-chamber rectangular model with rib turbulators is designed for enhancing heat transfer efficiency. Because the study of the biomimetic thermal surface rests on parameter comparisons, numerical simulations, which have turned out to be more accessible and less expensive than contrast experiments, are carried out. The CFD method is applied to investigate the flow behavior and the heat transfer of the coolant flowing in the cooling chamber with a ribbed surface.
Bionic Design
To adapt to the marine environment, alopias need to complete respiration by absorbing the scarce oxygen in deep water, and the excellent morphology of their gills helps oxygen exchange efficiently. The gill morphological structure of Alopias superciliosus is shown in Figure 1. The branchial arch of alopias changes the flow direction of the seawater from mouth to gill filament, and meanwhile more turbulence is generated, so the oxygen in the flow can easily contact the capillaries in the gill filaments [11]. Since the turbulence formed by the gill arches strengthens the absorption of oxygen, it is likely that more turbulence may similarly improve the convective heat transfer of the coolant. In order to test this idea, a thermal surface with ribbed turbulators is designed in a simplified gas turbine transition piece.
Since the structure of an operating gas turbine transition piece is complex, a double-chamber simplified model is designed in this paper. In order to exclude the effect of curvature on the heat transfer, the simplified model is rectangular, as shown in Figure 2a. The "X" direction is the coolant streamwise direction, and the gas flows countercurrent to it. The flowing length L = 1050 mm and the width W = 320 mm are used. In the figure, both sides of the lower channel in the X-axis direction are non-sealed; this is the mainstream chamber. Conversely, one side of the upper channel, the coolant chamber, is open, and the other side is closed. The heights of the coolant chamber and the gas chamber are 38 mm and 162 mm, respectively. Three holes are distributed in the top surface, and the diameter of all the holes (Φ) is about 10.26 mm. The length of each rib is 320 mm. Figure 2b shows the section of the coolant chamber with two ribs. The width and height of the ribs are set as W and H, respectively. The distance from the first-row rib to the closed wall is 420 mm, and the spacing of the ribs is 25 mm. In case 1, the streamwise distance from the holes to the front of the first-row rib is varied, and is set as 20 mm, 40 mm, and 60 mm, respectively. In case 2, nine different sizes of rectangular ribs are designed in order to choose the best one for improving cooling efficiency.
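The two cases thus form a small parametric sweep over the geometry described above. As a convenience for readers, the short Python sketch below simply enumerates those design points; the numerical values are taken from the text, while the variable and helper names are our own.

```python
from itertools import product

PHI = 10.26  # impingement hole diameter, mm (from the text)

# Case 1: streamwise distance S from the holes to the first-row rib, mm.
case1_distances = [20.0, 40.0, 60.0]

# Case 2: rib width W and height H each take three values, mm,
# giving the nine rectangular rib sizes studied.
rib_sizes = [5.0, 10.0, 15.0]
case2_designs = list(product(rib_sizes, rib_sizes))  # (W, H) pairs

assert len(case2_designs) == 9
for w, h in case2_designs:
    print(f"rib W = {w:4.1f} mm ({w / PHI:.2f} Phi), H = {h:4.1f} mm")
```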
Mathematics & Materials
In these cases, the hole angles and coolant injection orientations were confirmed in former research, so the coolant inlet orientation is set orthogonal to the wall. According to the working condition of an F-class gas turbine, the temperature and the mass flow rate of the mainstream flow are set as 1300 K and 32.72 kg/s, respectively. The turbulence intensity is set at 5%, which can be estimated from the mass flow rate, the area and hydraulic diameter of the gas inlet, and the gas viscosity. In order to reduce computation, the number of impingement holes in the coolant chamber was reduced to only three. The pressure at the jet hole inlet is used as an initial condition and is set at 1.821 MPa. The pressure recovery coefficient, the ratio of pressure between the inlet and outlet of a transition piece with a smooth surface, is given as 0.95. The turbulent intensity at the coolant inlet is 10%. Details of the boundary conditions are given in Table 1 [13]. In the mainstream chamber, the mainstream flow is assumed to be a mixture of N2, O2, H2O and CO2, as well as rare gases. In the other chamber, air is used as the cooling flow for all of the simulations. The material of the thermal surface is Nimonic 263 (Hucheng Industry (Shanghai) Co., Ltd., Shanghai, China). The computational domain is meshed with hexahedral cells in ANSYS ICEM, version 18.0. In Figure 3, a grid sensitivity test for the simplified two-chamber model with the biomimetic thermal surface is carried out. When the number of cells increases from 2,227,680 to 2,962,080, the area-weighted average temperature declines by 0.3%. The calculated temperature changes insignificantly on further increasing the number of cells. Thus, a mesh of about 2,200,000 cells is used as the grid-independent mesh for obtaining the solution variables in the further simulations.
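The text does not state the exact formula behind the 5% estimate. A minimal sketch is given below, assuming the common empirical correlation I ≈ 0.16 Re^(-1/8) for fully developed internal flow, with the Reynolds number built from the quantities listed above; the inlet area and gas viscosity used in the example are placeholders, not values reported in the paper.

```python
def turbulence_intensity(mass_flow, area, hydraulic_diameter, viscosity):
    """Estimate inlet turbulence intensity from bulk flow quantities.

    Uses the common empirical correlation I = 0.16 * Re^(-1/8) for fully
    developed internal flow; the paper does not state its exact method,
    so this is one plausible reconstruction, not the authors' code.
    """
    # Re = rho * U * D_h / mu = (m_dot / A) * D_h / mu, since m_dot = rho*U*A.
    reynolds = (mass_flow / area) * hydraulic_diameter / viscosity
    return 0.16 * reynolds ** (-1.0 / 8.0)

# Illustrative numbers only: the gas inlet area and viscosity below are
# placeholders, not values reported in the paper.
intensity = turbulence_intensity(
    mass_flow=32.72,          # kg/s, from the text
    area=0.5,                 # m^2, hypothetical
    hydraulic_diameter=0.4,   # m, hypothetical
    viscosity=5.0e-5,         # Pa*s, hypothetical for 1300 K combustion gas
)
print(f"estimated inlet turbulence intensity: {intensity:.1%}")
```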
In order to intuitively quantify the heat taken away during heat exchange in the cooling chamber, the flow rate temperature λ, the product of mass flow rate and temperature, is introduced as an indicator and is defined as

\lambda = \int_{A} \rho \, (\vec{v} \cdot \vec{n}) \, T \, \mathrm{d}A, \qquad (1)

where ρ is the density of the coolant and \vec{v} is the facet velocity on the selected field. The heat transfer (cooling) coefficient, which serves as an indicator of the performance of impingement cooling, can be defined as

\eta_i = \frac{\beta_i - \beta_s}{\beta_s}, \qquad (2)

where i is the number of the group and s represents the group with the smooth thermal surface. Also, β is defined as

\beta = \lambda_{\mathrm{out}} - \lambda_{\mathrm{in}}, \qquad (3)

where λ_out and λ_in are the flow rate temperatures on the outlet and inlet surfaces. In order to obtain the contours of temperature on the thermal surfaces and the distribution of turbulent kinetic energy in the coolant chamber, this study uses the control-volume method implemented in the commercial CFD code ANSYS FLUENT 18.0. The flows in these simplified models are steady, Newtonian, three-dimensional, incompressible and turbulent, and obey three fundamental laws: continuity and the conservation of momentum and energy. The realizable k-ε turbulence model with the enhanced wall function is chosen to simulate the flow behavior and the convective heat transfer enhancement on the biomimetic thermal surface. All runs were solved on a workstation with a sixteen-core i7 3.6 GHz CPU. A decrease of the mass residual by 85% within 2000 solver iterations is chosen as the convergence criterion.
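Read this way, the post-processing of Equations (1)-(3) reduces to a few array operations. The sketch below, written against placeholder facet data rather than simulation output, shows one plausible evaluation; the function names are our own.

```python
import numpy as np

def flow_rate_temperature(rho, v_normal, temperature, facet_area):
    """Equation (1): lambda = sum over facets of rho * (v.n) * T * dA."""
    return np.sum(rho * v_normal * temperature * facet_area)

def beta(lam_out, lam_in):
    """Equation (3): heat taken away, beta = lambda_out - lambda_in."""
    return lam_out - lam_in

def cooling_coefficient(beta_i, beta_s):
    """Equation (2): improvement of group i over the smooth-surface group s."""
    return (beta_i - beta_s) / beta_s

# Placeholder facet data for one outlet surface (not simulation output):
rho = 1.2                                   # kg/m^3
v_n = np.array([3.0, 3.2, 2.8])             # facet-normal velocities, m/s
T = np.array([480.0, 500.0, 520.0])         # facet temperatures, K
dA = np.array([1e-4, 1e-4, 1e-4])           # facet areas, m^2

lam_out = flow_rate_temperature(rho, v_n, T, dA)
print(f"outlet flow rate temperature: {lam_out:.4f} K*kg/s")
```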
Results & Discussion
In this section, the contours of temperature on the thermal surface and the flow behavior in the coolant chamber are presented in order to explain the heat transfer efficiency and the mechanism of heat transfer enhancement. Since the pressure at the inlet of the impingement cooling holes is controllable, the effect of the rib structure on the pressure loss between the inlet and outlet is not considered in this paper.
Comparison of Case 1
To investigate the influence of the streamwise distance from the cooling holes to the front of the first-row rib, contours of turbulence kinetic energy for the different distances are shown in Figure 4, which presents the process of cooling impingement. The results show that the maximum turbulent kinetic energy is reduced from 21,440 m²/s² to 20,830 m²/s² as the spacing increases from 20 mm to 60 mm across the three groups. Clearly, the collision of the coolant flow weakens and the turbulent kinetic energy at the front of the first-row rib decreases when the rib is positioned farther from the impingement cooling hole. However, the total area of turbulent kinetic energy on the thermal surface does not change with the streamwise distance.
According to the above formulas, the outlet flow rate temperature is calculated and shown in Table 2. Because the mass flow is conserved, the outlet flow rate temperature (λ_out) represents the heat taken away in the coolant chamber; in Table 2, a negative value indicates flow out of the cooling chamber. The results show that the outlet flow rate temperature is almost the same as the streamwise distance increases from 20 mm to 60 mm, so the quantity of heat exchanged is almost unaffected by the different spacings in the coolant chamber. The thermal protection of the transition piece is therefore not significantly improved, and it can be concluded that the streamwise distance is not a significant factor affecting the heat transfer coefficient.
Comparison of Case 2
To investigate the effect of the size of the rectangular ribs on the thermal surface, the width and height of the ribs are each set to 5 mm, 10 mm, and 15 mm in case 2. From these results, the best structure sizes can be found.
Reflection on Temperature
Based on the results of the numerical simulation, the flow rate temperature at the coolant inlet remains almost unchanged at 50 K·kg/s, but the outlet flow rate temperature varies between groups, as shown in Table 3. According to the conservation of mass, the larger the absolute value of λ_out, the more heat is taken away. As shown in Table 3, the simulation results indicate that the cooling efficiencies of the bionic surfaces are all clearly improved. In detail, when W = 5 mm and 10 mm, the best cooling efficiency occurs at H = 10 mm; however, when W = 15 mm, the case of H = 10 mm performs worst. The height and width of the rib are therefore both considered to strongly influence the cooling performance, which is why the third group has the best thermal protection effect while the sixth group is very close to it. Compared with the result for the smooth thermal surface, the best cooling efficiency is improved by 32.5%.
In order to study the thermal protection of the transition piece, comparisons of the temperature distribution on the thermal surfaces are shown in Figure 5. On the thermal surface without rib turbulators, the temperature below the injections is low but elsewhere is very high; in contrast, the temperature distribution on the bionic thermal surfaces with rib turbulators is uniform. In addition, the minimum temperature is found on the bionic heat transfer surface, and the data are tabulated in Table 3. The position of the lowest temperature point is mainly influenced by the height of the ribs. When H = 5 mm, the lowest temperature appears below the impingement cooling hole, no different from the smooth thermal surface; but when H = 15 mm, the lowest temperature is located on the upper rib facet and is closer to the hole. As shown in Table 3, the value of the lowest temperature declines as H increases. This is attributed to the upper rib facet being closer to the cooling hole as H increases, so that it is impinged directly by the air from the cooling holes.
Reflection on Velocity
To explain the mechanism of heat transfer enhancement, it is essential to analyze the flow characteristics over the thermal surface in the simplified two-chamber model. Figure 6 provides detailed flow characteristics above and near the rib turbulator arrangement. The result on the section in the XZ plane (Y = 0 in Figure 2a) indicates that the fluid vectors are altered by the convex surface, which means that the distribution of the fluid has been changed. Most research points out that vortices improve convective heat transfer at the fluid-solid surface. Two vortices can be seen in the cooling chamber in Figure 6, induced by the double-rib structures: one is generated above the rib, and the other is located between the two ribs. The rib height H is perceived to be the main influence factor. When H = 5 mm, ample space remains over the rib for the vortex to grow.
However, when H increases, the vortex is squeezed; the larger H is, the flatter the vortex becomes. Simultaneously, a change of W can also affect the velocity and the generation of vortices, which indirectly influences the heat transfer enhancement of the convex surface. An overlarge W would narrow the space between two ribs, which may produce a heat-obstructing effect, like a ball bearing, or leave vortices hardly able to exist. Similarly, vortices cannot effectively cool the whole space if W is too small. It is worth noting that an oversized H may leave no vortex between the ribs. In Figure 6, the result of group 4 (10 × 15) shows almost no vortex in the space between the double ribs and the worst result, opposite to all the other results with a non-smooth surface.
Reflection on Turbulence Kinetic Energy
Figure 7 shows contours of the turbulence kinetic energy generated in the double-chamber model. The turbulent kinetic energy is clearly higher at the front of the first-row rib because of reflux colliding with the inlet impinging coolant. A larger W leads to a shorter distance from the impinging hole to the front surface of the first row, which collects more kinetic energy at the corner of the rib. A similar phenomenon can be produced by increasing H, but it may weaken the spreading of energy in the streamwise direction. Turbulence kinetic energy is observed to be affected less by the second-row rib, since the velocity of the coolant at the entrance is very large and the first-row rib is very close to the coolant air column.
Conclusions
In this study, the convective heat transfer and coolant flow characteristics on the biomimetic thermal surface were investigated using the realizable k − ε model. The effect of the position of the rib turbulators on the thermal surface is discussed, and the height and width of the rib turbulators are also studied by comparison. The major findings of the present study are summarized below:
1. The biomimetic thermal surface inspired by Alopias' branchial arch can improve jet impingement cooling.
2. The effect of the streamwise distance from the holes to the first-row rib was studied on the biomimetic surface. The outlet flow rate temperature remains almost the same as the streamwise distance increases from 20 mm to 60 mm, so the streamwise distance is not a significant factor.
3. Because the coolant airflow is ejected into the cooling chamber at high speed, it is strongly affected by the rib turbulators. The simulation results show that the best rib turbulator size improves the heat transfer efficiency by 32.5% compared with the smooth thermal surface.
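As a rough illustration of how such an efficiency comparison can be computed from the outlet and inlet flow rate temperatures (T_out, T_in), the sketch below evaluates a relative improvement metric; the definition of the metric and the temperature values are assumptions for illustration, not the paper's exact formula or data.

```python
# Hypothetical comparison of ribbed vs smooth surfaces. The heat picked up
# by the coolant is taken as proportional to its temperature rise
# (T_out - T_in); this definition is an assumption for illustration.

def relative_improvement(t_out_smooth: float, t_out_rib: float, t_in: float) -> float:
    """Relative gain in heat extracted by the coolant vs the smooth baseline."""
    q_smooth = t_out_smooth - t_in
    q_rib = t_out_rib - t_in
    return (q_rib - q_smooth) / q_smooth

# Illustrative temperatures in kelvin (not taken from Table 2 or Table 3):
print(f"{relative_improvement(t_out_smooth=400.0, t_out_rib=432.5, t_in=300.0):.1%}")
# -> 32.5%, the same order as the reported improvement
```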
The above conclusions and the accompanying flow-characteristics analysis provide new insight into bionic structural design for heat transfer, especially for gas turbines and manufacturing machines that face similarly severe working conditions. We hope that researchers in different fields will pursue this direction in greater depth.
Figure 2. (a) Picture of the structure of simplified rectangular gas turbine transition piece; (b) Picture of the section of the coolant chamber with two ribs (the blue section in picture (a)).
T_out and T_in are the flow rate temperatures on the outlet and the inlet surfaces.
Figure 3. Validity of the grid number for air weighted average temperature.
Figure 4. Contours of turbulence kinetic energy for different distances.
Figure 5. Comparisons of temperature distribution on the thermal surfaces.
Figure 6. Cutaway views of velocity in the coolant chamber.
Figure 7. Contours of turbulence kinetic energy for different size ribs in the coolant chamber.
Table 2. Data of the outlet flow rate temperature on the biomimetic thermal surface in case 1.
Table 3. Data of flow rate temperature and cooling efficiency on the thermal surface in case 2.
|
v3-fos-license
|
2020-08-06T09:08:41.200Z
|
2020-01-01T00:00:00.000
|
226213696
|
{
"extfieldsofstudy": [
"Business"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.5267/j.msl.2020.7.036",
"pdf_hash": "1e921aec6f06217aa037d37c10eb211bc3584d2f",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41940",
"s2fieldsofstudy": [
"Business"
],
"sha1": "7c5d7ee38acc6a9e08d155c586334a914fa6015a",
"year": 2020
}
|
pes2o/s2orc
|
The role of organizational commitment in the relationship between human resource management practices and competitive advantage in Jordanian private universities
Article history: Received: June 26, 2020 Received in revised format: June 3
Introduction
In the competitive economy and the information age, intellectual capital is the real capital of organizations: it is the key factor in innovation, change and creativity, transforming knowledge into value and then into a competitive advantage. The centre of gravity in value generation has thus shifted from exploiting natural (material) resources to exploiting intellectual (intangible) assets, and from the law of diminishing returns (which applies to material goods) to the law of increasing returns (which applies to knowledge and ideas). With this resource, an organization can reach the highest possible performance; without it, it cannot achieve any goals, since it holds energies and capabilities that other resources lack. All of this requires an appropriate management function concerned with human resources, their affairs, needs and desires. The role of this department lies in creating the appropriate environment, caring for the human element and motivating it to make the greatest possible effort, thereby improving the performance of the organization through policies, technologies and programs that serve both the organization and the individual (Amos, Pearse, Ristow, & Ristow, 2016). Recently, organizational commitment has received clear interest in the field of management because of its relationship with the effectiveness of the organization and the degree of work completion. Organizational commitment reflects the individual's orientation towards the organization and includes a strong desire to remain a member; it appears in the worker's readiness to make additional efforts at the workplace. Therefore, individuals committed to their organizations are a source of strength that aids their survival and helps them compete with other organizations (Jafri, 2010).
Jordanian universities are witnessing accelerated quantitative and qualitative development in the field of higher education: their numbers have increased, their programs and specializations have diversified, and their educational techniques have developed. Universities are among the most important institutions of intellectual capital; they are responsible for producing knowledge through scientific research, transferring knowledge through teaching, and then spreading and marketing it through the university's third function, serving society and developing the environment.
Universities are among the organizations whose success depends on qualified, well-trained human cadres able to serve the market, given the intense competition this type of organization faces today and its dependence on international quality standards and advanced technology in its various activities. Despite this, Jordanian universities face several challenges that hinder the investment of intellectual capital and weaken their ability to possess competitive advantages over other universities. It is therefore necessary to ensure the availability of competencies able to carry the burden of building the educational sector, and to ensure that workers in these universities hold qualifications that enable them to achieve a competitive advantage. Allui and Sahni (2016) emphasize that developments in globalization, international competition, innovation and technology have shown the importance of human resource management for achieving competitive advantage, since human capital is the most prominent factor and the only one capable of delivering a sustainable competitive advantage in today's changing world, being more important than new technologies or financial and material resources. The changing nature of the work environment, especially technological and organizational developments and competitiveness, means organizations must continuously attract, retain and adequately prepare qualified personnel for the labour market. Strategic human resource management and talent management have therefore become increasingly important concepts, because in the knowledge-based economies of the twenty-first century the focus is on workers, who constitute a key element in achieving competitive advantage.
Most universities now operate in a complex, dynamic and highly competitive global environment, alongside trends of globalization, increasing academic mobility, and growing interest in academic talent across many disciplines as well as administrative talent. Universities face special challenges amid many changes in human resource management strategies, such as individual performance systems for employees. Universities must therefore move to a more professional approach to employee management that is more objective, fair and transparent in evaluating performance, focuses on employing talent, and uses performance indicators that give university chiefs and human resources managers the opportunity to choose talented employees for their institutions (Brink, Fruytier and Thunnissen, 2013, p. 181).
Through this, the role of universities emerges in how they select, train and motivate employees, and in the impact this has on employee commitment. Each university has goals to achieve through the work of its employees, who possess certain qualifications and capabilities; by exerting their efforts to achieve the university's goals, they in return obtain benefits that satisfy their needs. The relationship between workers and the university is therefore one of integration, in which the university provides an appropriate organizational climate and contributes to satisfying workers' various needs.
In this research, we identify the importance of the practices carried out by human resources management, which guarantee Jordanian educational institutions effective management of their human resources, and the effect of this on achieving a competitive advantage for these institutions through organizational commitment, thereby activating their role in the national economy. We clarify the role that human resource management practices can play in Jordanian universities in dealing with human resources in a way that exploits their energies and capabilities to achieve the universities' goals.
Theoretical framework and hypotheses development
Human resource management is the primary element in organizing the relationship between the organization and its employees, through its continuous pursuit of the organization's goals by creating and building an effective workforce able to achieve them; this requires setting a clear plan of effective practices shared between human resources management and top-level management (Salau, Oludayo, & Oni-Ojo, 2015).
When talking about individuals, we mean the human elements available to the organization: all workers regardless of the nature of their work, whether permanent or temporary, and whatever their job position, whether heads or subordinates. Individuals are thus seen as an organization's most important resource, such that the efficiency and effectiveness of the organization depend on the efficiency of this element. Indeed, many experts and practitioners in the field of management hold that achieving a competitive advantage in modern organizations rests not only on possessing natural, financial or technological resources but primarily on the ability to provide special types of individuals who enable the organization to maximize the benefit from its remaining resources (Dessler & Varrkey, 2005).
Because humans differ, and each possesses special capabilities and characteristics that distinguish him or her from others, individual differences are largely responsible for differences in organizational competitiveness: they drive the differences in the best practices that are applied, and best practices are the primary engine of competition, an essential source of difference among organizations, and the one thing competitors cannot easily imitate or copy. Hence the role of human resources lies in carrying out its duties fully through serious, strategic practices far removed from routine and imitation (Ungan, 2002).
Some researchers point out that human resource practices can be described as a set of activities that put human resource strategies into practice, with the goal of improving performance, enhancing human resource capabilities and skills, and thus achieving the strategic goals of the organization (David & Blandine, 2009). Some researchers identify these practices as human resource planning, work analysis and design, employment, training and development, compensation, and performance evaluation, while others limit them to planning and functional analysis, employment, training, performance evaluation, and compensation (Fota & Al-Qutb, 2013).
Despite this disparity and disagreement about what the practices of human resource management are, together they constitute an integrated and interactive system, with exchange and complementary relations among them; decisions taken in each practice complement one another, and all work towards the central goal of human resource management: to provide a qualified, trained and motivated workforce, and an environment with a high level of productivity and organizational effectiveness, able to achieve and implement the organization's strategy. The most important of these practices, which this study adopts, are as follows:
HR planning
HR planning is the set of integrated policies related to employment, which aims to identify and provide the numbers and types of human resources required to perform certain work at specific times and at an appropriate cost, taking into account the productive goals of the organization and the factors affecting it (Anthony & Perrewe, 2009). It refers to the procedures and practices that indicate proficiency in selecting and benefiting from employees in order to achieve the goals of the university within the specified costs.
Recruitment
Recruitment is defined as the process of searching for, selecting and soliciting distinguished individuals to fill vacancies, attracting a sufficient pool of them, and selecting the best among them to form the base from which the institution's administrative and executive structure is built (Wouter Jan Van Muiswinkel, 2013). It refers to the process of selecting individuals with the necessary and appropriate qualifications to occupy certain positions at the university, which makes the university competitive.
Training
Training is defined as "the continuous activity to provide workers with the skills, experience, and behaviors that enable them to perform their work efficiently and effectively to serve the goals of the organization" (Novkovska, 2013). Training is an organized, planned and purposeful process through which the performance of human resources improves, which is reflected in the development and improvement of university outcomes and enables the university to achieve a competitive advantage.
Performance evaluation
Performance evaluation is the study and analysis of workers' performance and the observation of their behavior at work, in order to judge their success and level of competence in their current jobs, as well as their potential for growth and progress, for bearing greater responsibilities, or for promotion to another job. In this study, performance evaluation refers to determining the efficiency of workers and the extent of their contribution to the work assigned to them, as well as judging their behavior and progress during work at all levels of the university, from senior management (the Chancellor) down to workers in small departments.
Several studies have examined the impact of human resource practices and strategies on organizations and the factors influencing organizations' achievement of competitive advantage, but these studies have not addressed a very important topic: the role of workers' organizational commitment as a mediating variable in achieving this advantage in private Jordanian universities, which would enhance these universities' ability to survive amid the extreme competition the sector faces. Among these studies, Al-qadi (2012) addressed the impact of strategic human resource management practices on the performance of private universities in Jordan; the study concluded that workers are not given an opportunity to participate in decision-making, that the compensation system does not match workers' expectations in private universities in Jordan, and that workers do not participate with the human resources manager in recruitment and appointment. The Fayoumi study (2010) aimed to reveal the impact of intangible assets (human capital, organizational capital, and information capital) on achieving competitive advantage under total quality management standards in Jordanian public and private universities; it found a significant impact of total quality management standards on achieving competitive advantage, and a significant impact of intangible assets on achieving excellence based on adopting those standards in public and private universities.
Al-Shammari (2014) examined the degree of availability of intellectual capital and its relationship to achieving competitive advantage in Kuwaiti private universities from the faculty members' viewpoint. The results indicated a statistically significant relationship between the degree of availability of intellectual capital, in the areas of human capital, customer capital and operational capital, and the degree of achieving competitive advantage in Kuwaiti private universities; they also showed that the degree of achieving competitive advantage in these universities, from the faculty members' viewpoint, was medium. Essanya (2015) investigated competitive strategies at Nairobi Aviation College, using a case study method to obtain qualitative and quantitative data, and found that the college adopted all competitiveness strategies, including cost management, service differentiation, expansion, and marketing. Al-Saleh (2012) aimed to identify the concepts, areas and strategies for building competitive advantage in Saudi public universities and the most important requirements of each area; the study concluded that council members are aware of the concept of competitive advantage at a very high level, and that the most important areas for building competitive advantage in universities are scientific research, education, technology, and knowledge production. Our review of previous studies on the importance and impact of human resources, intellectual and human capital, and competitive advantage shows that any university must manage these practices intelligently in order to increase human resource productivity and improve the quality of outputs. The various aspects of these practices, such as selection, training, promotion, transfer, incentive systems, and performance evaluation, should be carried out effectively so as to make full use of resources, for human capital is the university's true wealth: material assets erode over time and their market value decreases, while intellectual assets are the only basis for building and developing competitive capabilities, adding value to the university, and achieving its competitive balance.
Regarding workers' organizational commitment, it plays an important role in achieving competitive advantage, as many studies have indicated. Among them, Al-Khushroom (2011) aimed to identify the level of organizational commitment among workers in the technical institutes of the University of Aleppo, to determine the impact of the service climate on workers' organizational commitment, and to test the effect of job satisfaction as a mediating variable in the relationship between service climate and organizational commitment. The study found that the level of organizational commitment of employees was high, and that job satisfaction as a mediating variable significantly affected the relationship between service climate and organizational commitment. Based on the results of previous studies, the study hypotheses can be formulated as follows:

The first main hypothesis: There is a statistically significant effect of human resource management practices on the competitive advantage in Jordanian private universities.
The second main hypothesis: There is a statistically significant effect of human resource management practices on organizational commitment in Jordanian private universities.
The third main hypothesis: There is a statistically significant effect of organizational commitment on the competitive advantage in Jordanian private universities.
The fourth main hypothesis: There is a statistically significant effect of organizational commitment on the relationship between human resource management practices and the competitive advantage in private Jordanian universities.
Proposed research model: human resource management practices → organizational commitment → competitive advantage.
Research Respondents
The study population consisted of all 22 private universities in Jordan, according to the statistics of the Jordanian Ministry of Higher Education (2019). The study sample included 10 universities distributed across Amman, Irbid, Jerash, Zarqa, and Al-Balqa, as shown in Table 1.
Measuring Instrument
The questionnaire was used as the data collection tool. It was designed and developed specifically for the objectives of this study, which aims to measure the impact of human resource management practices on competitive advantage through organizational commitment as a mediating variable in Jordanian private universities. The questionnaire was based on the study literature and previous studies and was divided into four sections, as follows. Demographic data: the personal data of the respondent (gender, age, educational qualification, job grade, years of experience).
Human resource management practices: this research adopts the HRM practices of human resource planning, recruitment, training, and performance evaluation. Human resource planning was measured using the scale of Guest and Conway (2011); recruitment using the Mahmood, Iqbal, and Sahu (2014) scale; training using the scale of Asad and Mahfod (2015); and performance evaluation using the Salau et al. (2015) scale.
Competitive Advantage: this construct was measured based on the Gituku and Kagiri (2015) scale.
Organizational commitment: this section was developed based on the Antony (2013) scale.
Answers were recorded on a five-point Likert scale: strongly agree (5), agree (4), agree to an average degree (3), disagree (2), and strongly disagree (1).
Analytical method
To analyse the data collected in line with the purposes of the study and the measurement of its variables, the following statistical methods were used: descriptive analysis, and structural equation modeling with partial least squares (PLS-SEM).
Descriptive analysis
Based on Table 1, the largest proportion of the study sample was male (85.3%), with females constituting 14.6%. This result can be explained by the fact that males tend to complete higher studies more than females, who often marry after finishing their first university degree. As for age, the largest proportion of the sample (46.55%) fell in the 40 to less-than-50 category; this can be explained by the long study period required to obtain a doctorate, the additional time needed to find a job, and universities' requirement of prior experience before employment. The age group under 30 reached only 5.17% of the sample, a low percentage, since members of this group are often still at the postgraduate stage, the work-experience requirement limits their numbers, and the high costs of study delay young people's decisions to pursue postgraduate studies. With regard to educational qualification, the largest proportion of the sample held a PhD (88.79%), while the smallest held a bachelor's degree (4.31%); this reflects the fact that the nature of work in these universities requires PhD holders, with the first university degree needed only for some administrative work. It is also noted that the job title of department manager reached 46.98%, largely as a result of the organizational structure of universities.

The present study subjected the gathered data to statistical analysis using structural equation modeling (SEM) with partial least squares (PLS). PLS-SEM appeared to be the most suitable method for an exploratory study, modeling reflective and formative constructs alike (Hair Jr et al., 2017). PLS-SEM is also known for its flexibility with both theory and practice (Richter et al., 2016); therefore, to make sure that the data fit the proposed theory, the researchers conducted a thorough evaluation of the measurement models as suggested by Alfoqahaa (2013), Barclay, Higgings, and Thompson (1995), Chin (1998), and Compeau, Higgins, and Huff (1999). More specifically, this study employed a bootstrapping procedure with 5,000 sub-samples to confirm the statistical significance of the path coefficients in the model, following Al-Shbiel et al. (2018) and Hernández-Perlines, Moreno-García, and Yañez-Araque (2016).
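To make the resampling logic concrete, the following is a minimal Python sketch of bootstrapping a path coefficient with 5,000 resamples over 207 cases; real PLS-SEM analyses use dedicated software (e.g., SmartPLS), and the simulated data and the simple standardized bivariate path below are illustrative assumptions.

```python
# Minimal sketch of the bootstrap used to test path significance.
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_path(x: np.ndarray, y: np.ndarray, n_boot: int = 5000):
    """Bootstrap a simple standardized path coefficient with a 95% CI."""
    n = len(x)
    coefs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)           # resample cases with replacement
        coefs[b] = np.corrcoef(x[idx], y[idx])[0, 1]  # standardized path proxy
    lo, hi = np.percentile(coefs, [2.5, 97.5])
    return coefs.mean(), (lo, hi)

# Synthetic stand-ins for HRMP and commitment scores (207 cases, as here):
x = rng.normal(size=207)
y = 0.4 * x + rng.normal(scale=0.9, size=207)
beta, ci = bootstrap_path(x, y)
print(f"path = {beta:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```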
The study also analysed the obtained data using PLS to confirm the research model's nomological validity, following a two-step analytical method. In the first step, the study evaluated the measurement model to confirm the validity and reliability of its measures; in the second, it evaluated the structural model to test the hypothesized relationships among the variables in terms of their strength and direction. The psychometric properties of all scales in the structural model were assessed using discriminant validity and reliability tests.
Measurement model
This study established the internal consistency of the constructs using Cronbach's alpha and composite reliability indicators, which, based on the rule of thumb provided by Hair Jr et al. (2017), should lie between 0.7 and 0.9. As for convergent validity, the average variance extracted (AVE) values have to reach at least 0.50 to indicate that the construct explains 50% of the variance of its indicators, and the factor loadings of the measurement items, according to Hair et al. (2017), should exceed 0.70. Table 1 tabulates the Cronbach's alpha values, composite reliability values, factor loadings and AVE values; all met the threshold criteria. Discriminant validity refers to the degree to which the measures of the constructs are distinct from each other; for this, the study used the heterotrait-monotrait (HTMT) ratio of correlations as well as Fornell and Larcker's (1981) criterion. The latter assumes that a construct with a sufficient level of discriminant validity shares more variance with its own indicators than with other model constructs; stated clearly, the square root of the AVE should be higher than the construct's correlations with the other model constructs (Hair, Ringle, & Sarstedt, 2011). Meanwhile, HTMT is the average of the heterotrait-heteromethod correlations (the correlations of indicators across constructs measuring different phenomena) relative to the average of the monotrait-heteromethod correlations (the correlations of indicators within a single construct). The value of HTMT should be lower than one, ideally below 0.85, for two factors to be distinct (Al Shbail, Salleh, & Mohd Nor, 2018; M. O. Al Shbail, Salleh, & Nor, 2018; Henseler, Hubona, & Ray, 2016; Henseler, Ringle, & Sarstedt, 2015). In this study, the variables met both the Fornell-Larcker criterion and the HTMT criterion: each AVE square root is higher than the correlations among the constructs with reflective items, and the HTMT ratios for each pair are below 0.85, as presented in Table 3. Therefore, all constructs are independent of each other, and the results confirmed discriminant validity.
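As a hedged illustration of these checks, the sketch below computes AVE from standardized loadings and applies the Fornell-Larcker comparison; the loadings and inter-construct correlations are made-up values for demonstration, not the figures from Table 1 or Table 3.

```python
# Convergent validity (AVE) and Fornell-Larcker discriminant validity check.
import numpy as np

def ave(loadings) -> float:
    """Average variance extracted = mean of squared standardized loadings."""
    l = np.asarray(loadings)
    return float(np.mean(l ** 2))

# Illustrative loadings per construct (assumed, not the paper's values):
loadings = {
    "HRMP": [0.78, 0.81, 0.74, 0.80],
    "Commitment": [0.83, 0.79, 0.76],
    "Advantage": [0.75, 0.82, 0.77],
}
# Illustrative inter-construct correlations (assumed):
corr = {("HRMP", "Commitment"): 0.42, ("HRMP", "Advantage"): 0.31,
        ("Commitment", "Advantage"): 0.45}

sqrt_ave = {c: ave(l) ** 0.5 for c, l in loadings.items()}
for (a, b), r in corr.items():
    ok = sqrt_ave[a] > abs(r) and sqrt_ave[b] > abs(r)
    print(f"{a}-{b}: r={r:.2f}, sqrt(AVE)={sqrt_ave[a]:.2f}/{sqrt_ave[b]:.2f}, "
          f"Fornell-Larcker {'passes' if ok else 'fails'}")
```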
Structural model
Before the hypothesized relationships were tested, the study assessed the inner model by obtaining the predictive relevance (Q²), effect size (f²), standardized root mean square residual (SRMR) and normed fit index (NFI). In particular, the blindfolding procedure, coupled with cross-validated redundancy, is generally used to show that each Q² value is higher than 0; here, the Q² values were 0.107 for organizational commitment and 0.251 for competitive advantage. Turning to Cohen's effect size f² (Cohen, 1988), all the values tabulated in Table 3 for significant paths exceeded the recommended value. Meanwhile, the SRMR of the model was 0.067, below the 0.08 cut-off (Henseler et al., 2016), and the model's NFI value of 0.91 was deemed acceptable (NFI > 0.90), as recommended by Henseler et al. (2016) and presented in Table 3.
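For readers unfamiliar with Cohen's f², the short sketch below shows the usual computation, f² = (R²_included − R²_excluded) / (1 − R²_included); the R² of 0.415 is the value reported below for competitive advantage, while the reduced R² of 0.30 is an assumed figure for illustration.

```python
# Cohen's effect size f^2: change in explained variance when a predictor
# is dropped, scaled by the unexplained variance of the full model.
def cohens_f2(r2_included: float, r2_excluded: float) -> float:
    return (r2_included - r2_excluded) / (1.0 - r2_included)

# Full model R^2 = 0.415 (reported); reduced R^2 = 0.30 (assumed)
print(f"f2 = {cohens_f2(0.415, 0.30):.3f}")  # ~0.197, a medium effect size
```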
Following Hair et al.'s (2017) suggestion, the study used a bootstrapping procedure to confirm the significance levels of the path coefficients, with 5,000 bootstrap samples, 207 cases, and no sign changes. Table 4 tabulates the path coefficients, t-statistics, significance levels, p-values and 95% bootstrap confidence intervals. From the results, HRMP had a positive and significant influence on both organizational commitment and competitive advantage (β = 0.419 and β = 0.314 respectively, p < 0.05). In addition, organizational commitment had a positive influence on competitive advantage (β = 0.446, p < 0.001; see Fig. 3), and the model explained 17.6% of the variance in organizational commitment and 41.5% of the variance in competitive advantage.
Fig. 3. Proposed model coefficient paths
The next step involved assessing the significance of the indirect effect, which called for SEM-based assessment of the proposed model, enabling evaluation of the relationships among the model's variables and testing of the proposed hypotheses. The mediating role of organizational commitment was assessed using Preacher and Hayes' (2008) method, as it has been extensively used in recent empirical studies and in established PLS guidelines (e.g., Hair et al., 2017; Nitzl, Roldan & Cepeda, 2016).
The method involved including the mediating variable (organizational commitment) in the model, after which the computations were made. Based on the results, HRMP had a positive and significant influence on organizational commitment (β = 0.419, t = 4.504), and organizational commitment had a positive and significant influence on competitive advantage (β = 0.446, t = 3.667), supporting hypotheses H2 and H3 respectively. The indirect effect was positive and significant (0.419 × 0.446 = 0.187, t = 2.565, p < 0.05, based on t(4999), one-tailed test), supporting hypothesis H4. Significance was evaluated using confidence intervals (CI); for a result to be significant, the interval must not contain 0 (refer to Table 5). The results also indicated a positive and significant direct influence of HRMP on competitive advantage (β = 0.314, t = 3.096), supporting hypothesis H1. After organizational commitment was included in the model as the mediating variable, the path coefficient between HRMP and competitive advantage decreased, indicating that organizational commitment absorbs part of the HRMP effect on competitive advantage (Hair et al., 2017). Moreover, the VAF value, calculated by dividing the indirect effect (0.187) by the total effect (0.501), equals 0.373 (37.3%), which falls within the 20-80% range; this indicates partial mediation according to Hair et al. (2017).
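The mediation arithmetic above can be reproduced directly from the reported coefficients; the short sketch below uses only the paper's own values, with no assumed data.

```python
# Reproducing the reported mediation arithmetic.
indirect = 0.419 * 0.446   # a-path (HRMP -> commitment) * b-path (commitment -> advantage)
direct = 0.314             # direct path: HRMP -> competitive advantage
total = direct + indirect
vaf = indirect / total     # variance accounted for
print(f"indirect = {indirect:.3f}, total = {total:.3f}, VAF = {vaf:.1%}")
# indirect = 0.187, total = 0.501, VAF = 37.3% -> partial mediation (20-80% band)
```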
Conclusions
Employees with organizational commitment contribute to enhancing competitive advantage. Jordanian universities must therefore strengthen human resource management in ways that build on employees' organizational commitment. Superiors must support employees and work with them to meet organizational goals and objectives; employees and managers alike need good cooperative skills and must work in the field as a team to overcome problems and increase commitment. Compatibility among human resource management practices is very important for increasing employee commitment. Managers must understand and support commitment for successful strategy implementation, insisting that careful attention be paid to organizational commitment as a first-rate priority of the organization's strategy, and they must personally lead the implementation and execution of human resource management practices. Developing strategic objectives, linking the motivation and reward structure directly to results, and establishing policies and procedures for the proper implementation of strategies are crucial for the organization.
The findings indicate that since human resource management practices are a strong driver of organizational commitment, attention should be given to the benefits of these practices and to their evaluation within the organization. Since the analysis was based on the perceptions of low- and middle-level managers, who are themselves employees of the organization, top management should share experience and knowledge with subordinates to improve commitment and competitive advantage.
The main implication for managers is that human resource management practices must be aligned so as to increase employee commitment. From the validation of the framework, managers should ensure that human resource management practices serve both increasing competitive advantage and the organizational objectives. Operating in both stable and dynamic environments, managers and employees need to work as a team to develop new knowledge for sustainable competitive advantage. At the organizational level, managers must understand the different practices that favour both the firm and the employees. Efforts should therefore be undertaken through further case studies linking the manager-employee relationship to the importance of human resource management practices for commitment in the higher education sector. Future research, instead of limiting the survey to top-level management, should gather the opinions of employees across higher education and compare the results.
|
v3-fos-license
|
2020-08-13T10:10:23.459Z
|
2020-08-12T00:00:00.000
|
225466840
|
{
"extfieldsofstudy": [
"Psychology"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://www.emerald.com/insight/content/doi/10.1108/IJOA-05-2020-2204/full/pdf?title=employee-psychological-well-being-and-job-performance-exploring-mediating-and-moderating-mechanisms",
"pdf_hash": "edb5339966843d6333ddfd88320da90895a182ca",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41945",
"s2fieldsofstudy": [
"Business",
"Psychology"
],
"sha1": "f43c2d2d19374e1ef497941db0207efe9baaf68d",
"year": 2020
}
|
pes2o/s2orc
|
Employee psychological well-being and job performance: exploring mediating and moderating mechanisms
Purpose – Given the importance of employee psychological well-being to job performance, this study aims to investigate the mediating role of affective commitment between psychological well-being and job performance while considering the moderating role of job insecurity on the psychological well-being and affective commitment relationship. Design/methodology/approach – The data were gathered from employees working in cellular companies of Pakistan using paper-and-pencil surveys. A total of 280 responses were received. Hypotheses were tested using the structural equation modeling technique and Hayes's Model 1. Findings – Findings suggest that affective commitment mediates the association between psychological well-being (hedonic and eudaimonic) and employee job performance. In addition, perceived job insecurity buffers the association of psychological well-being (hedonic and eudaimonic) and affective commitment. Practical implications – The study results suggest that fostering employee psychological well-being may be advantageous for the organization. However, if interventions aimed at ensuring job security are not made, it may result in adverse employee work-related attitudes and behaviors. Originality/value – The study extends the current literature on employee well-being in two ways: first, by examining psychological well-being in terms of hedonic and eudaimonic well-being in relation to employee work outcomes; and second, by demonstrating the moderating role of perceived job insecurity in these relationships.
Introduction
Does employee well-being have important implications both at work and for other aspects of employees' lives? Of course! For years, we have known that well-being impacts life at work, and a plethora of research has examined its impact on work outcomes (Karapinar et al., 2019; Turban and Yan, 2016). What is less understood is how employee well-being impacts job performance. Evidence suggests that employee health and well-being are among the most critical factors for organizational success and performance (Bakker et al., 2019; Turban and Yan, 2016). Several studies have documented that employee well-being leads to various individual and organizational outcomes, such as increased organizational performance and productivity (Hewett et al., 2018), customer satisfaction (Sharma et al., 2016), employee engagement (Tisu et al., 2020) and organizational citizenship behavior (OCB; Mousa et al., 2020).
An organization's performance and productivity are tied to the performance of its employees (Shin and Konrad, 2017). Much evidence has shown the value of employee job performance (i.e. the measurable actions, behaviors and outcomes that employees engage in or bring about which are linked with and contribute to organizational goals; Viswesvaran and Ones, 2017) for organizational outcomes and success (Al Hammadi and Hussain, 2019; Shin and Konrad, 2017), which, in turn, has led scholars to seek to understand what drives employee performance. Personality traits (Tisu et al., 2020), job conditions and organizational characteristics (Diamantidis and Chatzoglou, 2019) have all been identified as critical antecedents of employee job performance.
However, one important gap remains in current job performance research, namely, the role of psychological well-being in job performance (Hewett et al., 2018). Although previous research has found happy workers to be more productive than less happy or unhappy workers (DiMaria et al., 2020), a search of the literature revealed few studies on the psychological well-being and job performance relationship (Salgado et al., 2019; Turban and Yan, 2016). Also, very little is known about the processes that link psychological well-being to job performance: only a narrow spectrum of well-being-related antecedents of employee performance has been considered, especially in terms of psychological well-being. To enrich our understanding of the consequences and processes of psychological well-being in the workplace, the present study examines the relationship between psychological well-being and job performance in the workplace setting. Such knowledge will not only help managers attain higher organizational performance during uncertain times but will also uncover how to keep employees happy and satisfied (DiMaria et al., 2020).
Crucially, to advance job performance research, more work is needed to examine the relationship between employees' psychological well-being and their job performance (Ismail et al., 2019). As Salgado et al. (2019) elaborated, we need to consider how employees' well-being affects their performance at work. In an attempt to fill this gap in the literature, the present study seeks to advance job performance research by linking one's psychological well-being, in terms of hedonic and eudaimonic well-being, to one's job performance. Hedonic well-being refers to the happiness achieved through experiences of pleasure and enjoyment, while eudaimonic well-being refers to the happiness achieved through experiences of meaning and purpose (Huta, 2016; Rahmani et al., 2018). We argue that employees with high levels of psychological well-being will perform better than those with lower levels of psychological well-being. We connect this psychological well-being-job performance process through employee affective commitment (employees' perceptions of their emotional attachment to or identification with their organization; Allen and Meyer, 1996) by treating it as a mediating variable in the well-being-performance relationship.
Additionally, we examine the moderating role of perceived job insecurity in the well-being-performance relationship. Perceived job insecurity has been defined as the perception of being threatened by job loss, or an overall concern about the continued existence of the job in the future. There is evidence that perceived job insecurity diminishes employees' level of satisfaction and happiness and may lead to adverse job-related outcomes such as decreased work engagement (Karatepe et al., 2020), deviant behavior (Soomro et al., 2020) and reduced employee performance (Piccoli et al., 2017). Thus, addressing the gap mentioned above, this study has two objectives. The first is to examine how the path between psychological well-being and job performance is mediated through employee affective commitment; the reason for inquiring into this path is that well-being is associated with employees' happiness, pleasure and personal growth (Ismail et al., 2019), so the higher the well-being, the higher the employees' affective commitment, which in turn leads to enhanced job performance. The second objective is to empirically test the moderating effect of perceived job insecurity on employees' emotional attachment to their organizations; we propose that higher job insecurity may reduce employees' well-being, and that their interaction may lower employees' emotional attachment to their organization.
The present study brings together the employee well-being and performance literatures and contributes to these research areas in two ways. First, we contribute to this line of inquiry by investigating the direct and indirect crossover from hedonic well-being and eudaimonic well-being to employees' job performance; we propose that psychological well-being (hedonic and eudaimonic) influences job performance through employee affective commitment. Second, prior research shows that the effect of well-being varies across individuals, indicating the presence of possible moderators influencing the relationship between employee well-being and job outcomes (Lee, 2019). We therefore extend the previous literature by proposing and demonstrating the general possibility that perceived job insecurity might moderate the relationship of psychological well-being (hedonic and eudaimonic) and affective commitment. While there is evidence that perceived job insecurity influences employees' affective commitment (Schumacher et al., 2016), what is not yet clear is the impact of perceived job insecurity on the psychological well-being-affective commitment relationship. The proposed research model is depicted in Figure 1.

Hypotheses development

Psychological well-being and affective commitment

Well-being is a broad concept that refers to individuals' valued experience (Bandura, 1986) in which they become more effective in their work and other activities (Huang et al., 2016). According to Diener (2009), well-being is a subjective term that describes people's happiness, fulfillment of wishes, satisfaction, abilities and task accomplishments. Employee well-being is further categorized into two types, namely, hedonic well-being and eudaimonic well-being (Ballesteros-Leiva et al., 2017). Compton et al. (1996) investigated 18 scales that assess employee well-being and found that all the scales fall into two broad categories: subjective well-being and personal growth. The former is referred to as hedonic well-being (Ryan and Deci, 2000), whereas the latter is referred to as eudaimonic well-being (Waterman, 1993). Hedonic well-being is based on people's cognitive component (i.e. people's conscious assessment of all aspects of their life; Diener et al., 1985) and affective component (i.e. people's feelings resulting from positive or negative emotions experienced in reaction to life; Ballesteros-Leiva et al., 2017). In contrast, eudaimonic well-being describes people's true nature and the realization of their actual potential (Waterman, 1993); it corresponds to a happy life based upon one's self-reliance and self-truth (Ballesteros-Leiva et al., 2017). Diener et al. (1985) argued that hedonic well-being centres on happiness, involving more positive affect and greater life satisfaction, and focuses on pleasure and positive emotions (Ryan and Deci, 2000; Ryff, 2018). Conversely, eudaimonic well-being differs from hedonic well-being in that it focuses on the true self and personal growth (Waterman, 1993) and on recognition of one's optimal ability and mastery (Ryff, 2018). Past research has found that hedonic well-being and eudaimonic well-being are correlated with each other but are distinct concepts (Sheldon et al., 2018).
To date, previous research has measured employee psychological well-being with different indicators such as thriving at work (Bakker et al., 2019), life satisfaction (Clark et al., 2019) and social support (Cai et al., 2020) or general physical or psychological health (Grey et al., 2018). Very limited studies have measured psychological well-being with hedonic and eudaimonic well-being, which warrants further exploration (Ballesteros-Leiva et al., 2017). Therefore, this study assesses employee psychological well-being based upon two validated measures, namely, hedonic well-being (people's satisfaction with life in general) and eudaimonic well-being (people's personal accomplishment feelings).
Employee well-being has received some attention in organization studies (Huang et al., 2016). Prior research has argued that happier and healthier employees increase their effort, performance and productivity (Huang et al., 2016). Similarly, research has documented that employee well-being positively influences work-related attitudes and behaviors, increasing OCB (Mousa et al., 2020) and job performance (Magnier-Watanabe et al., 2017) and decreasing employees' work-family conflict (Karapinar et al., 2019) and absenteeism (Schaumberg and Flynn, 2017). Although there is evidence that employee well-being positively influences work-related attitudes, less is known about the relationship between psychological well-being (hedonic and eudaimonic) and employee affective commitment (Pan et al., 2018; Semedo et al., 2019). Moreover, the existing literature uses employee affective commitment either as an antecedent or as an outcome variable of employee well-being (Semedo et al., 2019; Ryff, 2018); however, affective commitment as an outcome of employee well-being has gained less scholarly attention, which warrants further investigation. Therefore, the present study examines employee affective commitment as an outcome variable of employee psychological well-being, because employees who are happy and satisfied in their lives are more likely to be attached to their organizations (Semedo et al., 2019).
To support the above argument, we draw on self-determination theory (SDT), which concerns people's ability to make decisions and control their lives for better psychological health and well-being (Deci and Ryan, 1985; Ryan and Deci, 2000). SDT identifies three types of psychological needs, namely, autonomy, relatedness and competence, which are considered essential for an individual's happiness and satisfaction. Based on SDT, we propose that employees who are satisfied and happy in their lives will be more committed to their organizations. Past research has found a positive linkage between employee commitment and indicators of psychological well-being such as happiness, personal growth, vitality and personal expressiveness (Pan et al., 2018; Sharma et al., 2016). Similarly, Thoresen et al. (2003), in their meta-analysis, found a positive association between organizational commitment and indicators of hedonic and eudaimonic well-being. Thus, we hypothesize the following:

H1a. Hedonic well-being positively predicts employee affective commitment.

H1b. Eudaimonic well-being positively predicts employee affective commitment.
Affective commitment and job performance
The concept of organizational commitment was first introduced through side-bet theory in the early 1960s (Becker, 1960). Organizational commitment is defined as the psychological connection of employees to the organization and their involvement in it (Cooper-Hakim and Viswesvaran, 2005). It has also been defined as an individual's belief in his or her organization's norms (Hackett et al., 2001), an employee's loyalty toward the organization (Cooper-Hakim and Viswesvaran, 2005), and an employee's willingness to participate in organizational duties (Williams and Anderson, 1991).
Organizational commitment is further categorized into three correlated but distinct categories (Meyer et al., 1993), known as affective, normative and continuance. In affective commitment, employees are emotionally attached to their organization. In normative commitment, employees remain committed to their organizations due to the sense of obligation to serve. While in continuance commitment, employees remain committed to their organization because of the costs associated with leaving the organization (Allen and Meyer, 1990, p. 2). Among the dimensions of organizational commitment, affective commitment has been found to have the most substantial influence on organizational outcomes (Meyer and Herscovitch, 2001). It is a better predictor of OCB (Paul et al., 2019), low turnover intention (Kundi et al., 2018) and job performance (Jain and Sullivan, 2019).
According to Jain and Sullivan (2019), employees with greater affective commitment are more likely to perform better in their jobs than those who have a low sense of obligation and devotion toward their organization. Schoemmel and Jønsson (2014) studied Danish employees working in a health care organization and found that employee affective commitment is associated with different individual and organizational outcomes; among these, affective commitment was most strongly related to job performance. Based on the above arguments, we hypothesize the following: H2. Affective commitment positively predicts employee job performance.
Affective commitment as a mediator
Many studies have used the construct of affective commitment as an independent, mediating or moderating variable because of its importance as an effective determinant of work outcomes such as low turnover intention, job satisfaction and job performance (Jain and Sullivan, 2019; Kundi et al., 2018). There is very little published research on the employee well-being and affective commitment relationship, and surprisingly, the effects of employee psychological well-being in terms of hedonic and eudaimonic well-being have not been closely examined.
Employee psychological well-being is considered essential for employee affective commitment and job performance because an employee with greater well-being is more committed to his or her work and organization and tends to be a better performer (Jain and Sullivan, 2019). Staw and Barsade (1993) surveyed around 100 master of business administration students and found that students who were happy and satisfied with their lives had higher grades and better performance. We therefore hypothesize the following: H3a. Affective commitment mediates the association between hedonic well-being and job performance.
H3b. Affective commitment mediates the association between eudaimonic well-being and job performance.
The moderating role of job insecurity
Job insecurity is gaining importance because organizational structures are becoming flatter, the nature of jobs is changing to require more diverse skill sets, and human resource (HR) practices are changing as more temporary workers are hired (Piccoli et al., 2017; Kundi et al., 2018). Such changes have caused several adverse outcomes, such as job dissatisfaction (Bouzari and Karatepe, 2018), unethical pro-organizational behavior (Ghosh, 2017), poor performance (Piccoli et al., 2017), anxiety and lack of commitment (Wang et al., 2018). There is little consensus among researchers on the definition of job insecurity; however, a majority acknowledge that job insecurity is subjective and can be treated as a subjective perception (Wang et al., 2018). Job insecurity has been described as an employee's perception of the threat of losing a job in the near future. When job insecurity exists, employees experience a sense of threat to the continuance and stability of their jobs (Shoss, 2017).
Although job insecurity has been found to influence employee work-related attitudes, less is known about its effects on behavioral outcomes (Piccoli et al., 2017). As maintained by social exchange theory, behaviors are the result of an exchange process (Blau, 1964), and these exchanges can involve either tangible or socio-emotional aspects (Kundi et al., 2018). Employees who perceive that their organization provides them with job security and takes care of their well-being will in turn be more committed to their organization (Kundi et al., 2018; Wang et al., 2018). Much research has found that employees who feel job security are happier and more satisfied with their lives (Shoss, 2017; De Witte et al., 2015) and more committed to their work and organization (Bouzari and Karatepe, 2018; Wang et al., 2018). Shoss (2017), in a thorough study of job insecurity, found that it can cause severe adverse consequences for both employees and organizations.
Many studies have found that job insecurity leads to employee anxiety (Wang et al., 2018), stress (Shoss, 2017), unhappiness, psychological illness and lack of commitment. De Witte and Näswall (2003) studied 4,000 permanent and temporary employees working in companies located in four European countries, namely, Belgium, the Netherlands, Sweden and Italy. Their findings highlighted that employees who were uncertain about their jobs (i.e. who perceived high job insecurity) were less committed to their organizations, and that employees on temporary contracts showed lower organizational commitment than employees on permanent contracts.
Such a difference between temporary and permanent contract holders was mainly due to the job insecurity perceived by temporary contract holders. Accordingly, the present study treats job insecurity as a moderating variable between employee well-being and affective commitment for two reasons. First, past evidence shows that job insecurity affects employees' happiness (hedonic well-being), satisfaction (eudaimonic well-being) and level of commitment. Second, jobs in the telecom sector of Pakistan are largely contractual and temporary, which could heighten employees' perceived job insecurity. Therefore, we hypothesize the following: H4a. Job insecurity will moderate (weaken) the positive relationship between hedonic well-being and affective organizational commitment. H4b. Job insecurity will moderate (weaken) the positive relationship between eudaimonic well-being and affective organizational commitment.
Sample and procedure
The data for this study came from a survey of Pakistani employees working in five private telecommunication organizations (Mobilink, Telenor, Ufone, Zong and Warid). These five companies were targeted because they are the largest and most competitive companies in Pakistan. Moreover, the telecom sector is a private sector in which jobs are temporary or contractual (Kundi et al., 2018). Hence, investigating how employees' perceptions of job insecurity influence their psychological well-being and its outcomes is highly relevant in this context. Studies exploring this phenomenon are needed, particularly in the Pakistani context, to gain better insight and thereby strengthen the employee well-being and job performance literature. Two of the authors had personal and professional contacts through which they gained access to these organizations. The paper-and-pencil method was used to gather the data. Questionnaires were distributed among 570 participants with a cover letter that explained the purpose of the study, noted that participation was voluntary, and assured respondents that their answers would be kept confidential and anonymous. The completed surveys were collected on-site by one of the authors. As self-reported data are prone to common method bias (CMB; Podsakoff et al., 2012), we applied several procedural remedies to limit this bias, such as reducing ambiguity in the questions, ensuring respondent anonymity and confidentiality, separating the predictor and criterion variables, and randomizing the item order.
Of the 570 surveys distributed initially, 280 employees completed the survey form (response rate = 49%). According to Baruch and Holtom (2008), the average response rate for studies at the individual level is 52.6% (SD = 19.7); our response rate of 49% falls within one standard deviation of this average and can therefore be considered acceptable. Of the 280 respondents, 39% were female, their mean age was 35.6 years (SD = 5.22) and the average organizational tenure was 8.61 years (SD = 4.21). The majority of the respondents had at least a bachelor's degree (83%). Respondents represented a variety of departments, including marketing (29%), customer services (26%), finance (20%), IT (13%) and HR (12%).
Measures
The survey was administered to the participants in English, which is the official language of correspondence for professional organizations in Pakistan (De Clercq et al., 2019). All constructs were measured with scales from previous research, anchored on a five-point Likert scale ranging from 1 = strongly disagree to 5 = strongly agree.
Psychological well-being. We measured employee psychological well-being through two subdimensions, namely, hedonic well-being and eudaimonic well-being. Hedonic well-being was measured using five items (Diener et al., 1985). A sample item is "my life conditions are excellent" (α = 0.86). Eudaimonic well-being was measured using 21 items (Waterman et al., 2010), of which seven were reverse-scored because of their negative wording (see the sketch below). Sample items are "I feel that I understand what I was meant to do in my life" and "my life is centered around a set of core beliefs that give meaning to my life" (α = 0.81).
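As an aside, reverse-scoring is easy to get wrong in practice. The following is a minimal sketch of how negatively worded five-point Likert items are typically recoded before a composite score is computed; the column names are hypothetical, not the actual Waterman et al. (2010) item labels.

```python
import pandas as pd

# Hypothetical responses to three eudaimonic well-being items (1-5 scale);
# "ewb_02_rev" stands in for a negatively worded item.
df = pd.DataFrame({
    "ewb_01": [4, 5, 3],
    "ewb_02_rev": [2, 1, 4],
    "ewb_03": [5, 4, 4],
})

REVERSE_ITEMS = ["ewb_02_rev"]
SCALE_MIN, SCALE_MAX = 1, 5

# Reverse-score: x -> (max + min) - x, so 1 <-> 5 and 2 <-> 4.
df[REVERSE_ITEMS] = (SCALE_MAX + SCALE_MIN) - df[REVERSE_ITEMS]

# Composite scale score = mean of all items after recoding.
df["eudaimonic_wb"] = df[["ewb_01", "ewb_02_rev", "ewb_03"]].mean(axis=1)
print(df)
```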
Affective commitment. Affective commitment was measured using a six-item inventory developed by Allen and Meyer (1990). Sample items are "my organization inspires me to put forth my best effort" and "I think that I will be able to continue working here" (α = 0.91).
Job insecurity. Job insecurity was measured using a five-item inventory developed by Chirumbolo et al. (2015). A sample item is "I fear I will lose my job" (α = 0.87).
Job performance. We measured employee job performance with the seven-item inventory developed by Williams and Anderson (1991). Sample items are "I do fulfill my responsibilities, which are mentioned in the job description" and "I try to work as hard as possible" (α = 0.87).
Controls. We controlled for respondents' age (assessed in years), gender (1 = male, 2 = female) and organizational tenure (assessed in years) because prior research (Alessandri et al., 2019;Edgar et al., 2020) has found significant effects of these variables on employees' job performance. Table 1 presents the means, standard deviations and correlations among study variables.
Construct validity
Before testing hypotheses, we conducted a series of confirmatory factor analyses (CFAs) using AMOS 22.0 to examine the distinctiveness of our study variables. Following the guidelines of Hu and Bentler (1999), model fit was assessed with the following fit indices: the comparative fit index (CFI), the root mean square error of approximation (RMSEA) and the standardized root mean square residual (SRMR). We used a parceling technique (Little et al., 2002) to ensure an adequate item-to-sample-size ratio. According to Williams and O'Boyle (2008), the item-parceling approach is widely used in HRM research; it allows estimation of fewer model parameters, which leads to an optimal variable-to-sample-size ratio and stable parameter estimates (Wang and Wang, 2019). Based on preliminary CFAs, we combined the highest-loading item with the lowest-loading item to create parcels that were equally balanced in terms of their difficulty and discrimination. Item parceling was done only for the eudaimonic well-being construct, as it entailed a large number of items (21); accordingly, we made five parcels for this construct (Waterman et al., 2010). As shown in Table 2, the CFA results revealed that the baseline five-factor model (hedonic well-being, eudaimonic well-being, job insecurity, affective commitment and job performance) fit the data well (χ² = 377.11, df = 199, CFI = 0.971, RMSEA = 0.034 and SRMR = 0.044) and fit better than the alternative models, including a four-factor model in which hedonic well-being and eudaimonic well-being were combined into one construct (Δχ² = 203.056, Δdf = 6), a three-factor model in which hedonic well-being, eudaimonic well-being and affective commitment loaded on one construct (Δχ² = 308.99, Δdf = 8) and a one-factor model in which all items loaded on one construct (Δχ² = 560.77, Δdf = 11). The results therefore supported the distinctiveness of our study variables.
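The Δχ² comparisons above are chi-square difference (likelihood-ratio) tests between nested models. A minimal sketch of such a test is shown below; it reuses the fit statistics reported in the text (AMOS performs the equivalent computation internally), and scipy is assumed to be available.

```python
from scipy.stats import chi2

def chi_square_difference(chi2_restricted, df_restricted, chi2_full, df_full):
    """Chi-square difference test for nested CFA models;
    the model with fewer factors is the more restricted one."""
    delta_chi2 = chi2_restricted - chi2_full
    delta_df = df_restricted - df_full
    p_value = chi2.sf(delta_chi2, delta_df)  # upper-tail probability
    return delta_chi2, delta_df, p_value

# Five-factor baseline vs. four-factor alternative, using the values in the text:
# the four-factor model's chi2 and df are the baseline's plus the reported deltas.
d_chi2, d_df, p = chi_square_difference(
    chi2_restricted=377.11 + 203.056, df_restricted=199 + 6,
    chi2_full=377.11, df_full=199)
print(f"Delta chi2 = {d_chi2:.2f}, Delta df = {d_df}, p = {p:.4g}")
```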
To ensure the validity of our measures, we first examined convergent validity through the average variance extracted (AVE). All AVE scores were higher than the threshold value of 0.5 (Table 1; Fornell and Larcker, 1981), supporting the convergent validity of our constructs. We also assessed discriminant validity by comparing the AVE of each construct with its average shared variance (ASV), i.e. the mean of its squared correlations with the other constructs (Hair et al., 2010). As expected, all AVE values were higher than the corresponding ASV values, thereby supporting discriminant validity (Table 1).
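For concreteness, AVE and ASV reduce to simple averages of squared quantities. The sketch below uses hypothetical standardized loadings and inter-construct correlations, since the full measurement output is not reproduced in this excerpt.

```python
import numpy as np

def average_variance_extracted(std_loadings):
    """AVE = mean of squared standardized loadings (Fornell & Larcker, 1981)."""
    return float(np.mean(np.asarray(std_loadings) ** 2))

def average_shared_variance(corrs_with_other_constructs):
    """ASV = mean of squared correlations with the other constructs (Hair et al., 2010)."""
    return float(np.mean(np.asarray(corrs_with_other_constructs) ** 2))

# Hypothetical values for one five-item construct.
ave = average_variance_extracted([0.78, 0.81, 0.74, 0.80, 0.76])
asv = average_shared_variance([0.34, 0.28, 0.41, 0.22])
print(f"AVE = {ave:.3f}, ASV = {asv:.3f}, AVE > ASV: {ave > asv}")
```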
Common method variance
We examined the presence of common method variance (CMV) using (1) Harman's one-factor test and (2) CFA (Podsakoff et al., 2012).
Harman's one-factor test showed five factors with eigenvalues greater than 1.0, which together accounted for 69.12% of the variance in the exogenous and endogenous variables, with no single factor accounting for the majority of the variance. The CFA results showed that the single-factor model did not fit the data well (χ² = 937.88, df = 210, CFI = 0.642, RMSEA = 0.136, SRMR = 0.122). These tests indicated that CMV was not a major issue in this study.
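As a rough sketch, Harman's one-factor test amounts to an unrotated factor extraction over all items, checking whether a single factor absorbs the majority of the variance. The example below runs PCA on simulated stand-in data (the real test would use the actual survey responses); scikit-learn is assumed.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Stand-in item matrix (280 respondents x 22 standardized items).
items = rng.normal(size=(280, 22))

pca = PCA().fit(items)
n_factors_gt1 = int(np.sum(pca.explained_variance_ > 1.0))
first_factor_share = pca.explained_variance_ratio_[0]

print(f"Factors with eigenvalue > 1: {n_factors_gt1}")
print(f"Variance explained by the first factor: {first_factor_share:.1%}")
# CMV is suspected only if one factor explains the majority (>50%) of variance.
```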
Table 2. Results of confirmatory factor analyses of study measures
Hypothesis testing
H1a and H1b proposed that hedonic well-being and eudaimonic well-being positively relate to employee affective commitment. As shown in Figure 2, hedonic well-being (β = 0.26, p < 0.01) and eudaimonic well-being (β = 0.32, p < 0.01) were positively related to employee affective commitment. Taken together, these two findings support H1a and H1b. In H2, we predicted that employee affective commitment would be positively associated with employee job performance. As seen in Figure 2, affective commitment positively predicted job performance (β = 0.41, p < 0.01), supporting H2. H3a and H3b proposed that affective commitment mediates the relationship between hedonic and eudaimonic well-being and job performance. The results indicate that hedonic well-being is positively related to job performance via affective commitment (β = 0.11, 95% CI [0.09, 0.23]) and, similarly, that eudaimonic well-being is positively related to job performance via affective commitment (β = 0.15, 95% CI [0.12, 0.35]), supporting H3a and H3b.
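Indirect-effect confidence intervals of this kind are typically obtained by bootstrapping (e.g. in AMOS). A minimal percentile-bootstrap sketch of the same idea, run on simulated stand-in data rather than the study data, is shown below.

```python
import numpy as np

def bootstrap_indirect_effect(x, m, y, n_boot=5000, seed=42):
    """Percentile bootstrap CI for the indirect effect a*b in a simple
    mediation model x -> m -> y, refitting OLS paths at each resample."""
    rng = np.random.default_rng(seed)
    n = len(x)
    est = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)
        xs, ms, ys = x[idx], m[idx], y[idx]
        a = np.polyfit(xs, ms, 1)[0]                       # x -> m path
        design = np.column_stack([np.ones(n), ms, xs])     # y regressed on m and x
        b = np.linalg.lstsq(design, ys, rcond=None)[0][1]  # m -> y path
        est[i] = a * b
    return np.percentile(est, [2.5, 97.5])

rng = np.random.default_rng(1)
x = rng.normal(size=280)                       # e.g. hedonic well-being
m = 0.4 * x + rng.normal(size=280)             # affective commitment
y = 0.5 * m + 0.1 * x + rng.normal(size=280)   # job performance
print("95% bootstrap CI for a*b:", bootstrap_indirect_effect(x, m, y))
```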
Finally, H4a and H4b predicted that job insecurity would negatively moderate the positive relationships of (a) hedonic well-being and (b) eudaimonic well-being with employee affective commitment.
In support of H4a, our results (Table 3) revealed a negative and significant interaction effect between hedonic well-being and job insecurity on employee affective commitment (β = −0.12, p < 0.05); the positive relationship between hedonic well-being and affective commitment was weaker under high versus low job insecurity (Figure 3). Likewise, the interaction effect between eudaimonic well-being and job insecurity on affective commitment was negative and significant (β = −0.28, p < 0.01); the positive relationship between eudaimonic well-being and affective commitment was weaker under high versus low job insecurity (Figure 4). Both interaction patterns were consistent with our hypothesized direction; thus, H4a and H4b were supported.
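The simple-slopes pattern in Figures 3 and 4 can be probed by evaluating the focal slope at ±1 SD of the moderator. The sketch below fits an interaction model with statsmodels on simulated stand-in data; the coefficients used to generate the data merely echo those reported above, so the output is illustrative only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 280
df = pd.DataFrame({
    "hwb": rng.normal(size=n),  # hedonic well-being (mean-centred)
    "ji": rng.normal(size=n),   # job insecurity (mean-centred)
})
# Simulate the hypothesized buffering pattern on affective commitment.
df["ac"] = 0.26 * df["hwb"] - 0.12 * df["hwb"] * df["ji"] + rng.normal(size=n)

model = smf.ols("ac ~ hwb * ji", data=df).fit()
print(model.params[["hwb", "ji", "hwb:ji"]])

# Simple slopes of hwb at low (-1 SD) and high (+1 SD) job insecurity.
for level in (-1, 1):
    slope = model.params["hwb"] + model.params["hwb:ji"] * level
    print(f"slope of hwb on ac at ji = {level:+d} SD: {slope:.3f}")
```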
Discussion
The present research examined the direct and indirect crossover from psychological well-being (hedonic and eudaimonic) to job performance through employee affective commitment, as well as the moderating role of job insecurity in the relationship between psychological well-being and affective commitment. The results revealed that both hedonic and eudaimonic well-being have direct and indirect effects on employee job performance. Employee affective commitment was found to be a potential mediating mechanism (explaining partial variance) in the relationship between psychological well-being and job performance. Findings regarding the buffering role of job insecurity revealed that job insecurity weakens the positive relationship between psychological well-being and affective commitment, such that the higher the job insecurity, the lower the employee affective commitment. The findings generally highlight and reinforce that perceived job insecurity can be detrimental to both employees' well-being and their job-related behaviors (Soomro et al., 2020).
Theoretical implications
The present study offers several contributions to the employee well-being and job performance literature. First, it extends the employee well-being literature by investigating employee affective commitment as a key mechanism through which psychological well-being (hedonic and eudaimonic) influences employees' job performance. In line with SDT, we found that both hedonic and eudaimonic well-being enhanced employees' affective commitment, which, in turn, led them to perform better in their jobs. Our study addresses recent calls for research to better understand how psychological well-being influences employees' performance at work (Huang et al., 2016), and adds to a growing body of work confirming the importance of psychological well-being in promoting work-related attitudes and behaviors (Devonish, 2016; Hewett et al., 2018; Ismail et al., 2019). Further, we have extended the literature on affective commitment by highlighting that psychological well-being is an important antecedent of employees' affective commitment, thereby confirming previous research by Aboramadan et al. (2020) on the links between affective commitment and job performance.
Second, our results provide empirical support for the value of examining the different dimensions of employee well-being, i.e. hedonic well-being and eudaimonic well-being, as opposed to an overall index of well-being at work. Specifically, our results revealed that both hedonic and eudaimonic well-being boost employees' attachment to their organization as well as their job performance (Hewett et al., 2018; Luu, 2019). Among the indicators of psychological well-being, eudaimonic well-being (i.e. the realization and fulfillment of one's true nature) had more influence on affective commitment and job performance than hedonic well-being (i.e. a state of happiness and sense of flourishing in life). Therefore, employees who experience high levels of psychological well-being are likely to be more attached to their employer, which, in turn, boosts their job performance.
Third, job insecurity is considered an important work-related stressor (Schumacher et al., 2016). However, its moderating role in the relationship between psychological well-being and affective commitment had not been considered in previous research. Based on social exchange theory (Blau, 1964), we expected job insecurity to buffer the positive relationship between psychological well-being and affective commitment. The results showed that high levels of perceived job insecurity weaken the positive relationship of psychological well-being (hedonic and eudaimonic) with affective commitment. This finding is consistent with previous empirical evidence on the adverse role of perceived job insecurity in reducing employees' sense of belonging to their organization (Jiang and Lavaysse, 2018). There is strong empirical evidence (Qian et al., 2019; Schumacher et al., 2016) that employee attitudes and health are negatively affected by increasing levels of job insecurity. Schumacher et al. (2016), elaborating on social exchange theory, suggested that constant worrying about the possibility of losing one's job promotes psychological stress and feelings of unfairness, which, in turn, affect employees' affective commitment. Hence, employees' psychological well-being and affective commitment are heavily influenced by the experience of high job insecurity.
Practical implications
Our study has several practical implications. First and foremost, it helps managers understand the importance of employees' psychological well-being for work-related attitudes and behavior. Based on our findings, managers need to recognize how important psychological well-being is for employees' organizational commitment and job performance. According to Hosie and Sevastos (2009), several human resource-based interventions can foster employees' psychological well-being, such as selecting and placing employees into appropriate positions, ensuring a friendly work environment and providing training that improves employees' mental health and helps them manage their perceptions positively.
Besides, managers should provide their employees with opportunities to use their full potential, which will increase employees' sense of autonomy and overall well-being (Sharma et al., 2017). By promoting employee well-being in the workplace, managers can help develop a workforce that is committed to the organization and performs better. However, our findings also suggest that, in the presence of job insecurity, organizational spending on interventions to improve employees' psychological well-being, commitment and job performance might be in vain. In other words, organizations should ensure that employees feel a sense of job security, or else the returns on such interventions could be nullified.
Finally, as organizations operate in a volatile and highly competitive environment, it is and will remain difficult for them to provide high levels of job security to their employees, especially in developing countries such as Pakistan (Soomro et al., 2020). Given that job insecurity has adverse effects on employees' psychological well-being and affective commitment, managers must be attentive to subordinates' perceptions of job insecurity and impaired well-being and take action to prevent harmful consequences (Ma et al., 2019). Organizations should try to avoid downsizing, layoffs and other types of structural change, and find ways to boost employees' perceptions of job security despite such changes. If this is not possible, i.e. the organization is unable to provide job security, this should be communicated to employees honestly and early.
Limitations and future studies
There are several limitations to this study. First, we measured our research variables using a self-report survey at a single point in time, which may result in CMB. We used various procedural remedies to mitigate the potential for CMB and conducted CFA following the guidelines of Podsakoff et al. (2012) to ensure that CMV was unlikely to be an issue in our study. Nevertheless, future research may rely on supervisor-rated job performance or collect data at different time points to avoid the threat of such bias.
Second, the sample of this study consisted of employees working in cellular companies in Pakistan with different demographic characteristics and occupational backgrounds; thus, the generalizability of our findings to other industries or sectors remains to be established. Future research should test our research model in various industries and cultures.
A final limitation pertains to the selection of the moderating variable. As this study was conducted in Pakistan, contextual factors such as the perceived threat of terrorism, the law and order situation or perceived organizational injustice might also influence the psychological well-being of employees working in Pakistan (Jahanzeb et al., 2020; Sarwar et al., 2020). Future studies could consider the moderating role of such external factors in the relationship between employee psychological well-being, affective commitment and job performance.
Conclusion
This study proposed a framework for understanding the relationships between employee psychological well-being, affective commitment and job performance, and described how psychological well-being influences job performance. Additionally, it examined the moderating role of perceived job insecurity in the relationship between psychological well-being and affective commitment. The results revealed that employee psychological well-being (hedonic and eudaimonic) has beneficial effects on affective commitment, which, in turn, enhances job performance. Moreover, the results indicated that perceived job insecurity weakens this link, particularly when employees perceive high levels of job insecurity.
|
v3-fos-license
|
2019-04-27T13:10:19.870Z
|
2018-07-31T00:00:00.000
|
54176937
|
{
"extfieldsofstudy": [
"Environmental Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://article.sciencepublishinggroup.com/pdf/10.11648.j.aff.20180703.11.pdf",
"pdf_hash": "12392e4435feeee2300bb319003e2839f4fb6a46",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41946",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "7cffff96196646eb36f6a2416e26a4e842e9d8fa",
"year": 2018
}
|
pes2o/s2orc
|
Soil Carbon Sequestration Differentials among Key Forest Plantation Species in Kenya: Promising Opportunities for Sustainable Development Mechanism
Soil organic carbon (SOC) contributes to the productivity of forests and enhances the carbon sink in forest ecosystems. However, the available data on forest-based carbon projects among African countries that have ratified the Kyoto Protocol and are party to the United Nations Framework Convention on Climate Change (UNFCCC) show little emphasis on SOC, deadwood and litter. Kenya, for example, has piloted five afforestation and reforestation Clean Development Mechanism (AR-CDM) activities in government forests, none of which addresses SOC, and yet studies elsewhere have shown that forest soils hold about 73% of global soil carbon storage. This study therefore sought to determine soil carbon sequestration differentials among selected key forest plantations in Kenya and their future implications for the sustainable development mechanism. Soils were sampled at 0-20, 20-50 and 50-80 cm depth from Pinus patula, Cupressus lusitanica, Juniperus procera and Eucalyptus saligna/grandis plantations in Central Kenya for analysis of carbon, soil pH, nitrogen, phosphorous and potassium. The litterfall collected from each of these forest plantations was analysed for nitrogen and carbon. The Pinus patula plantations had a significantly (p<0.01) higher amount of soil carbon (132.2 ± 12.55 MgC ha-1) than the Cupressus lusitanica (114.4 ± 12.55 MgC ha-1) and Eucalyptus saligna (85.0 ± 12.55 MgC ha-1) plantations. Specifically, the Pinus patula plantations had sequestered almost twice as much carbon in the soil as in their above- and below-ground carbon pools, whereas the corresponding ratios for Cupressus lusitanica and Eucalyptus saligna/grandis were about 1.2 and 1.3, respectively. The levels of acidity varied among species, between and within sites, from very strongly acidic to very slightly acidic. The amounts of soil nitrogen, phosphorous and potassium differed significantly between sites, tree species and soil depths. This study therefore reveals soil carbon potentials in forest plantations that need to be considered in the development and implementation of afforestation and reforestation activities under the Clean/Sustainable Development Mechanism (SDM). Equally, differences in the soil carbon sequestered among species need to be taken into account when evaluating carbon stocks under certified and voluntary carbon offset markets, in order to promote trees with high carbon sequestration potential for sustainable development. This is important because the introduction of Reducing Emissions from Deforestation and forest Degradation (REDD+) and forest-based Clean Development Mechanisms (CDM) has provided impetus to African governments in implementing afforestation and reforestation (AR) programmes to enhance carbon stocks and improve the resilience of biophysical and social systems against the impacts of climate change.
Introduction
Soil organic carbon (SOC) is among the five carbon pools reported under the United Nations Framework Convention on Climate Change (UNFCCC). However, estimation of these carbon pools, namely above-ground biomass, below-ground biomass, litter, deadwood and soil organic carbon, has largely concentrated on above- and below-ground biomass in different forest types. This has resulted in little attention to other carbon pools, such as soil organic carbon, that have significant potential for climate change mitigation and for improving the resilience and productivity of forest ecosystems. For example, studies on the estimation of SOC have shown that forest soils hold about 73 per cent of global soil carbon storage, making them the largest active terrestrial reservoir and sink for atmospheric carbon. Specifically, estimates of SOC in temperate forest ecosystems have shown that the carbon pool in forest soil is almost twice as large as the pool in the forest vegetation [1][2]. This demonstrates the significance of SOC in the forest ecosystem, which is instrumental not only as a carbon sink but also in determining the productivity of forest and tree resources. Forests with high levels of SOC directly support above- and below-ground biomass. Equally, soils exposed by land degradation, deforestation and other poor land management practices emit carbon back to the atmosphere, contributing to global warming. For example, studies conducted by [3] revealed that litter decomposition, as influenced by climatic factors such as temperature and moisture as well as by non-climatic factors, releases enormous amounts of carbon to the atmosphere. This underscores the need to invest both financial and human resources in estimating and reporting SOC from different forest types globally. It also signals the need to monitor shifts in land use in the context of Agriculture, Forestry and Other Land Use (AFOLU) as well as Land Use, Land-Use Change and Forestry (LULUCF).
Global shifts in AFOLU and LULUCF provide valuable information on SOC lost and gained under different land-use patterns, which is essential when reporting on agriculture- and forest-based Nationally Determined Contributions (NDCs). It is estimated that about 20% of global anthropogenic carbon dioxide is associated with land-use changes, whether from forestry to agriculture, deforestation, human settlement, infrastructure or the food-fibre-fuel nexus, among other determinants including extractive industries. In this regard, countries have embarked on national inventories to estimate their forest and tree resources in order to determine forest coverage and to approximate the value of forests in providing goods and services at national, regional and international levels in the face of climate change. Global statistics show that Africa's total forest cover is about 624 million ha, of which 16 million ha are forest plantations established for the production of industrial roundwood, afforestation of degraded land and protection of the environment, among other purposes [4,5]. These forest statistics show that African forests have significant carbon sink potential and thus play a vital role in the overall carbon cycle [6].
However, Africa has not invested much in understanding carbon dynamics in the forest sector, especially with respect to SOC among the total carbon pools. This is an important gap because a number of African countries are implementing afforestation and reforestation programmes to raise their forest cover, which is below the FAO recommendation of at least 10%. Efforts to improve forest and tree cover have attracted various incentive programmes at national, regional and international levels. The Kyoto Protocol (KP) under the UNFCCC, for example, which comes to an end in 2020, embraced afforestation and reforestation (AR) under the Clean Development Mechanism (CDM); the CDM is expected to be replaced by the Sustainable Development Mechanism (SDM) by 2021. Under the CDM, there exist compliance carbon offset markets for trading certified emission reductions (CERs) from AR programmes, among other sector-based activities. The available data show that African countries that ratified the Kyoto Protocol have made very little progress in developing forest-based carbon projects to tap these global opportunities for investment in the forest sector. The countries that have embraced AR-CDM have dealt mainly with above- and below-ground carbon pools, with little emphasis on SOC, deadwood and litter. This demands stepwise approaches to estimating SOC in different forest types in order to optimize carbon offset returns and capture the total value of forests in the regulation of climate change. The introduction of Reducing Emissions from Deforestation and forest Degradation (REDD+) and voluntary carbon offset markets has also provided impetus to African governments in implementing AR programmes.
Kenya, known to be a low forest and tree cover country, is currently striving to achieve 10% forest cover by 2030 through different strategic interventions. The country has piloted five AR-CDM activities in government forests, namely the Mau Forest Complex, Mt Kenya and the Aberdare Range. It has also piloted seven forestry-sector voluntary projects, including Rukinga REDD+ phases I and II, the first project in the world to have issued verified carbon unit certificates. These efforts are expected to enhance carbon stocks and improve forest cover. In spite of this notable progress, which has profiled Kenya as the most successful African country in tapping the forestry segment of the global certified and voluntary carbon offset markets, none of the AR-CDM activities addresses soil carbon estimation. Various reasons have been advanced in the literature for neglecting soil organic carbon, based on soil type and other considerations. This continued lack of interest in SOC risks overlooking its potential in the different forest types and tree species currently used in AR activities. This study therefore sought to determine soil carbon sequestration differentials among selected key species in forest plantations in Kenya and their future implications for the sustainable development mechanism.
Description of Study Sites
This study was carried out in Kiambu and Nyeri Counties in the Central Highland Conservancy. Kiambu County covers an area of 1,323.9 km² and lies between latitudes 0°75′ and 1°20′ south of the Equator and longitudes 36°54′ and 36°85′ east. Its agro-ecological zones (AEZs) extend in a typical pattern along the eastern slopes of the Nyandarua (Aberdare) Range. It has great potential for tea growing in Githunguri and Limuru, as well as for coffee, dairy farming and pyrethrum, among others. It is the most densely populated area, with a density of 562 persons per km² compared with 280 persons per km² in 1979, and a population growth rate of 8.4% (Kenya National Bureau of Statistics) [7][8][9]. Nyeri County forms part of Kenya's eastern highlands. It is the more expansive of the two counties, covering an area of 3,266 km², and is situated between longitudes 36° and 38° east and between the Equator and latitude 0°38′ south. The population densities of Nyeri North and Nyeri South were 142 and 351 persons per km², respectively. The average annual rainfall ranges from 2,200 mm on the most easterly exposed edge of the Aberdare Range to 700 mm on the Laikipia Plateau. The economic livelihood of people in this county depends on agriculture, as over 67% of the total area is arable land, with the main agro-ecological zones being UM 2 (main coffee zone), LH 4 (cattle-sheep-barley zone) and LH 5 (ranching zone).
Study Design
A list of all forest stations managed by the Kenya Forest Service in Kiambu, Kirinyaga, Murang'a, Nyandarua and Nyeri Counties in the Central Highland Conservancy was obtained. Kiambu and Nyeri Counties were randomly selected from the five counties. The forest stations in each of these counties were stratified and clustered on the basis of their AEZ and the composition of plantation species, resulting in four clusters in Kiambu County and three in Nyeri County. The first cluster in Kiambu County comprised Thogoto and Muguga; the second comprised Uplands, Kerita and Kinale; the third comprised Ragia, Kamae and Kieni; and the fourth comprised Kimakia. The first cluster in Nyeri County comprised Kabage, Kiandongoro and Zaina; the second comprised Chehe, Hombe, Gathiuru and Kabaru; and the third comprised Naromoru and Nanyuki.
The first and second clusters of forest stations in Kiambu County were randomly selected, resulting in the sampling of Muguga, Uplands and Kinale forest stations. Stratification and simple random sampling were applied to the three clusters in Nyeri County (Aberdare Range and the windward and leeward sides of Mt. Kenya), resulting in the random selection of Kabage, Kabaru and Naromoru forest stations; a simple sketch of this two-stage selection follows.
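For illustration, the two-stage selection (clusters first, then stations within clusters) can be expressed as follows. The random draws here only sketch the procedure and do not reproduce the actual selection.

```python
import random

random.seed(7)  # illustrative seed; the actual selection was independent of this

kiambu_clusters = [
    ["Thogoto", "Muguga"],
    ["Uplands", "Kerita", "Kinale"],
    ["Ragia", "Kamae", "Kieni"],
    ["Kimakia"],
]
nyeri_clusters = [
    ["Kabage", "Kiandongoro", "Zaina"],
    ["Chehe", "Hombe", "Gathiuru", "Kabaru"],
    ["Naromoru", "Nanyuki"],
]

# Stage 1: randomly choose clusters; stage 2: randomly choose a station within each.
chosen_kiambu_clusters = random.sample(kiambu_clusters, 2)
kiambu_stations = [random.choice(c) for c in chosen_kiambu_clusters]
nyeri_stations = [random.choice(c) for c in nyeri_clusters]  # one per stratum

print("Kiambu stations:", kiambu_stations)
print("Nyeri stations:", nyeri_stations)
```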
Soil Sampling from Selected Forest Plantations
Soil was sampled from six subplots of 4 m by 5 m established at the four edges and the middle of the main 20 m by 50 m plot (Figure 1) for all the selected tree species and age categories at the different study sites. In each of the six subplots, a central point was chosen where soil samples were collected at 0-20, 20-50 and 50-80 cm depth using a soil auger. Any surface vegetation material was removed before augering. The soil samples collected from the six subplots at the same depth were thoroughly mixed, and a composite sample of about one kilogram was packed into polythene bags for laboratory analysis of carbon, soil pH, nitrogen, phosphorous and potassium. Litterfall was collected from the same area as the soil sampling subplots, thoroughly mixed, and about 300-500 g was packed into polythene bags for analysis of N and C.
Analysis of Soil Samples, Litter Fall and Above-Belowground Biomass
Soil samples were analysed for carbon (C), nitrogen (N), phosphorous (P), potassium (K) and soil pH, and litterfall was analysed for C and N; all analytical methods followed the procedures described by [10]. Statistical comparisons were made for C, N, P, K and pH across soil depths and species using ANOVA and analysis of covariance, with pairwise comparisons based on orthogonal contrasts. Total soil carbon per hectare was estimated from the soil bulk density and the percentage of carbon analysed in the soil samples as

Total soil carbon (kg C ha-1) = bulk density (kg m-3) × soil depth (m) × %C × 100    (1)

where the soil bulk density was determined using the procedures outlined by [10]. Mean comparisons of carbon sequestered were made using the least significant difference (LSD), obtained as twice the standard error of the difference (s.e.d.) based on a linear mixed model approach.
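A minimal sketch of equation (1) applied layer by layer is shown below; the bulk densities and %C values are purely illustrative assumptions, not measurements from this study.

```python
def soc_stock_kg_per_ha(bulk_density_kg_m3, depth_m, carbon_percent):
    """Equation (1): soil carbon stock of one sampled layer, in kg C ha-1.
    Bulk density (kg m-3) x depth (m) gives kg of soil per m2; multiplying
    by %C/100 gives kg C per m2, and by 10,000 m2 ha-1 gives kg C ha-1 --
    hence the single factor of 100 applied to %C."""
    return bulk_density_kg_m3 * depth_m * carbon_percent * 100

# Illustrative layers matching the sampled depths (0-20, 20-50, 50-80 cm):
# (bulk density kg m-3, thickness m, %C) -- assumed values.
layers = [(1100, 0.20, 3.0), (1200, 0.30, 1.8), (1300, 0.30, 1.1)]
total_kg = sum(soc_stock_kg_per_ha(bd, d, c) for bd, d, c in layers)
print(f"Profile SOC stock (0-80 cm): {total_kg / 1000:.1f} Mg C ha-1")
```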
Estimation of Soil Carbon Sequestration Among Selected Species in Different Sites
There were significant differences (F(4,271) = 8.08; p<0.01) in the amount of soil carbon sequestered by the commonly grown plantation species, adjusted for age, at Kiambu, Nyeri South and Nyeri North. Pinus patula had the highest amount of soil carbon (191.1 ± 12.55 MgC ha-1), followed by Cupressus lusitanica (169.3 ± 12.55 MgC ha-1), at Lari sub-county, Kiambu. Eucalyptus saligna had the least soil carbon at Kiambu and Nyeri South, but not at Nyeri North (Table 1). Moreover, mean comparisons among sites showed significant differences (p<0.05) in the quantity of carbon sequestered by Cupressus lusitanica and Pinus patula in Kiambu, Nyeri North and Nyeri South. Similarly, there were significant differences (p<0.05) in the amount of soil carbon sequestered by Eucalyptus saligna among the sites, except between Nyeri North and Nyeri South. Eucalyptus saligna generally had a lower amount of soil carbon than Pinus patula and Cupressus lusitanica across sites, except at Nyeri North. Subsequently, mean comparisons of soil carbon sequestered among species within each site differed significantly (p<0.05) in Kiambu and Nyeri South.
Estimation of % C and Selected Soil Elements Across Depths Among Different Species
There were significant differences (F(8, 271) = 3.91; p<0.01) in the amount of soil carbon across study sites and soil depths among tree species (Table 2). There were also significant differences (F(2, 224) = 79.22; p<0.01) in soil pH among the sites (Table 3). Kiambu soils were slightly acidic (6.11) compared with Nyeri North (5.14) and Nyeri South (5.15), which were strongly acidic, with a standard error of difference of 0.093. Overall, the soil pH in all the study areas was mainly acidic, and the levels of acidity varied among species, between and within sites, from very strongly acidic to very slightly acidic. The soil under Cupressus lusitanica plantations exhibited almost the same pH at Kiambu, Nyeri North and Nyeri South, as did the soil under Eucalyptus saligna plantations. However, the soil under Pinus patula plantations, a species known to grow well in acidic conditions, was less acidic in Kiambu than in Nyeri North and Nyeri South. The interaction effects between sites and species were significant (F(4, 271) = 12.62; p<0.01) with respect to C, and there was also a significant interaction effect on C (F(8, 271) = 2.08; p=0.037) between sites, tree species and soil depths. This was, however, different for Juniperus procera, whose only interaction effect was between age and depth, and it was non-significant (F(4, 48) = 0.23; p=0.918).
Comparisons of the Amount of Carbon Dioxide Equivalent Above and Below Ground (AGB) and Soil Among Forest Species Across Ages and Sites
The amount of carbon dioxide equivalent (CO2e) removed from the atmosphere by the selected forest plantation species below ground, above ground and in the soil varied significantly (F(4,86) = 6.03; p<0.01) across ages and sites (Table 4). Age as a covariate was highly significant (F(1,86) = 17.55; p<0.01) for the amount of CO2e among tree species. Similarly, the amount of CO2e differed significantly (F(2,86) = 14.73; p<0.01) among the sites, with Kiambu recording the highest amount, followed by Nyeri South and Nyeri North. There were also significant interaction effects between age and site for CO2e among species above ground, below ground and in the soil. Carbon stocks can be expressed in CO2e using the standard mass ratio, as sketched below.
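A small sketch of the C-to-CO2e conversion, using the standard 44/12 molecular-to-atomic mass ratio; the input value is the overall Pinus patula soil carbon mean from the abstract, used here only for illustration.

```python
CO2_TO_C_RATIO = 44.0 / 12.0  # molecular mass of CO2 over atomic mass of C

def carbon_to_co2e(carbon_mg_per_ha):
    """CO2e (Mg ha-1) corresponding to a given carbon stock (Mg C ha-1)."""
    return carbon_mg_per_ha * CO2_TO_C_RATIO

print(f"{carbon_to_co2e(132.2):.1f} Mg CO2e ha-1")  # ~484.7 for 132.2 Mg C ha-1
```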
Discussion
The differences in soil carbon sequestered among species could be associated with various factors, such as litter production, the amount of leaf-litter fall, the rate of leaf-litter decomposition, the amount of lignin in the leaf litter, plantation age, climatic conditions, plantation management, fire incidence and soil type, among others. Specifically, plant litter decomposition is valuable for the formation of soil organic matter, the mineralization of organic nutrients and the carbon balance in terrestrial ecosystems, where it has been established that 53% of the total carbon is stored in the organic layer of the soil [14][15][16]. The production of leaf litter and its decomposition are species dependent. For example, a study conducted by [17] showed that litter production under broadleaved plantation species and natural forest was significantly higher than under coniferous species. Similarly, variation in litter decay rates among tree species may be explained by the amount of lignin in the leaves of the selected species as well as the amount of moisture in the soil. Research has shown that biotic decomposition in mesic ecosystems is usually negatively correlated with the concentration of lignin, a group of complex aromatic polymers present in plant cell walls that is recalcitrant to enzymatic degradation and serves as a structural barrier impeding microbial access to labile carbon compounds [16]. Overall, lignin plays a very important role in the carbon cycle by sequestering atmospheric carbon into the living tissues of woody vegetation. Lignin content varies between the leaves of coniferous and broadleaved trees: a study conducted by [18] established that the lignin content in the litter of Pinus sylvestris and Pinus contorta Dougl. was 29.3% and 37%, respectively, compared with 21.1% for a broadleaved tree such as Eucalyptus grandis.
The higher amount of soil carbon sequestered by Pinus patula in this study compared with the other species may be associated with the high carbon content of its leaf-litter fall, catalysed by decomposition rates influenced by environmental factors such as rainfall and temperature. This was reflected in the differing carbon contents of the leaf litter of Pinus patula (46%), Cupressus lusitanica (41%) and Eucalyptus saligna (43%). Sites receiving low rainfall generally had less soil carbon, as in the dry part of Nyeri North, which had the least soil carbon of all sites. Low soil moisture essentially hinders the microbial activities involved in carbon fixation and the mineralization of soil nutrients.
Other studies conducted by [19] have shown differences in the amount of soil carbon sequestered by plantation species such as Cupressus lusitanica, Eucalyptus grandis and Pinus patula. In their study, Cupressus lusitanica had the highest amount of soil C, followed by Pinus patula, with the least under Eucalyptus grandis, owing to litter quality inputs and decomposition rates. A study by [20] also showed that Scots pine had a higher amount of C in decomposing wood and bark than spruce and birch in Finland. A series of simulation studies with the CENTURY and YASSO models has advanced the same understanding, indicating that the accumulation of soil organic pools is driven by changes in litter inputs, decomposition rates, management regimes, root activity and stand growth rates, among other factors. Specifically, plantation management practices such as pruning, thinning, liming, drainage, clear felling and timber harvesting could also cause variation in the amount of C, as the residues left behind decompose differently depending on site characteristics and on the litter quality of the material [14,[21][22][23][24].
Percentage C decreased significantly with increasing soil depth, implying that more C is concentrated in the top layer of the soil, where organic matter is high owing to the large amount of litter fall that decomposes within that layer. The amount of SOC in the top layer of the soil, to about 1 m depth, varies depending on the ecosystem and on microbial activities as influenced by abiotic and biotic factors. In arid and semi-arid areas, the SOC in the upper layer is estimated at about 30 tons/ha, whereas in cold organic regions it is estimated at about 800 tons/ha [3]. Overall, most studies have reported that C storage and concentration increase in the upper layers of the soil and decrease with depth among hardwood and softwood tree species in different forest types [25,26]. In this study, SOC was also found to vary with stand age: young forest stands had higher soil carbon than middle-aged ones. This may be explained by the different rates of decomposition, which is characterized by at least two stages: the first stage involves leaching of soluble compounds and decomposition of solubles and non-lignified cellulose and hemicellulose, resulting in 0-40% mass loss, whereas the late stage encompasses the degradation of lignified tissue. It may be further explained by incidences of fire, which increase ash that is rich in exchangeable bases and thereby reduce soil acidity. A study by [27] reported the effect of lime in shifting the ectomycorrhizas in red pine plantations. Ectomycorrhizas are a significant component of the forest floor in red pine plantations and produce high levels of surface acid phosphatase activity; added lime therefore has the potential to alter the mineralization of organic P and the P nutrition of the host. Overall, the quantified soil carbon increased across soil depths because stocks accumulate through the multiplication by soil depth and bulk density.
Bulk density increased with soil depth owing to low organic matter, poor structure, low moisture, limited root penetration and the pressure exerted by overlying layers [28]. The significantly higher amount of N under Pinus patula compared with Cupressus lusitanica and Eucalyptus saligna may be explained by forest-floor effects leading to large differences in litter-fall turnover rates and in the amount of soil organic matter accumulated in the soils. Studies have revealed that a low C/N ratio of broadleaves leads to a better humus layer status [14]. The N quantities may also be influenced by the amount of lignin in different species, which has great effects on the nitrogen dynamics of forest ecosystems as well as other ecological processes [18]. For instance, the rate at which forest litter decomposes is an important aspect of assessing the past, current and future carbon and N responses of forests under changing climatic conditions. Litter decomposition entails physical and chemical processes that reduce litter to carbon dioxide, water and mineral nutrients, regulated by a number of biotic and abiotic factors [18,29]. Nitrogen comes mainly from three sources, namely uptake from the soil, foliar uptake of atmospheric deposition and internal reallocation from one organ to another [30]; increased N deposition thus raises the rate of soil organic matter accumulation. A study on carbon and nitrogen release from decomposing Scots pine, Norway spruce and silver birch stumps found that N was released considerably more slowly from the stumps than from the stems and branches [20]. However, [31] reported that forests respond to increased N availability with increases in stand leaf area, net photosynthesis and stem growth. This concurs with [32], who reported that the mineral soil N status among tree species was strongly related to litter-fall N status and was significantly higher in the 0-30 cm soil depth.
The findings on the other selected soil parameters agreed with the soil classifications of Central Kenya, which are known to be largely nitisols. These are characterized by pH <5.5, due to leaching of soluble bases, and a high clay content of >35% [33]. The correlation of soil pH with P therefore indicated the levels at which these elements would be available to plants to support growth and biomass accumulation for enhanced carbon sequestration. At both sites, C, N, P and K levels were high, indicating high precipitation and soil nutrient mobilization as influenced by the different tree species, and hence the availability of major nutrients for tree uptake and forest productivity. Soil pH usually has a large influence on mineral uptake; highly acidic soils do not provide good conditions for the microorganisms that are valuable for the decomposition of litter and other dead wood, nutrient fixation and carbon sinks.
The positive relationship between C and N showed that available N could also be used as an indicator of soil carbon sequestration. This is because N deposition on forests may increase C through increased growth and the accumulation of soil organic matter via increased litter production or N-enriched litter, which reduces long-term decomposition rates of organic matter. Other studies have shown such a relationship between C and N and offered explanations including large differences in the turnover rates of foliar litter fall, forest management and tree species, among others. An increase in nitrogen deposition on forests over a longer period may likewise reduce the decomposition of organic matter. In general, this showed that soils in various plantation forests in Central Kenya hold substantial soil carbon stocks with potential for the mitigation of climate change. For instance, [34] found that levels of soil C and N declined during the second rotation of Pinus radiata, while C/N ratios in the surface soil increased from 27 to 30 in lower-quality sites and from 24 to 26 in higher-quality sites.
Overall, the implications of soil carbon differentials for the sustainable development mechanism are evident in this study. Specifically, the findings corroborate a series of studies demonstrating the potential role of SOC in climate change mitigation through the sequestration of atmospheric carbon dioxide, thus providing a long-lasting and cost-effective route to the sustainable reduction of greenhouse gases. This calls for strengthening the afforestation of agricultural soils and the management of forest plantations to enhance SOC stocks through sequestration, as influenced by the interactions between climate, soil, tree species and management as well as the rate of chemical decomposition of the litter. To harness this potential, the inclusion of SOC in sustainable mitigation mechanisms, especially in afforestation and reforestation programmes tapping the forest carbon offset markets under the Clean Development Mechanism (CDM) and voluntary mechanisms, would spur socio-economic growth and environmental sustainability. For example, this study revealed that in most cases the amount of CO2e among species across the study sites was higher in soils than in above- and below-ground biomass. Considering SOC in this regard resonates well with the Paris Agreement (PA), in which parties agreed to keep the rise in temperature below 2°C above pre-industrial levels. The implementation of the Paris Agreement is based on the understanding that countries share reductions according to common but differentiated responsibilities and respective capabilities; this is reflected in the proposed Nationally Determined Contributions (NDCs), in which the role of SOC should be explored opportunistically within the overall investments in mitigation options put forward by signatory countries.
Further, the evidence in this study points to a landscape-based approach to addressing SOC as an important sink that needs to be mainstreamed in the NDCs, using existing global climate financing such as the Green Climate Fund and other incentives to spur sustainable development in the context of combating the negative effects of climate change. For example, pushing for SOC in a Sustainable Development Mechanism with well-developed structures could result in significant investments in the forest sector to enhance carbon stocks and the sustainable management of forests. This would, in turn, promote related sectors of the economy, such as agriculture, trade and energy, through the various functions of forests and tree resources in the socio-economic development of the nation. In this sense, the future of the SDM will rely on strengthening its flexibility in addressing the various options for reducing greenhouse gas (GHG) emissions put forward by the parties to the Paris Agreement. This would increase the incentives, under different schemes, that countries can tap to reduce their vulnerability to climate change and improve their mitigation efforts for sustainable development.
Conclusion and Recommendations
The role of forest SOC in reducing atmospheric GHG is apparent in many studies. It is also evident that different species sequester different amounts of carbon depending on abiotic and biotic factors. The potential carbon sink of forest plantations, as presented in this study, should not be underestimated in its overall contribution to climate change mitigation. National afforestation and reforestation initiatives to improve forest cover, such as the current call by the national and county governments of the Republic of Kenya, will certainly result in increased carbon sinks. This will work best if governments also continue to promote alternative livelihoods in order to minimize leakage of carbon benefits. Realizing the contribution of SOC at a global scale will therefore require strengthened policy and institutional frameworks to support investment in the forest sector. This study therefore recommends setting up reliable baseline emission scenarios for the forest sector that take into account the contribution of SOC as both a source and a sink. In this manner, appropriate quantification/measurement, monitoring, reporting and verification will be institutionalized, ensuring that the full contribution of the forest sector to climate change mitigation is valued in the various implementation options of the Nationally Determined Contributions. This is important because soil organic carbon dynamics are usually driven by changes in climate and in land cover or land use, calling for stronger integration of LULUCF and AFOLU interventions in the various NDC options.
|
v3-fos-license
|
2014-10-01T00:00:00.000Z
|
2008-08-06T00:00:00.000
|
3261734
|
{
"extfieldsofstudy": [
"Chemistry",
"Medicine",
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://journals.iucr.org/e/issues/2008/09/00/er2056/er2056.pdf",
"pdf_hash": "fd1592cd334426aface16498257121c3cdc4acf1",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41950",
"s2fieldsofstudy": [
"Chemistry"
],
"sha1": "e6a738fd05d358618625e9327491744846026615",
"year": 2008
}
|
pes2o/s2orc
|
6-[(Dimethylamino)methyleneamino]-1,3-dimethylpyrimidine-2,4(1H,3H)-dione dihydrate
Uracil, the pyrimidine nucleobase, which combined with adenine forms one of the major motifs present in the biopolymer RNA, is also involved in the self-assembly of RNA. In the title compound, C9H14N4O2·2H2O, the asymmetric unit contains one dimethylaminouracil group and two water molecules. The plane of the N=C—NMe2 side chain is inclined at 27.6 (5)° to the plane of the uracil ring. Both water molecules form O—H⋯O hydrogen bonds with the carbonyl O atoms of the uracil group. Additional water–water hydrogen-bond interactions are also observed in the crystal structure. The O—H⋯O hydrogen bonds lead to the formation of a two-dimensional hydrogen-bonded network cage consisting of two dimethylaminouracil groups and six water molecules.
AJT thanks the Department of Science and Technology (DST), Government of India, New Delhi, for financial support and SD thanks Tezpur University for an Institutional Fellowship.
Supplementary data and figures for this paper are available from the IUCr electronic archives (Reference: ER2056).
S1. Comment
Uracil, the pyrimidine nucleobase which, combined with adenine, forms one of the major motifs present in the biopolymer RNA, is also involved in the self-assembly of RNA (Sivakova & Rowan, 2005). The versatility of uracil and its derivatives, particularly the annulated ones, is well recognized by synthetic (Sasaki et al., 1998) as well as biological chemists (Pontikis & Monneret, 1994) owing to their wide range of biological activities. The chemistry of the uracil moiety and its derivatives has expanded enormously in the past decades because of their mechanistic, synthetic and biological importance, which has made them of substantial experimental and theoretical interest.
The synthesis and characterization of the title compound, (I), were reported recently from our laboratory (Thakur et al., 2001), through the reaction of 6-amino-1,3-dimethylbarbituric acid with N,N-dimethylformamide dimethyl acetal (DMF-DMA) under thermal conditions or microwave irradiation in the solid state. Our ongoing research program is aimed at synthesizing fused pyrimidine derivatives of biological significance. We have also been investigating the rotational barrier of the two methyl groups in the exocyclic N9-Me2 part of (I), which will help us understand the mechanism of the Diels-Alder reaction of (I).
The asymmetric unit of (I) comprises one dimethylaminouracil molecule and two water molecules (Fig. 1). The six-membered uracil ring is planar, and the plane of its attached side chain is inclined at 27.6 (5)° to the plane of the uracil ring.
The crystal structure is stabilized by O—H⋯O hydrogen bonds (Table 1). Both water molecules (O1W and O2W) form O—H⋯O hydrogen bonds with the carbonyl atoms (O1 and O2) of the uracil moiety. In addition, water⋯water interactions are also observed in the crystal structure. The water molecules interconnect with each other and in turn link the uracil moieties, thereby forming a two-dimensional hydrogen-bonded network cage consisting of two dimethylaminouracil moieties and six water molecules (Fig. 2).
S2. Experimental
In order to obtain suitable single crystals for this study, the title compound was dissolved in ethanol (98%) and the solution was allowed to evaporate very slowly.
S3. Refinement
The H atoms of the water molecules were located in a difference Fourier map and refined isotropically. Distance restraints were also applied to the H atoms of the water molecules with a set value of 0.86 (1) Å. All other H atoms were positioned geometrically and treated as riding on their parent C atoms, with C—H distances of 0.93–0.96 Å, and with Uiso(H) values of 1.5Ueq(C) for methyl H atoms and 1.2Ueq(C) for the other H atoms. The methyl groups were allowed to rotate but not to tip. Weighting: P = (Fo² + 2Fc²)/3. (Δ/σ)max = 0.001; Δρmax = 0.34 e Å⁻³; Δρmin = −0.21 e Å⁻³. Extinction correction: SHELXL97 (Sheldrick, 2008), Fc* = kFc[1 + 0.001 × xFc²λ³/sin(2θ)]^(−1/4); extinction coefficient: 0.063 (11).
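Written out, the refinement expressions above take the following form. The weight w is shown in the general SHELXL form with symbolic coefficients a and b, whose numerical values are not given in the text as preserved; only the definition of P and the extinction expression are taken directly from the text.

```latex
% General SHELXL weighting scheme (coefficients a, b are symbolic here;
% their numerical values are not available in the text)
w = \frac{1}{\sigma^2(F_o^2) + (aP)^2 + bP}, \qquad
P = \frac{F_o^2 + 2F_c^2}{3}

% Secondary-extinction correction as applied by SHELXL97, with refined
% extinction coefficient x = 0.063\,(11)
F_c^{*} = k F_c \left[ 1 + 0.001 \, x F_c^2 \lambda^3 / \sin(2\theta) \right]^{-1/4}
```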
Special details
Geometry. All e.s.d.'s (except the e.s.d. in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell e.s.d.'s are taken into account individually in the estimation of e.s.d.'s in distances, angles and torsion angles; correlations between e.s.d.'s in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell e.s.d.'s is used for estimating e.s.d.'s involving l.s. planes.
Refinement. Refinement of F² against ALL reflections. The weighted R-factor wR and goodness of fit S are based on F²; conventional R-factors R are based on F, with F set to zero for negative F². The threshold expression of F² > σ(F²) is used only for calculating R-factors(gt) etc. and is not relevant to the choice of reflections for refinement. R-factors based on F² are statistically about twice as large as those based on F, and R-factors based on ALL data will be even larger.
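Since this paragraph distinguishes R-factors based on F from those based on F², the conventional definitions (standard crystallographic usage, not spelled out in the text itself) are:

```latex
% Conventional R-factor, based on F
R = \frac{\sum \bigl| |F_o| - |F_c| \bigr|}{\sum |F_o|}

% Weighted R-factor and goodness of fit S, based on F^2
wR_2 = \left[ \frac{\sum w (F_o^2 - F_c^2)^2}{\sum w (F_o^2)^2} \right]^{1/2},
\qquad
S = \left[ \frac{\sum w (F_o^2 - F_c^2)^2}{n - p} \right]^{1/2}
```

where n is the number of reflections and p is the number of refined parameters.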
|
v3-fos-license
|
2016-05-12T22:15:10.714Z
|
2013-07-18T00:00:00.000
|
7905869
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0069266&type=printable",
"pdf_hash": "549eefa431814f9726214fb2fdee2ae542969d18",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41951",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "549eefa431814f9726214fb2fdee2ae542969d18",
"year": 2013
}
|
pes2o/s2orc
|
Sensitive Assessment of the Virologic Outcomes of Stopping and Restarting Non-Nucleoside Reverse Transcriptase Inhibitor-Based Antiretroviral Therapy
Background Non-nucleoside reverse transcriptase inhibitor (NNRTI)-resistant mutants have been shown to emerge after interruption of suppressive NNRTI-based antiretroviral therapy (ART) using routine testing. The aim of this study was to quantify the risk of resistance by sensitive testing and correlate the detection of resistance with NNRTI concentrations after treatment interruption and virologic responses after treatment resumption. Methods Resistance-associated mutations (RAMs) and NNRTI concentrations were studied in plasma from 132 patients who interrupted suppressive ART within SMART. RAMs were detected by Sanger sequencing, allele-specific PCR, and ultra-deep sequencing. NNRTI concentrations were measured by sensitive high-performance liquid chromatography. Results Four weeks after NNRTI interruption, 19/31 (61.3%) and 34/39 (87.2%) patients showed measurable nevirapine (>0.25 ng/ml) or efavirenz (>5 ng/ml) concentrations, respectively. Median eight weeks after interruption, 22/131 (16.8%) patients showed ≥1 NNRTI-RAM, including eight patients with NNRTI-RAMs detected only by sensitive testing. The adjusted odds ratio (OR) of NNRTI-RAM detection was 7.62 (95% confidence interval [CI] 1.52, 38.30; p = 0.01) with nevirapine or efavirenz concentrations above vs. below the median measured in the study population. Staggered interruption, whereby nucleos(t)ide reverse transcriptase inhibitors (NRTIs) were continued for median nine days after NNRTI interruption, did not prevent NNRTI-RAMs, but increased detection of NRTI-RAMs (OR 4.25; 95% CI 1.02, 17.77; p = 0.03). After restarting NNRTI-based ART (n = 90), virologic suppression rates <400 copies/ml were 8/13 (61.5%) with NNRTI-RAMs, 7/11 (63.6%) with NRTI-RAMs only, and 51/59 (86.4%) without RAMs. The ORs of re-suppression were 0.18 (95% CI 0.03, 0.89) and 0.17 (95% CI 0.03, 1.15) for patients with NNRTI-RAMs or NRTI-RAMs only respectively vs. those without RAMs (p = 0.04). Conclusions Detection of resistant mutants in the rebound viremia after interruption of efavirenz- or nevirapine-based ART affects outcomes once these drugs are restarted. Further studies are needed to determine RAM persistence in untreated patients and impact on newer NNRTIs.
Introduction
The SMART trial randomized HIV-1 infected patients with CD4 counts >350 cells/mm³ to take antiretroviral therapy (ART) either continuously or episodically, guided by the CD4 cell count [1]. Results showed that interrupting treatment carried a significant risk of morbidity and mortality. There remain circumstances when ART discontinuation may be required (e.g., due to toxicity), or may occur unplanned due to patient choice or problems with drug supply (e.g., in resource-limited settings). In patients receiving ART with agents that have different elimination half-lives, simultaneous interruption of all drugs can lead to a period of inadvertent monotherapy, which can result in viral replication in the presence of a single drug, promoting selection of drug-resistant mutants. This is expected to be a problem especially with the non-nucleoside reverse transcriptase inhibitors (NNRTIs), as they show the longest plasma half-lives among available antiretrovirals [2]. NNRTI clearance rates show significant inter-person variability, however, reflecting the activity of enzymes responsible for NNRTI metabolism, which in turn are influenced by multiple encoding and regulatory genes [3,4]. A low genetic barrier to resistance further compounds the problem of stopping NNRTI-based ART, as a single mutation in reverse transcriptase (RT) is typically sufficient to abrogate drug activity [5]. It can therefore be proposed that selection of NNRTI resistance may occur in patients stopping NNRTI-based ART and that the risk is higher the slower the NNRTI clearance rate. However, previous studies investigating the correlation between NNRTI concentrations after treatment interruption and detection of NNRTI resistance have not been conclusive, possibly due to small numbers and low sensitivity of testing methods [6,7]. The extent to which treatment interruption leads to emergence of drug-resistant virus is important for understanding the full implications of stopping ART in relation to both subsequent treatment outcomes and risk of transmission of drug-resistant HIV.
The risk of resistance after interruption of NNRTI-based ART has been previously estimated using Sanger sequencing [6][7][8][9]. We reported that among 141 patients who interrupted NNRTI-based ART within SMART, 18 (13%) had evidence of NNRTI resistance in the two months following interruption [8]. Sanger sequencing fails to detect mutants present in the viral quasispecies at a frequency below approximately 20%, suggesting that an even greater proportion of patients may carry resistant mutants below this detection limit. The issue is especially relevant to NNRTI therapy. Low-frequency NNRTI-resistant mutants have been detected in both ART-naive and NNRTI-experienced patients with and without high-frequency mutants, and shown to impair responses to NNRTI-based ART [10,11]. Recommended strategies to minimize the potential risk of drug resistance after interruption of NNRTI-based ART include stopping the NNRTI first and continuing the remaining drugs in the regimen for a short period, commonly the nucleos(t)ide RT inhibitors (NRTIs) (staggered interruption), or replacing the NNRTI with a ritonavir-boosted protease inhibitor (PI/r) for a short period (switched interruption) [12]. There is limited evidence supporting one particular strategy. In a previous study using Sanger sequencing, no NNRTI-RAMs were detected in virologically suppressed children that stopped nevirapine or efavirenz according to either a staggered or a switched interruption modality [9]. Within SMART, we previously reported that both the detection of drug resistance after interruption (by Sanger sequencing) and resuppression rates after restarting therapy were higher among patients with staggered or switched interruption relative to those with simultaneous interruption [8]. Expanding our previous observations, the aim of this study was to obtain a more accurate estimation of the risk of NNRTI resistance after interruption of NNRTI-based ART by sensitive testing with allele-specific (AS)-PCR and ultra-deep sequencing (UDS). We then investigated the correlation between detection of NNRTI resistance and NNRTI concentrations after treatment interruption, and analyzed the findings in relation to virologic responses after resumption of NNRTI-based ART.
Study Population
Eligible patients were receiving NNRTI-based ART, had a plasma HIV-1 RNA load ('viral load') <400 copies/ml, and were randomized to the drug conservation arm of SMART and thus to undergo a treatment interruption [1]. A total of 132/984 (13.4%) patients who interrupted suppressive NNRTI-based ART in SMART and had stored plasma samples available for testing were included in this sub-study. The modality of interruption was chosen by the treating physician, as previously described [8]. Therapy was re-started when the CD4 count decreased to <250 cells/mm³ or at the occurrence of clinical events [1].
Ethics Statement
The Institutional Review Board at the University of Minnesota approved the proposal for the use of stored specimens. All necessary permits were obtained for the described study, which complied with all relevant regulations. The samples used in this study have been described in previous publications (references 1 and 8).
Drug Concentrations
Efavirenz and nevirapine concentrations were measured by validated [13,14], highly sensitive high-performance liquid chromatography (HPLC) in plasma samples collected at week 4 (visit 1) after NNRTI interruption. The assay lower limit of quantification was 0.25 ng/ml for nevirapine and 5 ng/ml for efavirenz.
Drug Resistance
Plasma samples collected 4–12 weeks after NNRTI interruption were used for resistance testing. Selection was based upon sample availability and viral load levels >3000 copies/ml to allow reliable testing by the sensitive assays. Samples underwent Sanger sequencing and AS-PCR as previously described [15][16][17]. The AS-PCR targeted the NNRTI resistance-associated mutations (RAMs) K103N, Y181C, and Y188L; samples showing K103N were also tested for G190A. In addition, samples were screened for the presence of NRTI-RAMs, including thymidine analogue mutations (TAMs), K65R, Q151M and M184V/I. Mutation-specific interpretative cut-offs ranging from 0.3% to 1% were applied as previously described [15]. In a subset of 21 samples, UDS of the RT amino acid region 100 to 190 (RT100–190 UDS) was performed as previously described [18]; samples were selected randomly from three subsets according to volume availability: samples with RAMs by AS-PCR, samples without RAMs by AS-PCR, and samples that failed the AS-PCR reaction. Briefly, viral RNA was extracted from 500 µl of plasma (EasyMag, bioMérieux, France) and reverse transcribed into cDNA using the Accuscript HF RT enzyme (Agilent, Santa Clara, USA) and random hexamers. The RT region spanning amino acids 100 to 190 was amplified by nested PCR, and pooled barcoded amplicons were sequenced on the GS-FLX instrument (454 Life Sciences, Roche, Branford, USA) according to the manufacturer's standard protocol. The experiment was designed to reach, on average, a mutation detection sensitivity of 1%, and an average coverage of 5,500 reads per position was obtained. Amplicons were sequenced from both ends (forward and reverse). The Amplicon Variant Analyzer (AVA) software (Roche) was used for read mapping and for calculating variant frequencies at each nucleotide position relative to HIV-1 reference strain HXB2. The presence of relevant mutations was manually verified by inspection of the individual flowgrams. A detection limit of 1% was chosen to avoid the high probability of technical artifacts below this threshold [19]. Major RAMs were assigned according to the International IAS-USA list (Nov 2011).
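The frequency arithmetic that the AVA software performs can be sketched in a few lines. The function and variable names below are hypothetical, not part of Roche's software or this study's pipeline, and the read counts are invented for illustration.

```python
# Minimal sketch of the variant-frequency calculation described above:
# frequency = variant-supporting reads / coverage, with the 1% cut-off.
# Names and read counts are illustrative, not from the study's pipeline.

THRESHOLD = 0.01  # 1% detection limit, chosen to avoid technical artifacts

def call_variants(read_counts, coverage, threshold=THRESHOLD):
    """Return mutations whose read frequency meets the threshold.

    read_counts: dict mapping a mutation (e.g. 'G190A') to the number
                 of reads supporting it at that RT codon.
    coverage:    total mapped reads at the position (~5,500 on average
                 in this study).
    """
    calls = {}
    for mutation, n_reads in read_counts.items():
        freq = n_reads / coverage
        if freq >= threshold:
            calls[mutation] = freq
    return calls

# Toy example: 4% G190A at 5,500x coverage is called; 0.5% Y181C is not.
print(call_variants({"G190A": 220, "Y181C": 28}, coverage=5500))
# -> {'G190A': 0.04}
```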
Statistical Analysis
Factors associated with the detection of RAMs by all testing modalities combined (n = 131) were investigated using standard univariable and multivariable logistic regression analysis. All factors of interest were stipulated a priori and included in the multivariable models. In the first model, the variables analyzed were age, gender, ethnicity, HIV-1 transmission risk group, nadir CD4 count, duration of ART before interruption, viral load and CD4 count at the time of interruption, and interruption modality. In a second model exploring factors associated with the detection of NNRTI-RAMs, the analysis also included nevirapine and efavirenz plasma concentrations (n = 70). To overcome the limitation related to the small number of observations, the drug concentration data were pooled and analyzed as categorical variables (either above or below the median concentration measured in the study population). This approach was stipulated a priori, as there was insufficient statistical power to analyze the two drugs separately or to assess the interaction between nevirapine and efavirenz in this model. The proportion of patients who regained virologic suppression <400 copies/ml after re-starting NNRTI-based ART was investigated using logistic regression analysis as an intention-to-treat switch = failure analysis. Only patients restarting ART without a PI and with at least one viral load measurement in the following 4–12 months were included (n = 90). All factors of interest were stipulated a priori and included in the multivariable models. The variables included were age, gender, duration of ART before interruption, viral load and CD4 count at the time of interruption, time between interrupting and restarting ART, the NNRTI restarted, the interruption modality, and the presence of RAMs. P-values were not corrected for multiple comparisons. All statistical analyses were performed using STATA software.
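For readers who want to reproduce this kind of model outside STATA, a minimal sketch follows using Python's statsmodels on synthetic data; the column names and data are hypothetical stand-ins for the covariates listed above, not the study's dataset.

```python
# Minimal sketch of the multivariable logistic regression described above,
# using statsmodels rather than STATA. Column names and the synthetic data
# are hypothetical; the study's real covariates are listed in the text.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 131
df = pd.DataFrame({
    "ram_detected": rng.integers(0, 2, n),             # >=1 RAM found
    "age": rng.normal(44, 9, n),
    "years_on_art": rng.uniform(0.5, 12, n),
    "vl_lt50_at_interruption": rng.integers(0, 2, n),  # VL <50 copies/ml
    "interruption": rng.choice(
        ["simultaneous", "staggered", "switched"], n),
})

# Adjusted odds ratios come from exponentiating the fitted coefficients.
model = smf.logit(
    "ram_detected ~ age + years_on_art + vl_lt50_at_interruption"
    " + C(interruption, Treatment('simultaneous'))",
    data=df,
).fit(disp=False)

odds_ratios = np.exp(model.params)
conf_int = np.exp(model.conf_int())  # 95% CIs for the ORs
print(pd.concat([odds_ratios.rename("OR"), conf_int], axis=1))
```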
Drug Resistance
Resistance testing was performed on samples collected a median of 8 weeks (IQR 4–11) after NNRTI discontinuation. At the time of testing, the median viral load was 27,618 copies/ml (IQR 8,480–76,200) (Table 3). Among the 19 samples that underwent both tests, UDS confirmed the AS-PCR results, with the exception of one sample that showed G190A by UDS but not by AS-PCR; the frequency of the G190A mutant was 4% by UDS. In addition, RT100–190 UDS detected NNRTI-RAMs not targeted by AS-PCR, including V179D, L100I, and K101E. Detection of NNRTI-RAMs according to the interruption modality was 13/62 (21.0%) for simultaneous interruption, 8/46 (17.4%) for staggered interruption, and 1/23 (4.3%) for switched interruption.
Predictors of Drug Resistance
Detection of NNRTI-RAMs was less likely in patients with a viral load recorded as <50 copies/ml at the time of treatment interruption, with an adjusted odds ratio (OR) of 0.28 (95% confidence interval, CI 0.09, 0.91; p = 0.03) (Table 4). There was also a trend towards a reduced risk of NNRTI-RAMs after a switched interruption relative to a simultaneous interruption. NNRTI-RAMs were detected in 10/31 (32.3%) patients with week 4 NNRTI concentrations above the median level measured in the study population (1 ng/ml for nevirapine and 16 ng/ml for efavirenz), and 2/34 (5.9%) patients with concentrations below the median level (p = 0.007). A separate multivariable model was used to assess the association between NNRTI-RAMs and drug concentrations, to account for the fact that drug concentrations were only available in 70 patients. In this analysis, NNRTI-RAM detection was more likely in patients with NNRTI concentrations above the median levels (>1 ng/ml for nevirapine and >16 ng/ml for efavirenz), with an adjusted OR of 7.62 (95% CI 1.52, 38.30; p = 0.01). Detection of NRTI-RAMs was associated with duration of ART exposure prior to interruption, with an adjusted OR of 1.26 for each year longer (95% CI 1.10, 1.45; p = 0.001); the nadir CD4 count, with an adjusted OR of 0.68 for each 50 cells/mm³ higher (95% CI 0.52, 0.87; p = 0.003); and staggered interruption relative to simultaneous interruption, with an adjusted OR of 4.25 (95% CI 1.02, 17.77; p = 0.03).
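As a transparency check, the crude (unadjusted) odds ratio can be recomputed from the raw counts quoted above (10/31 vs. 2/34); with a standard Woolf confidence interval it happens to reproduce the reported figures exactly, although the published OR comes from an adjusted model.

```python
# Crude odds ratio for NNRTI-RAM detection by week-4 drug concentration,
# recomputed from the counts in the text: 10/31 patients with RAMs above
# the median concentration vs. 2/34 below. The Woolf (log) method gives
# the 95% CI. That this matches the reported adjusted OR of 7.62
# (95% CI 1.52, 38.30) is a property of this particular dataset.
import math

a, b = 10, 31 - 10   # above median: RAMs / no RAMs
c, d = 2, 34 - 2     # below median: RAMs / no RAMs

or_crude = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(or_crude) - 1.96 * se_log_or)
hi = math.exp(math.log(or_crude) + 1.96 * se_log_or)
print(f"OR = {or_crude:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
# -> OR = 7.62, 95% CI (1.52, 38.30)
```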
Discussion
This study demonstrated RAMs in a substantial number of patients experiencing rebound viremia after stopping suppressive NNRTI-based ART. Interpretation of the findings should take two limitations into consideration. Viral load assays with different lower limits of quantification were used in SMART and viral load suppression ,50 copies/ml could not be demonstrated in all patients at study entry. Furthermore, due to sample availability, NNRTI concentrations were obtained only in a subset of patients.
Nonetheless, the data provide sufficient evidence to indicate that different factors influenced the detection of RAMs after ART interruption. NNRTI-RAMs were less likely in patients who had a viral load recorded as <50 copies/ml at the time of interruption, indicating a risk of selecting NNRTI resistance even at low levels of viremia between 50 and 400 copies/ml. In addition, NNRTI-RAMs were more likely in patients showing higher plasma NNRTI concentrations at week 4 after interruption. These findings provide support to the notion that selection of NNRTI resistance can occur in patients experiencing slower NNRTI clearance after ART interruption. Two previous studies did not observe an association between NNRTI concentrations after interruption and detection of NNRTI-RAMs [6,7]. One study found that median efavirenz or nevirapine concentrations at day 15 after interruption did not differ significantly between 12 patients with NNRTI-RAMs (by Sanger sequencing) and 20 patients without RAMs [6]. Of note, the NNRTI was quantifiable in less than a third of available samples. A second study reported that median efavirenz concentrations and the rate of efavirenz decline over 7–10 days after interruption did not differ significantly between 7 patients with NNRTI-RAMs (by Sanger sequencing) and 14 patients without RAMs. Thus, the size of the study population, the timing of the assessment, and the sensitivity of the testing methods for both drug concentrations and NNRTI-RAMs all differed in our study compared with the two previous reports. A previous study of 19 patients receiving intermittent efavirenz-based ART assayed efavirenz concentrations and used AS-PCR to detect the NNRTI-RAM K103N during the off-therapy periods. Consistent with our findings, AS-PCR increased detection of NNRTI resistance relative to Sanger sequencing; furthermore, the half-life of efavirenz was higher in 8 patients in whom K103N emerged compared with 11 patients in whom it did not (p = 0.04) [20]. Genetic predictors of NNRTI clearance are being identified, which may help tailor NNRTI discontinuation. The cytochrome P450 (CYP)-2B6 isoenzyme (CYP2B6), for instance, catalyzes the main oxidative metabolism reaction for efavirenz. Three polymorphisms within the CYP2B6 gene have been associated with efavirenz estimated Cmin, although together they explain only one-third of inter-individual variability [4].

Table 3. Resistance-associated mutations in reverse transcriptase at week 8 after interruption of NNRTI-based ART, according to the interruption modality, HIV-1 RNA load at interruption, and NNRTI concentrations at week 4 post-interruption. Resistance-associated mutations (RAMs) in reverse transcriptase (RT) were detected by Sanger sequencing, allele-specific PCR (AS-PCR) and ultra-deep sequencing (UDS). AS-PCR targeted the NNRTI-RAMs K103N, Y181C, Y188L and G190A (the latter only in samples with K103N), and the following NRTI-RAMs: thymidine analogue mutations (M41L, D67N, K70R, L210W, T215Y/F, K219Q), K65R, Q151M, and M184V/I. Mutation-specific cut-offs ranging between 0.3% and 1% were used for AS-PCR interpretation as previously described [15]. UDS targeted the RT amino acid region 100–190; a detection limit of 1% was chosen to avoid the high probability of technical artifacts below this threshold [19]. RAMs detected by sensitive testing but not by Sanger sequencing are indicated in bold.
Meanwhile, measuring drug levels at week 4 after NNRTI discontinuation may potentially offer a readily available tool to assign patients to low or high risk of NNRTI-RAMs. It must be pointed out, however, that the fact that drug concentrations were measured in only a subset of patients limits the power of our conclusions. While the dataset was larger than in previous studies, the statistical analysis required a separate model and pooling of the nevirapine and efavirenz concentration data, which involved an underlying assumption that the effect of nevirapine concentrations is the same as that of efavirenz concentrations. The drug concentration data were analyzed as categorical variables, either above or below the respective median concentrations measured in the study population. Thus, the results provide proof-of-principle evidence that patients with slower nevirapine or efavirenz clearance have a greater risk of NNRTI resistance after interruption, although further studies are required to identify drug-specific cut-offs that are predictive of resistance. Further analyses of interest may also include the correlation between drug concentrations measured before and after treatment interruption. In addition, although we were unable to identify an association between detection of NNRTI-RAMs and the NRTIs used (data not shown), the different half-lives of NRTIs have the potential to influence the risk of resistance development [7], and their effects warrant further investigation.
Detection of NRTI-RAMs was surprisingly common in this study. NRTI-RAMs were more likely in patients with a long previous ART history, suggesting re-emergence of resistant mutants archived during previous virologic failures, rather than, or in addition to, de novo selection during viral load rebound. In support of this hypothesis, most NRTI-RAMs were detected by Sanger sequencing, and there was a high prevalence of patients showing two or more TAMs. As TAMs are known to emerge in stepwise fashion during prolonged therapy with zidovudine or stavudine [21], it would seem that multiple TAMs were unlikely to arise for the first time solely as a result of treatment interruption. Detailed treatment histories and results of previous resistance tests would be required to corroborate this hypothesis. Interestingly, there was an increased detection of NRTI-RAMs in patients with a staggered interruption, suggesting a potential for selection or reselection of mutants by the continued NRTIs.
We previously reported that patients who had undergone a simultaneous interruption showed reduced virologic responses after restarting ART compared with those with a staggered or a switched interruption [8]. Here we confirm the previous observation that simultaneous interruption should be avoided when possible. We further propose that a switched interruption may be preferable to a staggered interruption both to offer improved protection against emergence of NNRTI-RAMs and reduce selection of NRTI resistance; the latter may be especially important in patients with previous NRTI experience.
In our previous study we used Sanger sequencing to detect RAMs after ART interruption [8]. Here we demonstrated that sensitive testing increased the prevalence and spectrum of NNRTI-RAMs detected during viral load rebound. The data provide a measure of the potential risk of NNRTI resistance after ART interruption. It should be noted that, at 16.8%, the overall prevalence of NNRTI-RAMs was lower than that observed in patients either failing NNRTI-based ART or receiving single-dose nevirapine for the prevention of mother-to-child transmission [10,22]. This may be explained by the consideration that both sufficient levels of virus replication and sufficient drug concentrations must co-exist to allow selection of drug resistance. The optimal "selection window" is likely to be narrower in patients stopping NNRTI-based ART with a suppressed viral load relative to patients receiving single-dose nevirapine in the presence of a fully replicating virus. A further consideration is that patients interrupting NNRTI-based ART in SMART had already achieved steady-state NNRTI pharmacokinetics through the induction of hepatic enzymes. In addition, testing samples collected several weeks after treatment interruption, while required to ensure adequate viral load levels, may have missed the earlier emergence of resistant strains. Finally, the AS-PCR method applies strict cut-offs for interpreting positivity.
The AS-PCR methodology employed in this study has undergone extensive validation [15][16][17]. In previous studies we demonstrated that low-frequency NNRTI-RAMs detected by AS-PCR were predictive of virologic failure among naive patients starting first-line NNRTI-based ART [11,16,17], and also influenced the detection probability and type of NNRTI-RAMs detected at the time of virologic failure [23]. One downside of AS-PCR is that it is mutation-specific, to a large extent clade-specific, and labor-intensive. In recent years, next-generation sequencing methodologies, including UDS, have become available that allow the quantitative detection of mutants with greatly enhanced sensitivity relative to Sanger sequencing (reliably down to a cut-off of about 1%) [19]. Direct comparisons of AS-PCR with UDS are limited. A previous study of 11 samples undergoing AS-PCR for K103N showed a good level of agreement with UDS [20]. Here, using a subset of 21 samples that underwent both AS-PCR and UDS, we found good concordance between the two techniques at the respective validated cut-offs for interpretation. Importantly, although the AS-PCR targeted a relatively small number of NNRTI-RAMs, these were the key RAMs associated with resistance to efavirenz or nevirapine, and the spectrum was only marginally expanded in the samples that also underwent UDS of the RT region spanning amino acids 100 to 190.
In summary, this study provides substantive evidence in support of the widely cited hypothesis that stopping NNRTI-based ART carries a risk of drug resistance. We show that viral load levels at the time of interruption, plasma NNRTI concentrations at week 4 after interruption, overall treatment history, and interruption modality combine to influence the risk of resistance and ultimately predict virologic responses when NNRTI-based ART is resumed. Further studies are required to assess the persistence of NNRTI-RAMs in patients remaining off therapy, the potential for their onward transmission, and the implication of these findings for etravirine and rilpivirine use.

Notes to Table 5: The analysis included 90 patients who restarted NNRTI-based ART without a protease inhibitor and had at least one viral load measurement in the 4–12 months after restarting therapy. As noted above, some patients had the viral load measured by assays with a lower limit of quantification of either 75 or 400 copies/ml. NNRTI = non-nucleoside reverse transcriptase inhibitor; ART = antiretroviral therapy; OR = odds ratio; CI = confidence interval.
|
v3-fos-license
|
2020-03-25T13:15:15.174Z
|
2020-03-25T00:00:00.000
|
214624892
|
{
"extfieldsofstudy": [
"Chemistry"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fsufs.2020.00019/pdf",
"pdf_hash": "e23d75d713d74985eaa228008da832b9fe43f5d7",
"pdf_src": "Frontier",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41953",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural and Food Sciences"
],
"sha1": "e23d75d713d74985eaa228008da832b9fe43f5d7",
"year": 2020
}
|
pes2o/s2orc
|
Comparing the Efficacy of Two Triple-Wash Procedures With Sodium Hypochlorite, a Lactic–Citric Acid Blend, and a Mix of Peroxyacetic Acid and Hydrogen Peroxide to Inactivate Salmonella, Listeria monocytogenes, and Surrogate Enterococcus faecium on Cucumbers and Tomatoes
This study was designed to evaluate two triple-wash procedures with commercial antimicrobials to inactivate foodborne pathogens and surrogate bacteria on cucumbers and tomatoes. Fresh, West Virginia locally grown cucumbers and tomatoes were dip-inoculated with Salmonella Typhimurium and Tennessee, Listeria monocytogenes (3-strain), and Enterococcus faecium. Produce was washed through two triple-wash steps (10 s each) including water dip, antimicrobial dip, and water dip (WAW), or water dip, water dip, and antimicrobial dip (WWA), followed by draining (2 min) on aluminum foil. A triple-water (WWW) process was also included as a water-only control. Tested treatments were (1) water; (2) sodium hypochlorite (SH, 100 ppm, pH 8.2); (3) acidified sodium hypochlorite (ASH, 100 ppm, pH 6.8 adjusted by citric acid); (4) lactic and citric acid blend (LCA, 2.5%); and (5) a H2O2-peroxyacetic acid mix [SaniDate-5.0 (SD) 0.0064, 0.25, and 0.50%]. Surviving bacteria were recovered on xylose lysine tergitol-4 (XLT-4, Salmonella), Modified Oxford (MOX, L. monocytogenes), and bile esculin agar (E. faecium). Data (two replicates, four samples/replicate) were analyzed using the mixed model procedure of SAS (P = 0.05). Counts of Salmonella, L. monocytogenes, and E. faecium on unwashed cucumbers and tomatoes were 5.42–6.23, 6.31–6.92, and 6.05 log colony-forming units (CFU)/produce, respectively. Triple-wash with water only reduced all three tested bacteria by 0.45–1.36 log CFU/fruit. Triple-wash by WWA with antimicrobials achieved additional reductions [least squares means (LsMeans)] of 0.38 log CFU/cucumber (Salmonella), 0.56 log CFU/cucumber (E. faecium), 1.48 log CFU/tomato (Salmonella), 1.09 log CFU/tomato (L. monocytogenes), and 0.71 log CFU/tomato more than the WAW procedure. Applying SD-0.25 and SD-0.50% solutions in triple-washing cucumbers and tomatoes resulted in reductions (P > 0.05) similar to ASH and greater reductions (P < 0.05) than SH and LCA. E. faecium was less susceptible (P < 0.05) or there was no difference (P > 0.05) in comparison with Salmonella in most cases, except for tomatoes treated with WWA. The results of this study indicate that SD could be used as an alternative antimicrobial agent for chlorine water in triple-wash processing at local small produce plants. Future pilot plant validation studies and cost-effectiveness analyses are needed for applying SD solutions in triple-wash by WV local small produce growers.
INTRODUCTION
Among foodborne pathogens, Salmonella is the second most common cause of foodborne illnesses (Dewey-Mattia et al., 2018). A multi-state outbreak of Salmonella Poona on sliced cucumber from 2015 to 2016 caused more than 900 cases of infection, and six deaths were recorded according to the Centers for Disease Control and Prevention (CDC) (CDC, 2019). In 2006, an outbreak across 21 states in the United States caused 183 illnesses, of which 22 patients were hospitalized (CDC, 2006). In 2017, Denmark, Finland, Germany, Ireland, and the United Kingdom were affected by an outbreak of Salmonella on cucumbers (EFSA and ECDC, 2018). Another popular fresh produce commodity, tomatoes, has been associated with Salmonella outbreaks as well. Tomatoes were linked to a recent Salmonella outbreak in Sweden, with 71 identified infections/illnesses (Colombe et al., 2019). In 2015, tomatoes served at Chipotle restaurants in Minnesota were reported to be contaminated with Salmonella (Minnesota Department of Health, 2015).
Listeria monocytogenes is another pathogen of concern reported by the United Fresh Produce Association (UFPA) due to the higher fatality rate of listeriosis (United Fresh Produce Association, 2018). In Iran, the prevalence of L. monocytogenes in sampled cucumbers was reported to be 18% (Hossein et al., 2013). Depending on the area, the prevalence of L. monocytogenes on cucumbers sampled from farmers markets ranged from 3.8% to 17.5% (Strawn et al., 2013). Although outbreaks caused by L. monocytogenes on tomatoes are uncommon, tomatoes are recognized as a common food crop susceptible to foodborne pathogens (Honjoh et al., 2016).
The consumption of fresh produce (fresh vegetables and fruits) in the United States increased to 345 pounds (per capita availability) in 2017 (USDA-ERS, 2019); however, there has been increasing concern regarding the microbial safety of produce sold at farmers markets (Scheinberg et al., 2017). Fresh produce accounted for 46% of foodborne illnesses according to a comprehensive analysis by the CDC (Painter et al., 2013). In a recent study of West Virginia and Kentucky farmers markets, 18.6% of spinach, 10.9% of tomatoes, 18.5% of peppers, and 56.3% of cantaloupes tested positive for Salmonella, and 3.8% of the produce samples were positive for Listeria spp. Fresh produce may be eaten without further processing; therefore, cross-contamination during transportation or handling becomes a serious issue for ensuring produce safety. As the demand for locally produced foods has increased nationwide, with an estimated $20 billion target by 2019 (USDA, 2015), produce safety becomes a recurring problem, especially in regions where raw consumption of fresh produce is common practice.
In 1998, the United States Food and Drug Administration (FDA) published the "Guide to Minimize Microbial Food Safety Hazards for Fresh Fruits and Vegetables" as the main guidelines for Good Agricultural Practices (GAP), which identified that antimicrobial chemicals in processing water may reduce the microbial load on the surface of produce (United States Food and Drug Administration, 1998). The West Virginia University (WVU) Extension Service Small Farm Center encourages small produce growers to apply a triple-wash process during their post-harvest processing if their produce is eaten raw or grown on the ground (Strohbehn et al., 2013). Although more evidence suggests that washing is more a cross-contamination preventative than a pathogen reduction step, the triple-wash process (water rinse, water rinse, and final antimicrobial dip) is still recommended for removing pathogens from food surfaces and for improving on-farm food safety with the assumption of clean water being used in each wash step (Strohbehn et al., 2013). The effectiveness of the triple-wash critically depends on the antimicrobial solutions used. Sodium hypochlorite (SH, referred to as chlorinated water) has been well-recognized and used extensively as an effective and low-cost sanitizer during the post-harvest washing process (Shen et al., 2013). The antimicrobial effect of chlorinated water against Salmonella was observed when spraying 200 ppm chlorinated water onto tomato surfaces (Bari et al., 2003). However, chlorinated water has obvious disadvantages including being easily degraded by organic matter and generating chlorine byproducts . Therefore, local produce growers are interested in learning the efficacy of new antimicrobial solutions. For example, Preston County Workshop Inc., a local small produce processor, is currently using SaniDate-5.0 [SD, a mix of peroxyacetic acid (PAA) and H 2 O 2 ] in their triple-wash tanks to control foodborne pathogens on their produce. According to our recent internal survey from local small produce growers at the 2018 West Virginia Small Farm Conference produce safety training workshop, approximately half of the participants (9/20) currently choose water dip-antimicrobial dip-water dip (WAW), and the other half (10/20) use the water dip-water dip-antimicrobial dip (WWA) procedure. Recently, there is growing recognition that post-harvest washing reduces cross-contamination with no expectation of achieving log count reductions of pathogens on produce (Gombas et al., 2017). Therefore, the concentration level to be used and the antimicrobial efficacy of the triple-wash procedures need to be investigated.
The efficacy of antimicrobial solutions during triple-washing should be tested in real local small produce commercial settings. This is because the dynamics of processing conditions applied by local produce growers can be controlled to a lesser extent than under laboratory conditions. To validate washing procedures, local small produce plants typically have a great aversion to using an actual microbial test pathogen in their processing lines; instead, they usually apply alternative methods such as ensuring a sufficient active sanitizer concentration within the wash tank. The use of a pathogen surrogate is a possible valid approach, and the target surrogate needs to be validated under laboratory conditions first (Hu and Gurtler, 2017). Enterococcus faecium, a Gram-positive, chain-forming coccus, has been studied on our WVU poultry farm as a safer alternative for Salmonella during steam conditioning, antimicrobial inclusion, and standard/thermal aggressive pelleting during broiler feed manufacturing (Boney et al., 2018; Boltz et al., 2019). Enterococcus faecium has also been validated as a surrogate for Salmonella in almond pasteurization (Jeong et al., 2011). However, this Salmonella surrogate has not been validated on fresh produce, and no published study has identified an ideal surrogate for Salmonella during the post-harvest produce washing process.
As outbreaks associated with Salmonella and L. monocytogenes from fresh produce are of concern, a more comprehensive evaluation of washing procedures should be carried out. Therefore, the objectives of this study were to evaluate two triple-wash procedures with three commercial antimicrobials to inactivate Salmonella, L. monocytogenes, and the surrogate E. faecium on WV locally grown cucumbers and tomatoes.
Fresh Produce Sample Preparation and Background Microflora Elimination
Fresh cucumbers and tomatoes were purchased from the WV Morgantown Farmers Market and stored overnight in a refrigerated cooler. Before each experiment, the population of natural microflora on produce surfaces was determined by adding one cucumber or tomato to 200 ml of buffered peptone water (BPW, Alpha Biosciences, Baltimore, MD, USA) with shaking for 30 s, followed by spread-plating onto tryptic soy agar (TSA, Alpha Biosciences, Baltimore, MD, USA) after 10-fold serial dilution and incubating at 35 °C for 48 h. Results indicated that there were ~5–6 log colony-forming units (CFU)/produce of background microbiota on cucumber and tomato surfaces. It was noticed that the microflora on cucumbers interfered with the results of the E. faecium experiment, as the selective medium used for the E. faecium experiment was not selective enough to exclude background microbiota. Therefore, cucumber samples were decontaminated before being subjected to E. faecium inoculation. To reduce microbiota on cucumbers in the E. faecium experiment, a pre-wash procedure was conducted: fresh cucumbers were first rinsed with tap water, then submerged in a chlorinated solution (40 ml bleach in 2 L water) for one minute, followed by submerging in boiling water for 5 s. This surface disinfection method was verified by the absence of colony growth on bile esculin agar (BEA) after shaking cucumbers with 200 ml BPW followed by spread-plating. The decontaminated cucumber samples were air-dried in a biosafety cabinet before inoculation.
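For context, the strength of that decontamination dip can be estimated. The calculation below assumes typical household bleach (~5.25% NaOCl), a strength the text does not state.

```python
# Rough estimate of the chlorine strength of the "40 ml bleach in 2 L
# water" decontamination dip. The 5.25% (52,500 ppm) NaOCl strength of
# household bleach is an assumption; the text does not specify it.
bleach_ml = 40
water_ml = 2000
naocl_ppm_in_bleach = 52_500  # assumed 5.25% w/v household bleach

dip_ppm = bleach_ml * naocl_ppm_in_bleach / (bleach_ml + water_ml)
print(f"~{dip_ppm:.0f} ppm NaOCl")  # ~1029 ppm, far above the 100 ppm wash
```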
Preparation of Inoculum
Salmonella Typhimurium ATCC 14028, Salmonella Tennessee ATCC 10722, L. monocytogenes strains L2624 and L2625 (cantaloupe outbreak serotype 1/2b, donated by Dr. Joshua Gurtler, USDA-ARS, Wyndmoor, PA), and surrogate E. faecium ATCC 8459 were used in this study. Both Salmonella and L. monocytogenes strains were used in previously reported farmers market produce safety projects (Li et al., 2018), and this strain of E. faecium has also been studied in WVU poultry meat projects (Boltz et al., 2019). Salmonella, L. monocytogenes, and E. faecium retrieved from frozen stock cultures were streak-plated onto xylose lysine tergitol-4 agar (XLT-4, Hardy Diagnostics, Santa Maria, CA, USA), Modified Oxford agar (MOX, Hardy Diagnostics, Santa Maria, CA, USA), and BEA (Hardy Diagnostics, Santa Maria, CA, USA), respectively, and then incubated at 35 °C for 48 h to generate single colonies. Before each experiment, a single colony of each strain was picked from XLT-4 (Salmonella), MOX (L. monocytogenes), or BEA (E. faecium) and enriched in 10 ml tryptic soy broth (TSB; Alpha Biosciences, Baltimore, MD, USA) at 35 °C for 24 h. Then, each individual bacterial suspension was centrifuged (5,000 × g) for 15 min (VWR Symphony 4417, VWR International, Radnor, PA). Each suspension was then centrifuged and washed three times in 0.1% BPW, followed by re-suspension in 10 ml of 0.1% BPW. The Salmonella and L. monocytogenes inoculum was made by combining the two strains of Salmonella with the two L. monocytogenes strains (Li et al., 2018). After creating the four-strain cocktail, the inoculum was diluted to ca. 6 log CFU/ml by adding the 40 ml cocktail into 3 L of 0.1% BPW solution for the dip inoculation process. The inoculum level of E. faecium was adjusted to 6.5 log CFU/ml by adding 40 ml of the washed culture into 3 L of 0.1% BPW.
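The dilution step implies a particular density for the washed cocktail; a quick back-calculation follows, with the stock density inferred rather than stated in the text.

```python
# Back-of-the-envelope check on the inoculum dilution described above:
# 40 ml of washed cocktail diluted into 3 L of 0.1% BPW to reach
# ~6 log CFU/ml. The implied stock density is inferred, not stated.
import math

target_log = 6.0                     # desired ~6 log CFU/ml
dilution = (3000 + 40) / 40          # 76-fold, i.e. ~1.88 log
stock_log = target_log + math.log10(dilution)
print(f"{dilution:.0f}-fold dilution -> stock at ~{stock_log:.1f} log CFU/ml")
# -> 76-fold dilution -> stock at ~7.9 log CFU/ml
```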
Inoculation of Fresh Produce Samples
Tomatoes and cucumbers were inoculated by placing the product in a metal bowl containing 3 L of Salmonella or Listeria inoculum with gentle stirring for 5 min, followed by placement in a biosafety cabinet for 15 min to allow for pathogen attachment. According to our preliminary studies, E. faecium was inoculated onto tomato and cucumber samples by pipetting 1 ml of inoculum onto the samples, fully covering the surface using food-grade plastic wrap for 30 s, and then air-drying in a biosafety cabinet for 15 min.
Triple-Wash Produce With Antimicrobials
Before the triple-washing process, the surface temperature of the inoculated produce (cucumbers and tomatoes) was measured using a scan thermometer (Exergen Corporation, Watertown, MA, USA); the surface temperatures of both products were 46.76 ± 0.6 °F (8.2 °C). Inoculated samples were left unwashed (control) or triple-washed in three metal containers with 3 L of solution each. Each treatment contained six samples randomly and evenly split into two groups; each sample consisted of either one tomato or one cucumber. Two triple-wash procedures were applied to the samples, WAW or WWA. Each step in the triple-wash procedure was completed by dipping the samples into the solutions with manual rotation for 10 s with agitation at ca. 200 rpm. Treatments tested included: (i) tap water only (pH = 6.9, 15.4 °C); (ii) SH [free available chlorine 100 ± 0.6 ppm, pH = 8.2 (SH) or pH = 6.8 (adjusted with 10% citric acid: acidified SH, ASH), 14.4 °C; Birko, Henderson, CO, USA]; (iii) a lactic/citric acid blend (LCA, Veggiexide®, 2.5%, pH = 5.1, 15.4 °C, Birko); and (iv) a H2O2-PAA mix (SD, 15.2 °C; Arbico Organics, Tucson, AZ, USA) at concentrations of 0.0064% (pH 6.25), 0.25% (pH 5.52), and 0.50% (pH 3.75). Free chlorine concentration was measured using the N,N-diethyl-1,4-phenylenediamine sulfate method with a chlorine photometer (CP-15, HF Scientific, Inc., Ft. Myers, FL). Temperatures of all wash solutions ranged from 57.9 to 59.7 °F (14.4–15.4 °C), which meets the U.S. FDA advisory recommendation that the wash water should be 10 °F warmer than the produce (46.8 °F) being washed (United States Food and Drug Administration, 2018). After the triple-wash procedures, samples were drained and dried on aluminum foil for 2 min.
Microbiological Analysis
Each unwashed and washed produce sample was placed into a sterile sample bag (Nasco, Fort Atkinson, WI, USA) and rinsed in 200 ml of BPW, followed by vigorous shaking for 60 s to detach bacteria from the surface. Sample rinse solutions were then 10- or 100-fold serially diluted in 0.1% BPW and spread-plated (0.1 ml onto one plate or 1.0 ml equally divided onto three plates) on XLT-4, MOX, and BEA agar to enumerate Salmonella, L. monocytogenes, and E. faecium cells, respectively. All agar plates were then incubated at 35 °C for 24 h (XLT-4) or 48 h (MOX and BEA), and colonies were counted manually after the incubation period. Three different dilutions were used for spread-plating of each sample, including a '0' dilution in which 1 ml of the 200 ml BPW diluent was split equally onto three agar plates (0.33 ml each); numbers of colonies from the three plates were combined following incubation. The detection limit was therefore 2.3 log CFU/fruit (= log10 200 CFU/fruit). Presumptive positive colonies of Salmonella and L. monocytogenes were confirmed using an Oxoid latex agglutination test kit (Oxoid Ltd, Basingstoke, Hampshire, UK).
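The enumeration arithmetic behind these counts and the 2.3 log detection limit can be made explicit; the sketch below is illustrative, with function names of our own choosing.

```python
# Sketch of the plate-count arithmetic described above: colonies are
# scaled by the plated fraction of the 200 ml rinse and the serial
# dilution. Names are illustrative, not from the study's worksheets.
import math

RINSE_ML = 200.0

def cfu_per_fruit(colonies, plated_ml, tenfold_dilutions=0):
    """CFU per fruit from a spread-plate count."""
    return colonies * (RINSE_ML / plated_ml) * 10 ** tenfold_dilutions

# Detection limit: a single colony when the full 1 ml of undiluted
# rinse is plated (split over three plates) -> 200 CFU/fruit.
limit = cfu_per_fruit(colonies=1, plated_ml=1.0)
print(f"detection limit = {limit:.0f} CFU/fruit "
      f"({math.log10(limit):.1f} log CFU/fruit)")   # 200 CFU -> 2.3 log

# Example: 85 colonies on a 0.1 ml plate of a 10-fold dilution.
print(f"{math.log10(cfu_per_fruit(85, 0.1, 1)):.2f} log CFU/fruit")
```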
Data Analysis
The triple-wash studies were performed in duplicate with four cucumbers and four tomatoes per treatment per replicate, for a total of eight samples per treatment for each bacterium. The experimental design was a randomized 2 × 6 factorial design with two triple-wash procedures (WAW or WWA) and six antimicrobial treatments. The survival and reduction of Salmonella, L. monocytogenes, and E. faecium were analyzed using the mixed model procedure of SAS (version 9.2, SAS Institute, Cary, NC), including the individual factors of triple-wash procedure and antimicrobial treatment and their interaction. The comparison of reductions between Salmonella and the surrogate E. faecium was also analyzed using the mixed model procedure. The reduction data were calculated as the reduction ratio log10(N0/N), where N0 is the average control plate count and N is the plate count of each individual antimicrobial-treated sample (Adler et al., 2016). Means were separated by Tukey's Honestly Significant Difference (HSD) test at an α = 0.05 significance level.
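The reduction ratio is simple enough to show directly; the sketch below uses invented counts purely to illustrate the log10(N0/N) calculation.

```python
# Minimal sketch of the reduction calculation described above:
# reduction = log10(N0 / N), where N0 is the mean count recovered
# from unwashed controls and N is the count from a treated sample.
# Numbers below are made up for illustration.
import math

control_counts = [2.1e6, 1.7e6, 2.6e6, 1.9e6]     # CFU/fruit, unwashed
n0 = sum(control_counts) / len(control_counts)     # mean control count

treated = 4.2e4                                    # CFU/fruit after wash
reduction = math.log10(n0 / treated)
print(f"reduction = {reduction:.2f} log CFU/fruit")
```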
Comparison of WAW and WWA Procedures
In general, the least squares mean (LsMean) values across the six tested antimicrobial treatments (the water-only wash was not included) indicated that triple-washing with WWA is more effective (P < 0.05) than WAW in reducing Salmonella (2.39 vs. 2.01 log CFU/cucumber) and E. faecium (2.16 vs. 1.60 log CFU/cucumber) on cucumbers, and in reducing Salmonella (2.82 vs. 1.34 log CFU/tomato), L. monocytogenes (2.35 vs. 1.26 log CFU/tomato), and E. faecium (2.81 vs. 2.10 log CFU/tomato) on tomatoes. Although there is a statistical difference between the WWA and WAW processes, the differences in log reductions ranged from 0.38 to 1.44 log CFU/fruit, which is still relatively small. Applying the WWA or WAW procedure to cucumbers showed no difference in reduction (1.39 vs. 1.35 log CFU/cucumber) for L. monocytogenes.
Efficacy of Triple-Wash With Antimicrobials Against L. monocytogenes
Survival and reductions of L. monocytogenes on cucumbers and tomatoes are shown in Tables 3 and 4, respectively. As shown in Table 3, triple-washing cucumbers in the six antimicrobials by the WAW process significantly (P < 0.05) reduced L. monocytogenes, with surviving populations ranging from 4.44 to 5.55 log CFU/cucumber compared to 6.31 log CFU/cucumber for the control (Table 3). For the WAW process, reductions with SH, ASH, LCA, SD-0.25%, and SD-0.50% were greater than the water control (0.59 log CFU/cucumber), except for SD-0.0064%, which showed a similar reduction (0.76 log CFU/cucumber). Among the tested antimicrobial treatments, no difference (P > 0.05) was found between the reductions (1.56–1.87 log CFU/cucumber) caused by SH, ASH, SD-0.25%, and SD-0.50%, which were greater (P < 0.05) than those of the LCA (1.23 log CFU/cucumber) and SD-0.0064% (0.76 log CFU/cucumber) treatments. Applying the WWA process increased (P < 0.05) reductions from 1.64 to 2.25, 1.87 to 2.41, and 0.76 to 1.30 log CFU/cucumber for SH-, ASH-, and SD-0.0064%-washed samples, respectively (Table 3). However, the WWA process did not increase reductions of the pathogen on cucumbers washed in LCA, SD-0.25%, and SD-0.50% as compared to the WAW process (Table 3). The efficacy of antimicrobials in inactivating L. monocytogenes on tomatoes has not been widely studied. As shown in Table 4, significantly lower survival (4.42–5.46 log CFU/tomato) was observed on tomatoes washed in the six antimicrobials using the WAW process compared with the untreated control (6.39 log CFU/tomato) and the water wash control (5.96 log CFU/tomato). SD-0.25% and SD-0.50% reduced the pathogen counts by 1.51–1.72 log CFU/tomato, which was slightly (P > 0.05) lower than ASH (1.97 log CFU/tomato) and greater (P < 0.05) than the reductions of SH (0.93 log CFU/tomato), LCA (1.18 log CFU/tomato), and SD-0.0064% (0.94 log CFU/tomato; Table 4). Compared to the WAW process, applying the WWA procedure increased (P < 0.05) reductions for all six tested antimicrobial treatments by 0.49–1.40 log CFU/tomato (Table 4).
DISCUSSION
Results from this study suggest that applying the WWA process during triple-wash is a better approach than WAW for reducing foodborne pathogens. This conclusion could possibly be explained by the fact that the residual sanitizers after the WWA process further inactivate pathogens on produce samples, since a neutralization step was absent from this study. The WWW control showed reductions of 0.5–1.2 log CFU/fruit across all tested pathogens on cucumbers and tomatoes in this study. These results are similar to those of Wang and Ryser (2014), who reported a 1.0 log CFU/g reduction of Salmonella from a plain water wash for 15 s in a pilot-scale processing line.

TABLE 8 | A comparison of the reduction of Salmonella and surrogate E. faecium on tomatoes (log CFU/tomato) by triple-wash procedure WAW or WWA in water, SH (100 ppm, pH 8.2), ASH (SH, 100 ppm, pH 6.8 adjusted by citric acid), LCA (Veggiexide®, 2.5%), and a PAA and hydrogen peroxide mix (SD, 0.0064, 0.25, and 0.50%).
As the physiochemical properties of cucumber and tomato surfaces are greatly different, it is plausible that the same sanitizer may not result in the same level of antimicrobial effect. Our results suggest that it is critical to consider the types of fresh produce when choosing a sanitizer to reduce foodborne pathogens, as the antimicrobial effect of one sanitizer may vary. Furthermore, the amount of organic load created by dust, soil, and debris from produce surfaces could impact the washing process.
Chlorine is a common sanitizer used in fresh produce processing due to its economic feasibility and its strong antimicrobial effect in preventing cross-contamination (Shen et al., 2013). When chlorinated water is used as a sanitizer for fresh produce, the maximum concentration regulated by the U.S. FDA is 200 ppm; however, even 100 ppm chlorinated water demonstrates strong antimicrobial activity. An earlier study reported that tomatoes dipped in 100 ppm chlorinated water for 30 s showed a significant reduction of Salmonella, and the level of reduction did not differ between 30 s, 1 min, and 2 min treatments (Wei et al., 1995). A recent study by Sreedharan et al. (2017) showed that 100 ppm of free chlorine reduced Salmonella by >4.5 log CFU/tomato in model flume water within 30 s. This suggests that a longer treatment time with chlorinated water is not necessary against Salmonella on fresh produce, which benefits actual produce processing plants. When tomatoes were immersed in 200 ppm chlorinated water, its antimicrobial activity against Salmonella was significantly higher than that of 1 or 2 ppm ozonated water under different levels of water turbidity (Chaidez et al., 2007). Currently, no small produce growers (we contacted eight local growers in WV) acidify the chlorine wash water before triple-washing their produce in WV (personal communication with Dr. Tom McConnell, Program Leader of the WV Small Farm Center). However, during the large-industry-scale produce washing process, adjusting the pH of chlorine solutions to 6.8 with citric acid is often conducted to ensure that the protonated form of hypochlorite predominates in the wash solution (Luo et al., 2012). Therefore, chlorine solutions with a near-neutral pH of 6.8 adjusted by citric acid were included as a test treatment in this study. The results of this study suggest that the antimicrobial efficacy of chlorinated water at a concentration of 100 ppm was significant against Salmonella, and the reductions were improved when the pH of SH was adjusted to 6.8 in most cases, except for E. faecium on tomatoes. A previous study by Wang and Ryser (2014) also reported that chlorine plus citric acid (pH 6.0) yielded a greater reduction (3.1 log CFU/g) of Salmonella on tomatoes than chlorine at alkaline pH (2.1 log CFU/g). This is because hypochlorous acid, the most effective antimicrobial component, predominates in chlorine water at near-neutral pH, whereas hypochlorite would be in the ionic form rather than the antimicrobial protonated state in an alkaline-pH solution (White, 2010). The antimicrobial effect of 100 ppm SH or ASH can also be maximized when the WWA triple-wash procedure is used on cucumbers and tomatoes, as the reduction of Salmonella on cucumbers and tomatoes was significantly higher with the WWA than the WAW process.
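The pH argument can be quantified with the Henderson-Hasselbalch relation; the sketch below assumes a pKa of about 7.5 for hypochlorous acid, a textbook value that is not given in this paper.

```python
# Quick Henderson-Hasselbalch check on the pH argument above: fraction
# of free chlorine present as antimicrobial HOCl at the two wash pHs.
# The pKa of ~7.5 for HOCl is a textbook value, not from this paper.
PKA_HOCL = 7.5

def hocl_fraction(ph, pka=PKA_HOCL):
    """Fraction of total free chlorine in the protonated (HOCl) form."""
    return 1.0 / (1.0 + 10 ** (ph - pka))

for ph in (6.8, 8.2):  # ASH vs. unadjusted SH in this study
    print(f"pH {ph}: ~{hocl_fraction(ph):.0%} HOCl")
# pH 6.8: ~83% HOCl   pH 8.2: ~17% HOCl
```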
The antimicrobial activity of chlorinated water is not limited to Salmonella. L. monocytogenes, another common foodborne pathogen, was also sensitive to chlorinated water treatment on the surface of fresh produce. On cucumbers, 1 or 2 min of washing with 200 ppm chlorinated water demonstrated the same level of Salmonella reduction (Yuk et al., 2006). Spraying 200 ppm chlorinated water on tomatoes significantly reduced L. monocytogenes (Beuchat et al., 1998; Bari et al., 2003). Surprisingly, although L. monocytogenes outbreaks associated with cucumbers have been reported (Meldrum et al., 2009; Ponniah et al., 2012; Hossein et al., 2013), data on the antimicrobial effect of chlorine water against L. monocytogenes are relatively lacking in the current literature. Considering the accessibility of chlorinated water, chlorinated water without and with pH neutralization was included in this study to better contribute to the current database of antimicrobial effects against L. monocytogenes on cucumbers (Table 3). Our results suggest that 100 ppm chlorinated water (SH) is an effective antimicrobial against L. monocytogenes on cucumbers, and that neutralizing the pH to 6.8 with citric acid and using the WWA procedure maximize the antimicrobial effect of chlorinated water, as significantly higher reductions were observed. On tomatoes, although 100 ppm SH is still an effective sanitizer (Table 4), the antimicrobial efficacy of SH against L. monocytogenes was significantly increased when the pH was adjusted to 6.8 (ASH), which was similar to SD-0.25% and SD-0.50% and greater than SD-0.0064% and LCA.
Recently, there is growing interest among produce processors in applying antimicrobial chemicals other than chlorine during produce washing, as chlorine water easily reacts with water constituents and generates chlorine byproducts after repeated replenishing with new chlorine solutions (López-Gálvez et al., 2012; Shen et al., 2016). Local small produce growers in WV are also losing interest in chlorine use due to the increased marketability of natural and organic fresh produce (personal communication with Dr. Tom McConnell, Program Leader of the WV Small Farm Center). LCA, a buffered mixture of lactic and citric acid, was introduced by the food chemical industry about a decade ago and was reported to be effective in reducing Salmonella on poultry carcasses while avoiding the discoloration of chicken meat caused by lactic acid solutions (Laury et al., 2009). The only study of LCA on produce demonstrated that spraying 2.5% LCA onto jalapeño peppers through a commercial cabinet reduced the natural flora, Salmonella, and the surrogate generic Escherichia coli by 1.3, 1.1, and 0.8 log CFU/g, respectively (Adler et al., 2016). The mechanism by which LCA inhibits bacterial survival is the combined effect of lactic and citric acids: lactic acid decreases the ionic concentration within the bacterial cell membrane of the exterior cell wall, and citric acid, being a weak non-dissociated acid, can diffuse through the cell membrane. The combination of both acids leads to an accumulation of acid within the cell cytoplasm, acidification of the cytoplasm, disruption of the proton motive force, and inhibition of substrate transport (Vasseur et al., 1999). In the current study, reductions achieved by LCA were similar (within 0.5 log CFU/g) to those of SH against Salmonella, L. monocytogenes, and the surrogate E. faecium in most tests. However, LCA was less effective than ASH at inactivating Salmonella on tomatoes and E. faecium on cucumbers.
SD is a mixed antimicrobial solution composed of 23% H2O2, 5.3% PAA, and 70% other (undisclosed) ingredients, which has been recommended by the WV Small Farm Center for washing fresh produce processed on local small farms, since the major wholesale buyer in WV, Appalachian Harvest, requires the use of SD as part of the post-harvest protocol for growers selling to their business, especially for organic farming operations (personal communication with Dr. Tom McConnell, Program Leader of the WV Small Farm Center). Like other oxidizing chemicals, SD oxidizes bacterial cells, denatures proteins, and disrupts the cell wall structure, thereby killing or inhibiting bacteria (Block, 2011). A previous study by Briñez et al. (2006) reported that a mixture of H2O2 and PAA reduced nonpathogenic strains of Staphylococcus, Listeria spp., and E. coli by more than 5 log CFU/ml after 10 min of contact, even with organic matter present in the solutions. The results of the present study showed similar (P > 0.05) reductions of Salmonella, L. monocytogenes, and E. faecium on cucumbers and tomatoes for SD-0.25% and SD-0.50% compared to ASH, and these were greater (P < 0.05) than the reductions achieved with SH and LCA solutions. The market price of a 5-gallon pallet of SD is $330, compared to $12 for SH and $108.50 for LCA. Therefore, an agricultural cost-effectiveness analysis is needed to verify whether SD is economically feasible for local small produce growers as an alternative antimicrobial to chlorinated water.
Enterococcus faecium has previously been studied and validated as a potential Salmonella surrogate in almonds (Jeong et al., 2011), a balanced carbohydrate-protein meal (Bianchini et al., 2014), and pet foods (Ceylan and Bautista, 2015) during thermal processing. Our recent study also confirmed that E. faecium can serve as a non-pathogenic surrogate of Salmonella for in-plant antimicrobial validation studies on broiler carcasses. To be a suitable surrogate for a foodborne pathogen exposed to antimicrobials, the surrogate should be equally or more resistant to the interventions than the target pathogen in challenge studies (Adler et al., 2016). Therefore, side-by-side comparisons of the reduction levels of Salmonella and E. faecium after triple-washing through WAW or WWA with antimicrobials on cucumbers and tomatoes are presented in Tables 7, 8. The results indicated that the E. faecium strain used in this study could potentially serve as a surrogate of Salmonella for validating triple-wash with commercial antimicrobials on cucumbers in local small produce processing settings; however, more studies are needed to confirm its use as a Salmonella surrogate on tomatoes, since opposite results were obtained for the WAW compared with the WWA process. Other non-pathogenic bacteria, such as generic E. coli (ATCC BAA-1427, ATCC BAA-1428, ATCC BAA-1429, ATCC BAA-1430, and ATCC BAA-1431), could serve as surrogates for Salmonella on different produce commodities, including tomatoes (Adler et al., 2016). This is supported by our previous pilot plant trial, which showed that spraying 50 ppm SH or 1.0% LCA reduced generic E. coli on jalapeno peppers by 0.8-1.0 log CFU/g, which did not differ from the reductions of Salmonella (0.5-1.1 log CFU/g) (Adler et al., 2016).
CONCLUSIONS
Under the conditions of this study, the triple-wash WWA procedure was better than WAW at inactivating Salmonella, L. monocytogenes, and E. faecium on cucumbers and tomatoes. SD at concentrations of 0.25 and 0.50% showed similar or better antimicrobial efficacy compared to chlorinated water with or without pH adjustment. Enterococcus faecium could be a potential Salmonella surrogate for validation studies of antimicrobial treatments during post-harvest produce washing. The results of this study provide important information for local small produce growers who are interested in adopting the triple-wash procedure during post-harvest processing. Future studies are needed to validate the same procedure in commercial pilot plant settings, and cost-effectiveness analyses are necessary to evaluate whether SD is economically feasible for local small produce processors.
LIMITATIONS OF THIS STUDY
The authors recognize the following limitations of this study. First, this extension validation study is valuable for local, very small produce growers in WV and does not represent large commercial-scale industry produce processing. Second, the extent of cross-contamination in the different wash regimes of the triple-wash process was not reported in this study. It is well-established that preventing cross-contamination is more critical than reducing pathogens during produce washing (Gombas et al., 2017); a cross-contamination study of triple-wash in three washing tanks with or without antimicrobials should be included in future studies. Third, the cucumbers tested for E. faecium were pre-treated to remove background microbiota, which may not well represent the cucumbers' natural surface characteristics; an antibiotic marker should be introduced into E. faecium to address this issue in future related studies. Fourth, the absence of a neutralization step in the WWA process means that residual sanitizer on produce samples may have promoted further pathogen reduction, potentially overestimating treatment efficacy.
DATA AVAILABILITY STATEMENT
The raw data of this study were recorded by hand in lab notebooks and stored electronically on multiple devices; they are available from the corresponding author to any researchers interested in our results.
AUTHOR CONTRIBUTIONS
LJ conceived the study. KL, Y-CC, and WJ designed and conducted the experiments. KL and CS performed the statistical analysis. KL, Y-CC, and CS drafted the manuscript. XE revised the manuscript.
Implementation of Mobile Psychological Testing on Smart Devices: Evaluation of a ResearchKit-Based Design Approach for the Implicit Association Test
Objective: To determine whether a framework-based approach for mobile apps is appropriate for the implementation of psychological testing, and equivalent to established methods.
Methods: Apple's ResearchKit was used to implement native implicit association test (IAT) methods, and an exemplary app was developed to examine users' implicit attitudes toward overweight or thin individuals. For comparison, a web-based IAT app, based on code provided by Project Implicit, was used. Adult volunteers were asked to test both versions on an iPad with touch as well as keyboard input (altogether four tests per participant, in random order). Latency values were recorded and used to calculate parameters relevant to the implicit setting. Measurements were analyzed with respect to app type and input method, as well as test order (ANOVA and χ² tests).
Results: Fifty-one datasets were acquired (female, n = 21; male, n = 30; average age 35 ± 4.66 years). Test order and the combination of app type and input method influenced the latency values significantly (both P < 0.001). This was not mirrored for the D scores or the average number of errors vs. app type combined with input method (D scores: P = 0.66; number of errors: P = 0.733) or test order (D scores: P = 0.096; number of errors: P = 0.85). Post-hoc power analysis of the linear ANOVA showed a power of 0.8 at f² = 0.25, with α = 0.05 and four predictors.
Conclusions: The results suggest that a native mobile implementation of the IAT may be comparable to established implementations. The validity of the acquired measurements seems to depend on the properties of the chosen test rather than the specifics of the chosen platform or input method.
Background
Mobile apps running on smartphones, tablet PCs, and other mobile smart devices are not only widely used for social networking and news, entertainment and gaming, travel, shopping, education, or finance, but also for health, fitness, and medical purposes (1).
Researchers [e.g., (10)] nevertheless emphasize their potential for the field of psychology, be it for psychologists, patients, or the general public. Their utility for research seems apparent considering that, unlike often more complex desktop applications running on stationary devices, mobile apps commonly have a narrower focus. This may facilitate their efficiency and reduce development costs. Independent of these factors, programmers still need to be aware of the intricacies of the underlying platform, e.g., related to specific design paradigms.
To help reduce the development overhead for standardizable processes and tasks, reusable programming frameworks have become the tool of choice independent of the field of application (11,12). For mobile apps, they commonly provide programmers with convenient, standardized components for the user interface (e.g., survey templates, buttons), methods for accessing a device's sensors, or data management. For research apps, libraries such as ResearchKit (13,14) for Apple's iOS-based devices or the ResearchStack library (15) for Android-based devices follow this paradigm. Using these or other frameworks and solutions available for creating research apps (9) may not only facilitate development but may also have scientific benefits, for example by making app-based research more easily reproducible, allowing researchers to build upon the work of their peers or, if necessary, to adapt the provided methods to their specific research questions.
Despite the aforementioned advantages, the ResearchKit documentation currently lists only six pre-built "Active Tasks" (i.e., building blocks for ResearchKit-based apps) that can be used in the field of cognition and thus, in the broadest sense, for psychological research (16). Within this group, for example, the mPower study (17) evaluates one of the five initially released ResearchKit apps, which applies spatial memory testing in the context of researching Parkinson's disease. Golden et al. (18) use stroop and trail making tests to measure the cognitive effects of caffeine and L-theanine. Finally, Munro (19) analyzes improvements in the problem-solving skills of people living with cardiovascular disease during fasting phases, using the Tower of Hanoi puzzle.
Objective
The objective of the work presented here is to determine whether and how a framework-based approach is appropriate for the implementation of psychological testing, and equivalent to established methods such as web-based approaches. For this purpose, an exemplary, well-established test method, the implicit association test (IAT), was chosen for native implementation on a single mobile platform (namely iOS).
This article describes the underlying methods used for (1) building the native app, specifically its technical aspects and implementation steps, as well as (2) a preliminary cross-validation against a web-based installation of the original IAT provided by Project Implicit (20).
A real-world evaluation of the mobile IAT version, using a categorization task similar to the one described here, is however not part of the objective of the presented work and will be described in another publication.
Organization
Since there are several building blocks that form the basis for the study (from data collection to evaluation), the presentation will follow a three-tiered approach.
In the methods part, firstly, the basics of the implicit association test (IAT) will be introduced, along with its structure, setup and the evaluation of the recorded data. This description will also cover essential aspects to consider regarding its implementation on a mobile platform.
Afterwards, the tools and methods used during the implementation phase will be introduced. This part includes a short overview of relevant programming concepts to be used in the native app, specifically regarding data structures and methods provided by Apple's ResearchKit, with an emphasis on those necessary for the actual implementation of the IAT on the chosen mobile platform, i.e., iOS.
The third block will focus on the initial evaluation of the app-based test vs. a web-based test implementation, and will therefore cover aspects related to this evaluation in further detail, using the example of an implementation for evaluating weight-based stigmatization.
Where appropriate, the results section will mirror the breakdown described here by firstly presenting the app based on the described programming methods, and secondly the comparative, comprehensive evaluation of the native and web-based test implementations.
On a side note, while the tests as shown in this paper use the English language terminology employed by Project Implicit in their weight stigma related test implementation, namely "fat" vs. "thin," throughout the text and figures, where applicable, we have tried to use less stigmatizing terms (21) for describing the different weight strata, i.e., "overweight" and "obese" vs. "normal weight" or "lean."

An exemplary app examining users' implicit attitudes toward overweight or thin individuals was then developed for evaluation. For comparison, a web-based implementation of the IAT, using a combination of materials and code provided by Project Implicit (22,23), was deployed on a Linux-based web server.
Participants that were recruited for the evaluation were asked to work through both versions of the test, once each using the iPad's built-in touchscreen, and another time using a keyboard connected to the device. Thus, each participant had to undergo a total of four tests (in random order). For calculating the scores related to the participants' implicit attitudes, latencies recorded for a user's reaction to specific (combinations of) stimuli were used.
For the actual evaluation, complete datasets for 51 participants could be acquired using the native and web app based versions. There were data for 21 female and 30 male participants. On average, participants were 35 ± 4.66 years old.
Both test order and the combination of app type and input method exerted a significant influence on the recorded latencies (P < 0.001 in both cases). This was, however, not mirrored for the actual D scores representing the implicit attitude (bias) or the average number of errors vs. the combination of app type and input method (D scores: P = 0.66; number of errors: P = 0.733) or test order (D scores: P = 0.096; number of errors: P = 0.85). Demographic aspects such as age or gender did not influence the calculated D scores significantly.
Post-hoc power analysis of the linear ANOVA showed a power of 0.8 at f² = 0.25, with α = 0.05 and four predictors.
The Implicit Association Test
In psychology, interest in assessing people's attitudes, behavior patterns, opinions, and other constructs in a standardized manner has grown significantly over the past few decades. However, directly questioning subjects on sensitive topics may yield responses that are more in line with societal expectations than with a person's actual opinions and attitudes. One way to work around these problems is to employ so-called implicit measures. Implicit measures are based on an individual's reactions while performing a series of categorization tasks under specific, contrasting conditions. It is assumed that such tasks will be performed more accurately and in less time if the presented stimuli representing the conditions and categories are in line with the person's attitudes toward the topic being evaluated. Roughly speaking, the actual measurement of individual bias, in the form of a differential score, is then calculated from the difference in the response times (latencies) for the contrasting conditions and stimuli that the test subject is asked to categorize (24). A popular test method in this context is the implicit association test (IAT), first introduced by Greenwald et al. in the late 1990s (25,26).
The main reasons for selecting this specific test for our project were that • it is a simple psychometric method for implicit social cognition, and also allows determining how strongly two complementary concepts (e.g., shown as textual or pictorial representation, such as silhouettes of overweight vs. normal weight people, hereafter referred to as concept 1 and 2) are associated with either of two contrasting attributes (e.g., a set of positively vs. negatively connoted words), • various (multilingual) sample implementations are available, mainly in digital (web-based) form (20), with sample data sets (and the source code) often being provided (27), which made a comparison of our work to existing implementations feasible, and • that the IAT is established in the field for gaining insights into the (implicit) attitudes of test subjects related to varying topics [see, for example (28) for a review investigating applications of the IAT toward individuals with various disabilities or (29) for its use in the context of moral concepts].
To the best of our knowledge, there are currently only a few native implementations of this specific test on any mobile platform, such as the "Implicit Association Test" app provided for both the Android (30) (last updated in 2014) and the iOS platform (31) (last updated in 2013). However, the source code for these is unavailable, and the subject areas are not configurable; in the case of the aforementioned apps, only gender bias is tested. The iOS platform was selected for the exemplary implementation of the broader approach described here because Apple, who, as the manufacturer, is intimately aware of the platform's specifics, provides ResearchKit, an open-source framework specially adapted to this platform (14). Soon after its initial release, ResearchKit proved its value in the research projects of various working groups (32). Frameworks available for other (mobile) platforms commonly do not benefit from a similar degree of integration with the respective platforms.
Basic Structure of the IAT
As defined by (33), there are seven blocks an individual has to work through when an IAT is administered. There are (shorter) practice blocks in which a test subject may practice the classification tasks (B1 to B3 as well as B6, with 20 trials each, and B5, which may vary between 28 trials in the US IAT (34) and 40 trials in the German IAT (35)), as well as (longer) test blocks of 40 trials each. While B1, B2, and B5 are sorting blocks presenting terms of only one category (concept or attribute stimuli), blocks B3, B4, B6, and B7 present paired terms of both categories. In cases such as our study, where the IAT is administered to multiple individuals, and possibly more than once, it makes sense to randomly assign the order of blocks. This specifically relates to which of the contrasting attribute types or concepts is assigned first to the left side, with the order of presentation switched between blocks B1, B3, and B4 vs. B5-B7 for concepts [see Table 1, adapted from (33), for a basic description of the block order for a single participant]. For each individual test run, the side is initially (randomly) assigned for a particular attribute and concept type. For the attribute stimuli, the side is maintained for the duration of the test (e.g., with positive attributes either on the left or right side of the screen). For a larger number of participants, this should prevent a bias caused by the presentation of certain classes of stimuli on only one side.
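To make this structure concrete, the following sketch encodes the default block layout described above and in Table 1 as a simple Swift data structure. The 28-trial value for B5 corresponds to the US web IAT; the German variant uses 40 trials.

```swift
// Default block layout of the IAT as described in the text and Table 1.
enum BlockKind { case sorting, pairing }

let defaultBlocks: [(name: String, kind: BlockKind, trials: Int)] = [
    ("B1", .sorting, 20),  // practice: concepts only
    ("B2", .sorting, 20),  // practice: attributes only
    ("B3", .pairing, 20),  // practice: first pairing
    ("B4", .pairing, 40),  // test: first pairing
    ("B5", .sorting, 28),  // practice: concepts with sides swapped (40 in the German IAT)
    ("B6", .pairing, 20),  // practice: reversed pairing
    ("B7", .pairing, 40),  // test: reversed pairing
]
```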
Evaluation of the IAT Data
Based on the response times (latencies) recorded in blocks B3, B4, B6, and B7, a differential score (short: D score) is calculated (33) that represents a user's implicit bias toward either of the two concepts. There are basically six different algorithms, D1-D6, that can be applied for obtaining this D score. Their choice depends on whether users are provided with feedback (e.g., a red ×) in case of erroneous answers and on how answers that were given too fast to be plausible are handled. Detailed information about this can be found in the literature [e.g., (24,36)]. For the actual score calculation, there are several external packages and libraries that can be applied to the acquired raw data [e.g., as described in (37)(38)(39)(40)].
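As a minimal sketch of the general idea, one common scoring variant divides the difference of the mean latencies of the contrasted pairing blocks by their pooled standard deviation. Which of the D1-D6 variants applies, and how errors are penalized, depends on the study design (24,36); the 400/10,000 ms filter below mirrors the thresholds used in the evaluation section of this article.

```swift
import Foundation

// Sketch of a basic D score for one pair of contrasted blocks.
func dScore(compatible: [Double], incompatible: [Double]) -> Double? {
    // Drop implausibly fast or slow responses (in milliseconds).
    func plausible(_ xs: [Double]) -> [Double] { xs.filter { (400...10_000).contains($0) } }
    let a = plausible(compatible)
    let b = plausible(incompatible)
    guard a.count > 1, b.count > 1 else { return nil }

    func mean(_ xs: [Double]) -> Double { xs.reduce(0, +) / Double(xs.count) }
    let pooled = a + b
    let m = mean(pooled)
    let variance = pooled.map { ($0 - m) * ($0 - m) }.reduce(0, +) / Double(pooled.count - 1)
    return (mean(b) - mean(a)) / variance.squareRoot()
}
```

In the conventional algorithm, this quantity is computed once for the practice pair (B3 vs. B6) and once for the test pair (B4 vs. B7), and the two values are averaged.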
App-Based Implementation of the IAT: Programming Environment and Employed Concepts
To extend ResearchKit to include an IAT and to build the iOS-based IAT app, the latest version of Xcode was used on an Apple Mac (at the time of the implementation, Xcode 11.6 on macOS Catalina 10.15), along with the ResearchKit framework (version 2.0). While ResearchKit supports development in both Objective-C and Swift, it was originally developed in Objective-C, which was therefore also chosen for developing the ResearchKit-based IAT test classes. The pilot app employing these classes for initial testing was, however, implemented in Swift, which also allows access to Objective-C-based source code.
Relevant ResearchKit Elements and Paradigms
ResearchKit specifically supports the development of health-related research apps for iOS (Apple iPhone and iPod touch), iPadOS (Apple iPad), and watchOS (Apple Watch) devices. It was announced and open-sourced by Apple in March 2015 (13). Its source code is available on GitHub under a BSD-style license (14), and it provides the basic structural and methodical framework for apps used in (medical) research.
Tasks are the basic element study participants are confronted with when using a ResearchKit-based study app. They lay the foundation for common activities such as obtaining and handling consent, as well as executing surveys (questionnaires) and active tasks for (touch input or sensor-based) data collection. As this paper describes the functional and technical implementation of the IAT in ResearchKit, it will focus on active tasks employed in this context, while also briefly touching on the basics of consent acquisition and performing surveys.
A basic (ordered) task object (e.g., ORKOrderedTask or ORKNavigableOrderedTask, implementing ResearchKit's ORKTask protocol) defines the process of a specific task at hand. The task object determines the order in which the individual steps are performed, either in a fixed or adaptive flow (depending on previous results), and provides methods for indicating progress.

Tasks are divided into steps (subclassed from ORKStep), each roughly corresponding to a single screen, that take care of presenting information to the user as well as data acquisition for the respective step. While many of the available steps either present data or ask users to (manually) enter data in answer to one or more questions (e.g., ORKQuestionStep for a single question and answer pair, or ORKFormStep for forms with multiple elements, such as asking participants about their name, date of birth, or other information on a single screen), there are also so-called active steps (subclassed from ORKActiveStep) that enable (automatic) data collection.

There are basically three modules for such tasks that can be adapted to the specific research question:
• The "consent" module is meant to be used for obtaining informed consent when an app is initially started. This includes methods for providing general information about the study (e.g., purpose, type and amount of data gathered, rationale), determining individual eligibility, etc. The provided consent templates have to be set up by the developer depending on the specifics of the respective study.
• The "survey" module provides templates for confronting users with a sequence of questions; there can be either a single question or multiple questions per screen, with numerous answer types being allowed (e.g., multiple choice, text, or number input). It is possible to make the sequence of questions adaptive by branching into more detailed sub-questionnaires or skipping certain questions depending on answers given in previous steps.
• Active tasks (16) enable researchers to gather data that differs from what can be acquired in surveys, and these are the main foundation for the ResearchKit-based IAT. They can collect data from multiple sources such as different device sensors, audio input, or even the heart rate sensor. A number of tasks have already been developed for use in research, e.g., related to gait and balance (using motion data), as well as some psychological tests such as spatial memory (17), stroop, or trail making tests (18).
How tasks are managed in the user interface (and how their results are handled) is defined by task view controllers. For each step, there are special view controllers for handling the workflow. Overall, the task view controllers take care of handling the results obtained in the steps, and these are not only accessible once a task has completed but, if necessary, also while it is still in progress. Specifics of the mobile, app-based implementation of the IAT will be described in the relevant part of Section 3.
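The following is a minimal sketch of the task/step/view-controller pattern just described. The single instruction step is only a placeholder, and the signatures reflect ResearchKit 2.x; details may differ in other versions.

```swift
import UIKit
import ResearchKit

final class StudyViewController: UIViewController, ORKTaskViewControllerDelegate {

    func startExampleTask() {
        // A single-screen placeholder step; a real study would add survey or active steps.
        let intro = ORKInstructionStep(identifier: "intro")
        intro.title = "Welcome"
        intro.text = "This task consists of a single instruction screen."

        let task = ORKOrderedTask(identifier: "exampleTask", steps: [intro])
        let taskViewController = ORKTaskViewController(task: task, taskRun: nil)
        taskViewController.delegate = self
        present(taskViewController, animated: true)
    }

    // Called when the task finishes (completed, discarded, failed, or saved).
    func taskViewController(_ taskViewController: ORKTaskViewController,
                            didFinishWith reason: ORKTaskViewControllerFinishReason,
                            error: Error?) {
        let taskResult = taskViewController.result // ORKTaskResult, also readable mid-task
        _ = taskResult
        taskViewController.dismiss(animated: true)
    }
}
```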
Web-Based Implementation
For the comparison between the mobile IAT and the original, web-based version as used by Project Implicit (20,27), a local installation of the web IAT was prepared, based on a combination of the experiment materials provided for the Weight IAT (23) and one of the examples given for the minno.js-based minimal server available at (22). The adapted version corresponded to the Weight IAT instance provided for users in the United States, based on silhouettes of overweight and normal weight individuals, along with positive or negative terms. The web-based app was deployed on a Linux server (at the time of the evaluation, Ubuntu Server 16.04 LTS, using the Apache and PHP packages supplied with this release). As all potential participants were native German speakers, in contrast to the examples shown in Figure 1, the web IAT employed in our study used the descriptions and terms of the German IAT, and wordings between the app- and web-based versions were aligned in order to prevent potential bias in this regard.
Comparative Evaluation of Both Approaches
Potential participants for the comparative evaluation were recruited from a professional and private circle and were asked for their informed consent. Participants were given the opportunity to withdraw their participation at any time.
The evaluation setup itself consisted of four IAT tests to be performed on the provided iPads: • The native IAT app, based on the aforementioned ResearchKit classes, with one test administered using an external keyboard and one using the device's touch screen. • A web-based IAT implementation using the JavaScript and PHP constructs provided by Project Implicit (34), again with one test each using the keyboard and touch screen input methods.
In both the native as well as the web-based version, for the tests relying on the touch interface, there were two buttons, one on the left and one on the right-hand side of the display, with the stimuli (terms as well images) appearing centered between the two buttons. The respective category assigned to each button was shown in close proximity, above the button itself.
To stay consistent with the Web versions provided by Project Implicit, for those test runs relying on keyboard input, the "E" key was used in lieu of the left button, and "I" had to be pressed for items assigned to the right-hand category.
All four tests were performed on Apple iPads (8th generation, 10.2-inch display) running the latest operating system version (at the time of the study, iPadOS 14.0.1). The keyboard model used, both for the native app and the web version, was an Apple Smart Keyboard connected via Apple's Smart Connector. For the native app, results were initially only stored locally on the respective iPads, while for the web-based version, results were kept in a protected directory of the web server.
Study Procedure
The participants were assigned a random identifier to enable intraindividual comparison of the four test variants. This identifier was entered manually per test and participant. The order of tests (native or web-based app with either touch screen or keyboard input) was randomized for each participant in order to minimize bias, e.g., due to higher latencies, decreasing concentration, and thus a possibly increasing number of erroneous categorizations caused by fatigue after repeated execution of the tests.
For each participant, all four tests were performed on a single day, with a short break (usually around 1 min) between the tests, and altogether, the test sessions did not require more than 30 min per participant.
The participants were also asked to fill out an additional online survey (using a SoSciSurvey installation at the authors' university) using their individual, randomly assigned identifier. The questionnaire presented in this survey comprised demographic questions (sex, education, and age) as well as questions related to weight (i.e., explicit preference between overweight and normal weight persons). Participants were also asked about their height, weight, and personal interest in the topics of obesity and diabetes.
Answers to all of the questions were optional (in case that any of the participants felt uncomfortable providing any of the answers), and filling out this survey took <10 min per individual.
Evaluation of the Study Data
The datasets acquired using the web and native app based implementations of the IAT were evaluated using R (version 4.1.2) for both descriptive as well as statistical analyses.
For the description of the study population, it was initially decided to stratify by gender, which seemed the most promising due to the study population's relative homogeneity regarding other demographics. In the literature, various sociodemographic factors are often associated with influencing an individual's body weight perception or predisposition to stigmatizing individuals based on their weight (41,42) [although other authors refute this claim, at least for some factors (43)]. As the recruited participants hailed from similar backgrounds and were largely of similar age, gender was the most obvious demographic factor we deemed to potentially have an effect on (explicit or implicit) attitudes regarding body image and weight.
To determine whether there were any significant differences in means at different points in time or between the different app and input types, aside from descriptive analysis, ANOVA testing was applied where appropriate. A post-hoc power analysis of the linear ANOVA was conducted using G*Power [version 3.1.9.6, (44)].
Altogether, the statistical analysis aimed at comparing the native, ResearchKit-based version of the app with its web-based implementation using both touch screen and keyboard-based user interactions. To determine whether the app type and input method or even the order in which the four combinations had been applied influenced the results, this part of the analysis was applied to both methods of stratification. More specifically, the evaluation focused on the influences of app type and input method as well as test order on either the D scores that were obtained as well as the latencies that were recorded for each trial.
Mobile Implementation of the IAT
As, to the best of our knowledge, there were no implementations of the implicit association test (IAT) for ResearchKit-based iOS apps when the project was initially planned, it was decided to fork ResearchKit's repository on GitHub (14) and to add the IAT-related functionality to this fork. The resulting code, after integration of the IAT, is available on GitHub (45).
Forking the ResearchKit Framework
The presented IAT implementation follows the currently provided United States (English) version of the Project Implicit web IAT (20) as closely as possible. It has, however, been adapted from the keyboard input used in the web-based version to employ the touch interface available on mobile devices. Ideas on how to better adapt the implementation to the mobile platform will be explained as part of the development-related considerations presented in the discussion.
The mobile IAT implementation is comprised of seven components. Four of these, implementing the step, content view, view controller, and result objects with specific adaptations to the IAT test's requirements (ORKImplicitAssociationStep, ORKImplicitAssociationContentView, ORKImplicitAssociationStepViewController, and ORKImplicitAssociationResult), follow the common class structure established for active steps in ResearchKit. The additional three components (ORKImplicitAssociationCategoriesInstructionStep, ORKImplicitAssociationTrial, and ORKImplicitAssociationHelper) were designed for providing instructions on how to perform the test, or for specific aspects of the presentation of the IAT. There is also a predefined active task for the IAT steps, extended from the ORKOrderedTask object, which is used to initiate the test process.

Since ResearchKit apps commonly only take care of data acquisition and do not include any algorithms for evaluating the acquired data, it was decided to include only a basic implementation of the D score calculation in our study app. This functionality is not part of the ResearchKit classes upon which the study app is based; for the app, the calculation was included to be able to provide participants with feedback about their score if so desired. Similar to our study app, developers making use of the provided IAT classes will also need to implement this functionality in separate parts of their app should they decide to provide score-related feedback instead of solely evaluating the data at a later stage.
For illustration purposes, the screenshots shown in the following paragraphs are in English language and use silhouettes of overweight people for the first and individuals of slim to normal stature for the second concept stimulus, but these settings can of course be adapted, e.g., to support random assignment of the chosen stimuli to either side in the actual study app used in the evaluation.
Look and Feel of the IAT
An ORKImplicitAssociationContentView, subclassed from ResearchKit's ORKActiveStepCustomView class (which serves as the basis for custom views in active steps), provides the visual interface for an IAT trial (Figure 2). It defines two containers (based on UIView) for items, one in the upper left and one in the upper right corner, each containing one label for the first or only item and, in cases where concepts and attributes are paired, additional labels for a divider as well as the second item. The first label displays either the identifier for the respective attribute or concept (i.e., "positive" or "negative" attribute, or "concept 1" or "concept 2") in the sorting phase, or the category name for attributes ("positive" or "negative") in the pairing phase.

In the latter case, the second label shows the category name for concept stimuli ("concept 1" or "concept 2"), while the dividing label displays the term "or" to instruct the user to touch the button on the appropriate side when either a corresponding attribute or a concept stimulus is displayed in the corresponding part of the screen (Figure 2). Both the second label and the divider are hidden in sorting phases. As implemented in the study, attribute category names are always colored blue, while those for concepts use green. Dividers ("or") are always shown in black.
In the view's center, there is a term container (again based on UIView) containing a label and an image, showing either (exclusively) the current attribute or concept stimulus of the trial. Another container hosts round tapping buttons (ORKRoundTappingButton) on either side of the screen. As an alternative, when using a keyboard, the keys "E" and "I" can be used in place of the left and right tapping buttons, respectively. Initially, a label indicating that one of the buttons must be touched to start the test is shown at the screen position where, later on, the term label or image will be shown. In the view's lower part, users are informed that an error indicator in the form of a red × will be displayed in case of any misclassification, indicating the need for reclassification of the current stimulus. In any such case, this error indicator is displayed directly above this hint.

All label elements used in this view are derived from the UILabel class.
To allow the ORKImplicitAssociationStepViewController to exert control over the user interface of the IAT, ORKImplicitAssociationContentView provides six methods for external access:
• Firstly, a mode (of type ORKImplicitAssociationMode), either "instruction" or "trial", has to be set. In the first case, the term label and image are hidden while the introductory label is displayed; in test mode, this is reversed.
• Secondly, the term (NSObject) and its category (attribute or concept, ORKImplicitAssociationCategory) have to be specified. The term can either be a string (for an attribute stimulus) or the image of a concept stimulus to be displayed in the trial. The color in which the term is shown depends on the category: terms representing attribute stimuli are always colored blue and those for concept stimuli green. This corresponds to the coloring of the category names in the view's upper left and right corners.
• To specify the names for attribute and concept categories, two methods are provided. The first defines the names of the categories on both sides (NSString) as well as the category type (ORKImplicitAssociationCategory, either attribute or concept); the type is used to choose the appropriate color (blue or green). The second determines which labels are shown first and second (separately, for both sides of the view). There is no need to specify the corresponding categories to define colors, as the initial labels always show the attribute category names (blue) while the second labels show the concept category names (green).
• The fifth method can show a red × in the event of a misclassification.
• Finally, it is possible to disable the presented classification buttons once the user has correctly classified the current stimulus. This is done to prevent reactions to inadvertent additional taps on the buttons.

There are also two tapping button objects that are programmatically accessible from outside the ORKImplicitAssociationContentView in order to be able to react to taps in the ORKImplicitAssociationStepViewController.
Control of the IAT's Blocks
An ORKImplicitAssociationStepViewController (derived from ORKActiveStepViewController) controls the logic for an IAT block and structures the lifecycle of the active steps employed to represent the blocks of the IAT. On startup, it sets the mode of the view (ORKImplicitAssociationContentView) to "instruction" and, for sorting or pairing blocks, passes the category names (ORKImplicitAssociationCategory) to be displayed in the top left and right corners accordingly. For any button taps executed by a user, the procedure remains identical within the currently running IAT session. Once the correct choice has been made within the last trial of the respective block, the current step is completed and the next step within the task is started. The structure for each trial is the following (see the sketch after this list):
• The trial instructs the ORKImplicitAssociationContentView to hide the error indicator (i.e., the red ×), sets the mode to trial (ORKImplicitAssociationModeTrial) to show the term, and hides the start label. It also passes the term and its corresponding category (ORKImplicitAssociationCategory, attribute or concept) to be displayed, and activates the buttons in order to allow taps. Note is also taken of the point in time at which the trial was started.
• As soon as a button is tapped, the view controller checks whether this event took place on the correct side of the display. If not, the ORKImplicitAssociationContentView is instructed to show the error indicator and to log that an error has been made within the respective trial; this is repeated as long as the user keeps making an incorrect choice. Once the expected answer has been given, the ORKImplicitAssociationContentView is instructed to disable the buttons and hide the error indicator. An ORKImplicitAssociationResult is then created to save the time span (latency) between when the term was initially shown and the point in time when the correct answer was given. The trial code of the correct term, the pairing of the categories, and whether the answer was initially incorrect are recorded as well.
• Finally, the process is started once again for the next trial.
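The following simplified, hypothetical sketch illustrates this per-trial bookkeeping; in the fork, the actual logic lives in ORKImplicitAssociationStepViewController, and the names used here are illustrative only.

```swift
import Foundation

struct TrialOutcome {
    let latencyMs: Double        // stimulus onset until the correct answer
    let initiallyIncorrect: Bool // whether the first tap was on the wrong side
}

final class TrialClock {
    private var start = Date()
    private var sawError = false

    // Called when a new stimulus is shown (and the buttons are re-enabled).
    func beginTrial() {
        start = Date()
        sawError = false
    }

    // Called for every button tap; returns an outcome only once the tap is correct.
    func registerTap(correctSide: Bool) -> TrialOutcome? {
        guard correctSide else {
            sawError = true      // show the red × and wait for another tap
            return nil
        }
        return TrialOutcome(latencyMs: Date().timeIntervalSince(start) * 1000,
                            initiallyIncorrect: sawError)
    }
}
```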
Keeping Track of Results
An ORKImplicitAssociationResult (derived from ORKResult) holds the results per trial. It is meant to keep track of the overall latency (i.e., the time between the initial presentation of a stimulus and the point when the correct answer has been given). The trial code, identifying on which side the term (attribute or concept) correctly matched, and the (concept and/or attribute) pairings employed on either side of the view for that trial are also included, as is whether the initial classification for that trial was correct.
To enable serialization of the IAT's results into JSON, the ORKESerialization class was extended. This base class is available within the ORKTest project provided with ResearchKit and is meant to test functionality during development. For the purposes described here, the added functionality includes taking note of the latency as well as the trial code, the pairing of the categories, and information about whether the user's initial reaction to the respective stimulus was correct.
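As an illustration, a JSON shape for one serialized trial could look like the following sketch; the field names are assumptions mirroring the fields described above, and the actual keys used by the extended ORKESerialization class may differ.

```swift
import Foundation

// Hypothetical JSON shape for one serialized trial result.
struct TrialResultJSON: Codable {
    let latency: TimeInterval      // milliseconds until the correct answer
    let trialCode: String          // which side correctly matched the stimulus
    let pairing: String            // category pairing shown on the left/right
    let initiallyIncorrect: Bool   // whether the first classification was wrong
}

let encoder = JSONEncoder()
encoder.outputFormatting = .prettyPrinted
let sample = TrialResultJSON(latency: 812, trialCode: "left",
                             pairing: "good|overweight vs. bad|normal weight",
                             initiallyIncorrect: false)
let json = try? encoder.encode(sample)  // ready to be written to disk or sent on
```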
Step Objects for the IAT
ORKImplicitAssociationStep forms the basis for active task steps used in the implicit association test and is derived from ORKActiveStep. Used to represent the blocks of the IAT, it manages the respective number of trials of the block in the form of an array (NSArray of ORKImplicitAssociationTrial) and also keeps track of whether the respective block is a sorting or pairing block (ORKImplicitAssociationBlockType).
Keeping Track of a Trial's Information
Objects of type ORKImplicitAssociationTrial hold the information for a trial within a block. This includes the term to be displayed (either text or an image), the category (ORKImplicitAssociationCategory) of that term (either attribute or concept), as well as the initial items shown on either the left or right side of the view, containing either the attribute or concept category name (for sorting blocks) or the attribute (for pairing blocks). For pairing blocks, the concept category name used for the left and right side is always stated. In addition, the correct term (ORKImplicitAssociationCorrect), representing the left or right attribute (for attribute sorting blocks) or the first or second target on the left or right side, respectively, is specified. There is also a computed property returning an identifier (ORKTappingButtonIdentifier) indicating whether the left or right button needs to be chosen for giving the correct answer.
Supporting the UI Design
ORKImplicitAssociationHelper defines the colors to be used for displaying attribute (blue) and concept (green) names, button side names (left or right, in light blue), and the red error indicator symbol ×. These colors are not only used in the active steps of trials, for both sorting and pairing blocks (ORKImplicitAssociationContentView), but also for the attribute and concept instructions (ORKImplicitAssociationCategoriesInstructionStep) as well as for the instruction pages (ORKInstructionStep) before each block. This will be explained later on.
ORKImplicitAssociationHelper also contains a method to convert text that may contain XML-based tags (e.g., <attribute>, <concept>, ...) into an attributed string, which in turn can be displayed in a view. This is provided in order to simplify the color design of instruction pages for developers.
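Since the helper's exact parsing rules are not documented here, the following is only a rough sketch of the idea; the tag set and the fallback color are assumptions.

```swift
import UIKit

// Converts simple <tag>…</tag> spans into a colored attributed string.
func attributedInstruction(from text: String) -> NSAttributedString {
    let colors: [String: UIColor] = ["attribute": .blue, "concept": .green]
    let result = NSMutableAttributedString()
    var remainder = text[...]
    while let open = remainder.range(of: "<"),
          let close = remainder.range(of: ">", range: open.upperBound..<remainder.endIndex) {
        let tag = String(remainder[open.upperBound..<close.lowerBound])
        guard let end = remainder.range(of: "</\(tag)>") else { break }
        // Plain text before the tag keeps the default attributes.
        result.append(NSAttributedString(string: String(remainder[..<open.lowerBound])))
        let inner = String(remainder[close.upperBound..<end.lowerBound])
        result.append(NSAttributedString(string: inner,
                                         attributes: [.foregroundColor: colors[tag] ?? .black]))
        remainder = remainder[end.upperBound...]
    }
    result.append(NSAttributedString(string: String(remainder)))
    return result
}
```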
User Instruction
An ORKImplicitAssociationCategoriesInstructionStep can display all attribute and concept stimuli in a tabular view once an IAT has been started. It is subclassed from ORKTableStep and has methods to pass the category names for the two attributes and both concepts as well as the terms for each category. Terms can either be texts (for word attribute and concept stimuli) or image file names (for image concept stimuli), kept in arrays of the appropriate object types.
Defining the Order of Things
ORKOrderedTask+ORKPredefinedActiveTask is an extension of ORKOrderedTask to define the steps (ORKActiveStep and ORKStep) for an active task (implementing the ORKTask protocol) to be presented by a task view controller (ORKTaskViewController).
Two functions with partly different parameterizations were added that allow creating an IAT task depending on the respective requirements. Both implement three parameters that are configurable for all active tasks: firstly, a textual identifier for the task; secondly, an optional description of the data collection's intended purpose; and thirdly, predefined task options (ORKPredefinedTaskOption), e.g., to exclude instruction and conclusion steps, or to prevent data collection from the device's sensors (such as accelerometer, location, or heart rate data). Both functions also allow passing the required IAT-specific parameters, e.g., the two attribute and two concept category names, as well as the terms (texts for word attribute and concept stimuli or images for image-based concept stimuli) for each category (provided as arrays of the appropriate data types).
The second of the two functions differs from the first in that it allows passing additional parameters, such as the number of trials for each of the seven blocks. It also makes it possible to enable or disable the randomization of concepts and attributes to either side. If not specified otherwise, blocks 1, 2, 3, and 6 are set up with 20 trials, blocks 4 and 7 with 40 trials, and block 5 with 28 trials, as in the Project Implicit US web IAT (34). Also, per default, the sides on which attributes are displayed are not randomized (the first attribute is always presented on the left and the second on the right), while the concepts are randomized to either side; see also Table 1 above. All randomizations are programmatically based on the RC4 cipher (Rivest Cipher 4).
A complete test run is constructed as follows: first, the concepts for blocks 1 and 5, as well as the attributes for block 2, are randomly selected from the respective sets of available stimuli. The numbers of chosen stimuli correspond to the numbers of trials in each block (see Table 1). Then, for each of blocks 3, 4, 6, and 7, attribute and concept stimuli are randomly chosen so that within each block, the numbers of the contrasting attributes and/or stimuli are balanced.
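A hypothetical helper illustrating this balanced random drawing might look as follows. Note that the fork bases its randomization on RC4, whereas this sketch simply uses Swift's system random number generator.

```swift
// Draws a balanced, randomly ordered set of stimuli for a pairing block
// (illustrative only; names are not part of ResearchKit or the fork).
func balancedTrialStimuli<T>(attributes: [T], concepts: [T], trialCount: Int) -> [T] {
    precondition(!attributes.isEmpty && !concepts.isEmpty, "stimulus pools must not be empty")
    precondition(trialCount % 2 == 0, "pairing blocks balance both categories")
    func draw(_ pool: [T], _ n: Int) -> [T] {
        var drawn: [T] = []
        while drawn.count < n {          // repeat the pool if it is smaller than n
            drawn.append(contentsOf: pool.shuffled())
        }
        return Array(drawn.prefix(n))
    }
    let half = trialCount / 2
    return (draw(attributes, half) + draw(concepts, half)).shuffled()
}
```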
Next, the steps of the IAT, managed by the IAT active task, are constructed. For this purpose, first, an overview of all attribute and concept stimuli is created, as well as an introductory instruction for the IAT (based on ORKImplicitAssociationCategoriesInstructionStep and ORKInstructionStep, respectively). Both steps can be skipped if an option specifying the exclusion of instruction steps (ORKPredefinedTaskOptionExcludeInstructions) is passed to the IAT active task method. Secondly, for each of the seven blocks, one step object is created (ORKImplicitAssociationStep), and the trials are added using ORKImplicitAssociationTrial with the terms previously selected as described above. Before each block, an introductory instruction step (ORKInstructionStep) is added to provide information about what is expected in the respective block; these instruction steps can likewise be excluded by passing the option mentioned above. Lastly, a final step is added to inform users about the completion of the IAT task (ORKCompletionStep, which can be skipped via ORKPredefinedTaskOptionExcludeConclusion). Both methods finally return the created IAT active task, which can be presented to users via an ORKTaskViewController within the app.
Integrating the ResearchKit-Based IAT in a Project
To build an IAT app for the iOS or iPadOS platform, the ResearchKit-based IAT elements described in the previous paragraphs can be employed in two different ways: firstly, by using a predefined IAT active task or, secondly, by specifying the IAT steps manually.
Using the Predefined Active Tasks
As described for ORKOrderedTask, a predefined active task can be created by calling one of two methods. This provides the full IAT implementation, with its seven blocks and the corresponding default (or adapted) numbers of trials for each block, as well as the (optional) instruction and completion steps. Both methods return an ORKTaskViewController (subclassed from UIViewController) that can be shown in any iOS or iPadOS application.
The results of the IAT can then be obtained from the task view controller's result property (of type ORKTaskResult, subclassed from ORKCollectionResult), which in turn holds the results in its results property, an array of ORKStepResult objects, each containing all of its ORKImplicitAssociationResult objects. The step results for each IAT block are identified by implicitAssociation.block1 to implicitAssociation.block7.
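A sketch of reading the per-block results after the task has finished could look as follows. ORKImplicitAssociationResult requires the authors' ResearchKit fork (45); only the "implicitAssociation.block1" to "block7" identifiers are taken from the text, the rest is an assumed usage pattern.

```swift
import ResearchKit

func collectTrialResults(from taskViewController: ORKTaskViewController) {
    let taskResult = taskViewController.result
    for block in 1...7 {
        let identifier = "implicitAssociation.block\(block)"
        guard let stepResult = taskResult.stepResult(forStepIdentifier: identifier) else { continue }
        for case let trial as ORKImplicitAssociationResult in stepResult.results ?? [] {
            // Each trial result carries the latency, trial code, category
            // pairing, and whether the first classification was incorrect.
            _ = trial
        }
    }
}
```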
Using the Active Steps Manually
The IAT active step may also be used separately as a step inside any task a programmer decides to build. An ORKImplicitAssociationStep has to be initialized with a unique identifier. A block type (ORKImplicitAssociationBlockType) can be assigned to the step to distinguish between sorting and pairing blocks. Finally, an array of trials (ORKImplicitAssociationTrial, one per trial within the block) has to be assigned to the trials property of the respective step. The results can be obtained in the same manner as described for the predefined active task, by identifying the step results via the identifiers that were specified.
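In sketch form, the manual setup just described might read as follows; the property and case names (blockType, .pairing) are assumptions derived from the text, not verified API of the fork.

```swift
import ResearchKit

// Hypothetical manual setup of a single pairing block.
let step = ORKImplicitAssociationStep(identifier: "customPairingBlock")
step.blockType = .pairing   // ORKImplicitAssociationBlockType
step.trials = []            // fill with ORKImplicitAssociationTrial objects
```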
An Example of a Mobile IAT on the iOS Platform
The following paragraphs and figures give a short overview of an actual (English language) implementation of the IAT task predefined in ORKOrderedTask, specifically targeting weight bias (i.e., normal weight vs. overweight individuals). Text shown in italics indicates instruction and completion steps that can be omitted by passing the appropriate options. Figure 3 shows the introduction for the IAT active task: Figure 3A introduces the concept and attribute stimuli for the IAT, while Figure 3B informs users about the structure of the IAT itself and reminds them to stay attentive.
In Figure 4, the first block of the IAT is demonstrated. Figure 4A introduces the block, with the left button to be tapped for concept stimuli representing overweight individuals and the right button for concept stimuli depicting normal weight individuals, along with general information about the task. Figure 4B shows the information screen just before the test block begins. Terms representing silhouettes of overweight or slender people are presented in random order, as illustrated in Figures 4C,D. The second block is implemented similarly, albeit this time using positively and negatively connoted textual stimuli, to be classified as either "good" or "bad," in place of the silhouette images.
The third block combines the categorization tasks of the previous blocks: Here, the left button is to be tapped if either "good" attribute stimuli or silhouettes of "overweight people" are shown, while a tap on the right button is expected for either "bad" attribute stimuli or silhouettes representing individuals of normal weight. Again, the order in which the stimuli are shown is randomized, and care is taken to use the same quota (i.e., 5 per kind) for each type of stimulus.
Block 4 is similar to block 3, but, as specified in Table 1, uses a larger number of trials (40 instead of 20). Again, the number of trials per type of stimulus is balanced, and the actual order in which they are presented is chosen randomly.
Block 5 essentially uses the same configuration as the first block, the difference being that the sides for categorizing "overweight people" and "normal weight people" concept stimuli are swapped. Also, there are 40 trials in Block 5.
Blocks 6 and 7 correspond to blocks 3 and 4, albeit with the assignment of the concepts to the left and right sides being swapped.
Finally, on the last screen, it is possible to thank users for their perseverance in finishing the IAT. Feedback about the results of the test should be provided in other parts of the app, after the actual IAT test has concluded.
The ResearchKit-based classes described above were employed for constructing the IAT app used in the study. This study app made use of silhouettes of overweight and normal weight individuals for the concept stimuli [as they were provided by Project Implicit, (22,23)], as well as a number of terms with positive and negative attributes.
Comparison Between the Native, ResearchKit-Based IAT Version and a Web-Based Implementation
Demographics of the Participants
Participants were recruited from a circle of colleagues and friends. While there were originally 56 participants, full datasets were only available for 51 individuals (see Table 2); for five participants, either answers related to demographics or parts of the test data were missing. Overall, participants were on average 34.9 (SD = 4.7) years of age (with the 21 female participants being slightly, albeit insignificantly, younger than the 30 males; P = 0.392), and there were only insignificant differences between the two genders regarding education (P = 0.448). With respect to body mass index, the differences between both groups were insignificant (P = 0.092). Interest in the topics of diabetes and adiposity differed significantly between the genders only when looking at the data on its original five-point scale (P = 0.045), largely due to the reversal in proportions between the "not at all" and "less" interested strata; rescaled to "not interested," "neutral," and "interested," the differences were negligible (P = 0.877). Neither were there any major differences in explicit or implicit ratings between female and male participants (see Table 2). Only for the numeric D score value of the native app used with the touch screen interface was P significant (P = 0.044), but even in this case there were no relevant differences considering the D score category (P = 0.104). In all other cases, differences in ratings between the genders were negligible (i.e., P > 0.05).
Overall, for the participants included in this evaluation, the influence of gender on attitudes (Table 2) regarding personal preference for normal weight over overweight individuals seems negligible. For other demographic factors, due to the relative homogeneity of the participants, there was insufficient data for a reliable assessment. It was therefore decided not to include demographic factors in the evaluations presented in the following sections.
Comparisons of the Test Variants (Based on Application Type and Input Method)
The following paragraphs address the comparison of the implicit assessments obtained using the four different test variants (app type combined with input method).
D Score Evaluation
D scores between the four test variants, i.e., "native app, keyboard," "web app, keyboard," "native app, touch screen," and "web app, touch screen," do not seem to differ much. Descriptively, independent of the test method applied, there are only insignificant differences between the mean D score values of the four test variants (see Table 3). This is to be expected, as D scores are calculated as relative values based on the latencies recorded within each of the four blocks of an IAT test. Consistently longer (or shorter) latencies depending on the input method or application type, which, as the following paragraph will show, are a reality, should therefore not influence the calculated D scores, even though (average) latencies clearly differ. Additionally, the order in which the four tests were administered to each participant was randomized, and there was a short pause of variable length (usually around 1 min) between the tests. Thus, for the overall group of participants, fatigue due to repeated testing should also not have played a role with respect to the D score calculation (see below for a closer look at the influence of test order on the results).
Evaluation of Latency Values
While there were no significant differences in the calculated D scores between the four test methods, the same does not hold true regarding the (mean) latencies. The results differ significantly, as can be seen in Table 4. Similar to the D score calculations, where latency values below 400 and above 10,000 ms were filtered out, these were removed here as well, thus reducing the number of measurements per combination from the maximum number of 6,120 (51 × 120 per test) to the numbers specified in the respective table columns.
The data suggests that, at least descriptively, on average, keyboard inputs tend to be faster (i.e., to have a lower latency) than when a touch screen interface is used. Considering mean latencies, there also appears to be a noticeable difference between using the app and the web-based versions of the test. However, as the order in which the tests were performed was randomized for each participant, this warrants additional investigations (see below).
Susceptibility to Errors Depending on App Type and Input Method
It was also of interest to what extent the input mode or program type being used had an influence on the number of errors the users made when performing the four tests. Descriptively, there appear to be slightly more mistakes on average for the web-based app, although the differences between the four combinations of app type and input method are statistically insignificant (P = 0.733, Table 5).
Proper Randomization of Test Order vs. Test Type
For evaluating the data with respect to the order in which the tests were performed per participant, it was first of interest whether there was adequate randomization. Table 6 shows the distribution of the four variants vs. the order in which the tests were taken. There was no significant dependency (P = 0.752) between the type of test and the order in which the tests were administered. Thus, randomization was satisfactory.
D Score Evaluation
Descriptively, there does seem to be a small trend in D scores and corresponding ratings depending on the order in which the tests are performed, independent of the type of test that was taken. However, while mean D scores slightly decrease with each additional test, this is not statistically significant (P = 0.096), as can be seen in Table 7.
Evaluation of Latency Values
As shown in Table 8, for the latency values, the order in which the tests are being administered is, however, important (P < 0.001). This holds true independent of which kind of test combination (i.e., native app or web-based testing, using either keyboard or touch screen input) is being applied. As practice increases, the participants' measured latencies decrease. Similar to the D score calculation, where latency values below 400 ms and above 10,000 ms were filtered out, these were removed here as well, thus reducing the number of measurements per combination from the maximum number of 6,120 (51 × 120 measurements per test) to the numbers presented in Table 8.
The data thus supports the assumption that overall, there is indeed a dependency of the latency values measured in the trials on the order of tests: for the later tests, the measured latencies are on average lower than for the earlier tests. This may reflect the increasing experience of the participants in performing the tests multiple times (even if the input methods and app types differ), as well as familiarization effects with respect to the IAT itself.
Susceptibility to Errors Depending on Test Order
Similar to the type of test being applied, there was no apparent influence regarding the average number of errors per test with respect to the order in which the tests were taken ( Table 9, P = 0.85).
Post-hoc Power Calculation
Post-hoc power analysis of the linear ANOVA showed a power of 0.8 for an effect size of f² = 0.25, α = 0.05, and four predictors.
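For reference, assuming the conventional fixed-model multiple regression setup (e.g., as implemented in G*Power; the exact tool used is not named here, so this convention is our assumption), the underlying computation is

$$\lambda = f^2 \cdot N = 0.25 \times 51 = 12.75,$$

with the power then read from a noncentral F distribution with u = 4 numerator and N − u − 1 = 46 denominator degrees of freedom at α = 0.05, which yields approximately 0.8.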
Principal Findings
Based on the easily extensible ResearchKit, we were able to create a responsive app that was appropriate for the purposes of the research presented here. Moreover, feedback from the participants indicated that the app was sufficiently intuitive to use. For others who are interested in using the implicit association test in their own (research) apps, the code of the IAT tasks is available on GitHub (45).
Using the study app, it was possible to show that a mobile, ResearchKit-based implementation of the IAT has the potential to be equivalent to other manners of administering this test. We were unable to find any significant differences between either of the two test methods (established, web-based test method vs. native, ResearchKit-based app) combined with two input methods (touch screen vs. keyboard interfaces): Overall, results for the D scores (and corresponding categories of implicit opinions) did not diverge in a statistically significant manner, and neither did the number of errors change significantly for specific combinations of app type and input method (P = 0.733) or the test order (P = 0.85).
Nevertheless, there were relevant differences in latency values (corresponding to the users' reactions to the stimuli they were presented with) for both the combinations of app and input types, as well as for test order (both P < 0.001). This is, however, at least in part easily explained: Regarding application and input type, it is not only the technology in use which may influence the recorded user latency values. Varying response times of the touch display and the keyboard, as well as differences in input and output delays that may originate in the manner of implementation (including the toolkits being used), may have an impact here. In addition, there are human factors to consider (46), and these may for example be related to differences in posture between using a keyboard or a touch interface when interacting with the app (47), or a user's perception of tactile effects when using the different input methods (46,48). For the test order, with average latency values decreasing with each additional test, it seems sensible to conclude that faster response times may be due to increasing practice. Nevertheless, the differences that were noted regarding latencies do not seem to have influenced the calculated D score values and corresponding scores. This may be due to the manner in which D scores are calculated: As long as latency ranges overall stay in sync for a single test, the influence on the D scores' calculation, which is essentially based on a ratio between the values of individual test blocks, should be negligible.
As such, our results support the idea that, for the iOS platform, ResearchKit is well suited for implementing various kinds of research-related apps [also see, for example, (17, 19, 49-52)], be it for use in a laboratory setting or for research conducted in field studies. This may also extend to similar libraries on other mobile platforms.
There are however several considerations and limitations to be kept in mind that specifically relate to the implementation of the study presented here (see below), as well as the IAT itself, and its implementation.
Development Related Considerations
Standardized frameworks such as ResearchKit (53), or even the PHP- and JavaScript-based framework (20,22,27) that the web application employed in the study was based on, may well be able to facilitate the development of apps to be used in scientific research. Predefined modules such as surveys, consent, and active tasks can be designed, connected, configured, and filled with appropriate content, e.g., informative descriptions and answer options. However, apps built using any type of framework may be required to follow a certain, predefined "look and feel" that may not be fully adjustable to one's desires. This may for example relate to the use of specific styles and layouts for interactive elements such as buttons that a user may interact with. In the case of the two app types compared in the study, this was a concern: We first suspected that differences in the size of the touch buttons, whenever the touch interface was used (i.e., much smaller, round buttons for the native app vs. control elements encompassing a larger area on either side of the screen for the web-based version), might have influenced the measurements, as we thought that the larger elements for the web solution would have been more forgiving with respect to triggering the respective (correct) touch event. However, as seen in Table 4, this was not supported by the average latencies that were recorded: rather, the web-based version was slower in this regard. A possible explanation for this effect might be that, while the PHP part of the web-based app was of course already interpreted by the web server, the JavaScript code still needed to be interpreted in the devices' web browser. This, along with the inherent latencies of the browser interface itself, might have slowed down the interaction compared to the natively running code for the ResearchKit-based app, with latency values increasing accordingly. However, we did not actually measure these effects. Since the slowdown was presumably constant over the entire test run, we do not believe that the calculated D scores were affected.
Also, building an app based on such libraries still has to be done programmatically, thus preventing those unfamiliar with app programming for the respective platform from building their own apps. In contrast, a graphical user interface (GUI), allowing for "drag and drop" building of such apps and providing research data like questions and IAT data in an XML-based format (27), would enable researchers to create research apps more easily. However, a research app does not only consist of user interface elements. While building an app may seem easy when just using predefined steps for consent, surveys, and active steps, adding functionality going beyond the predefined possibilities may require significantly more effort. This may for example be the case when underlying platform features (such as notifications or reminders) are required, or if there is a need to use a participant's location data to determine his or her geographical area. Additionally, adequate methods and procedures for making the results available for evaluation have to be implemented. For this purpose, it is commonly necessary to adapt one's evaluation procedures to varying data formats.
For example, in ResearchKit, all steps (not only surveys and active tasks, but also consent and informational steps) return a nested result object structure. Therefore, one must traverse a tree structure that reflects the entire process of running the application. As such, the data does not only encompass the data relevant to the test, but also additional metadata.
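As an illustration, a minimal sketch of flattening such a nested result tree is shown below; ORKTaskResult and ORKCollectionResult are part of stock ResearchKit, while anything beyond this generic traversal (in particular, how IAT-specific leaf results are interpreted) is left open here.

```swift
import ResearchKit

// Depth-first walk over ResearchKit's nested result structure: a task result
// contains step results, which are themselves collections of leaf results.
func collectLeafResults(_ taskResult: ORKTaskResult) -> [ORKResult] {
    var leaves: [ORKResult] = []
    func walk(_ result: ORKResult) {
        if let collection = result as? ORKCollectionResult {
            // This also descends into consent and instruction step results,
            // i.e., the metadata beyond the test data mentioned above.
            (collection.results ?? []).forEach(walk)
        } else {
            leaves.append(result)
        }
    }
    walk(taskResult)
    return leaves
}
```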
An additional problem for researchers interested in building their own research apps, be it for IAT testing or different purposes, is that the official ResearchKit framework provided by Apple still only supports development for iOS-based devices (i.e., iPhones, iPads, and iPod touch devices) and the Apple Watch. If one were to use devices based on another (mobile) platform for research, this would necessitate an additional implementation on that platform, possibly requiring a complete redesign for that platform. While there are a number of toolkits available that aim at providing basic compatibility with ResearchKit, for example, for development on Android-based devices, not all of these are well maintained, and neither do they currently provide the full functionality of ResearchKit. An example of this is the ResearchStack project (15), maintained by Cornell Tech's Small Data Lab and Open mHealth, which closely follows Apple's ResearchKit application programming interfaces (APIs) and strives to assist with porting ResearchKit-based apps.
Additionally, aside from the storage aspect itself, for the ResearchKit-based IAT implementation presented here, results are (temporarily) stored in the step results (ORKStepResult), to be accessed when a task (an object implementing the ORKTask protocol) completes. In theory, after completion of all tasks, it would seem reasonable for the acquired data to be used to calculate the implicit preference of the test subject based on the D score algorithm, and to provide users with appropriate feedback regarding their implicit preferences. However, as mentioned above, this functionality is not included in the ResearchKit IAT implementation.
Apart from these more generic concerns, there are also a few additional points to consider regarding the chosen approach. These deal with the manner of implementation as well as usability and styling related questions.
Implementation of the IAT
Aside from instruction and completion steps, in ResearchKit, active tasks commonly only include one active step, often only used once while the task is executed. In contrast, the IAT implementation is more complex and uses its active step (ORKImplicitAssociationStep) seven times, once per block (B₁ to B₇). In addition, each block has its own instruction step (ORKInstructionStep). Furthermore, there are additional steps before the IAT test itself is started, such as an instruction step specifically addressing categories (ORKImplicitAssociationCategoriesInstructionStep) and a general instruction step. Altogether, there are 17 steps inside the task; a sketch of this assembly is given below. Moreover, the task includes the logic for randomizing and shuffling the concept and attribute stimuli, and for creating the appropriate sorting and pairing blocks within the respective trials. As, following ResearchKit's approach, all active tasks are specified in a single file, this is not easy to manage: solely based on the number of steps and the configuration logic, there are more than 3,000 lines of code. This can significantly complicate future maintenance of the code should ResearchKit's APIs introduce breaking changes. For sustainability, it would therefore be desirable to recruit additional programmers for the project who would then participate in the ongoing maintenance of the code.
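A hypothetical sketch of that assembly follows; the ORKImplicitAssociation* classes come from the fork described in this paper, their initializers are assumed for illustration, and the stimulus randomization logic is omitted.

```swift
// Hypothetical assembly of the task's 17 steps (initializers assumed).
var steps: [ORKStep] = [
    ORKInstructionStep(identifier: "generalInstruction"),
    ORKImplicitAssociationCategoriesInstructionStep(identifier: "categoriesInstruction"),
]
for block in 1...7 {
    // Each block B1...B7 gets its own instruction step plus one active step.
    steps.append(ORKInstructionStep(identifier: "instructionB\(block)"))
    steps.append(ORKImplicitAssociationStep(identifier: "blockB\(block)"))
}
steps.append(ORKCompletionStep(identifier: "completion"))
// 2 + 7 × 2 + 1 = 17 steps, matching the count given in the text.
let iatTask = ORKOrderedTask(identifier: "implicitAssociationTask", steps: steps)
```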
Usability and Technical Peculiarities of the Mobile Interface
In contrast to other active steps that are available in ResearchKit, IAT-based active steps place significantly higher demands on the screen layout with respect to visual components that need to be shown concurrently. For implementations on the web that usually run on larger scale (computer) screens, such as the one provided by Project Implicit (20), this is much less of a problem than when a small mobile device such as a smartphone is used.
For example, the category names of concepts as well as of attributes have to be positioned in the upper left and right parts of the screen. A term, which may not only consist of a word but also of an image (both representing the respective concept stimuli), has to be shown, ideally at the screen's center. Last but not least,
for recording a user's reaction, two buttons need to be provided on the centered left and right side of the screen, and these buttons are under the constraint that they may not be positioned at the outermost edges of the display. This is to enable users to still hold the devices in their hands without inadvertently triggering the buttons (a danger especially present on devices with curved displays), while also allowing for a one-handed usage approach, i.e., accessing the buttons with the thumb. Moreover, stimuli must not be too long (for word stimuli) or wide (for image stimuli) in order to still fit between the buttons shown on the screen's left and right side. Additionally, for incorrect responses, a red × needs to be presented, and, while this error indicator is shown, there needs to be a text element instructing the user to select the other button instead. None of these elements may overlap in order to make it possible for users to still correctly trigger touch events on the respective buttons. Especially on smaller iOS-based devices such as iPhone and iPod touch models with a screen diagonal of 4.7 inches, this leaves only very little screen space to work with. When longer instructions are shown to users of such devices, the available space may make it necessary to scroll inside the respective text area to see the stimuli shown on the lower side of the screen. Additionally, as instructions for each trial on the web are presented inside the administration screen of the IAT, they have to be moved to an extra screen prior to the actual trial: especially on smartphones, there will otherwise often be insufficient space on the screen. This may negatively impact the user experience.
Also, on current iPhone or iPod touch devices, it is impossible to administer the IAT in landscape view, as there would then be insufficient vertical space to prevent UI controls from overlapping. This is however not only a limitation for the IAT presented here, but also for other types of active tasks (e.g., as they are available in ResearchKit) if these are executed on such devices, even if they use fewer UI controls.
Styling
ResearchKit provides a consistent (somewhat fixed) design for surveys, consent steps, active steps, and other elements, allowing developers to focus on the actual implementation of a research app by using the templates provided by ResearchKit. This makes things more difficult when custom styling is needed, e.g., for the IAT category and button side names in instruction steps.
Recruitment and Study Size
Since we wanted to obtain standardized data for all participants, even going so far as to require that all participants use the same iPad model (and external keyboard), it was necessary to conduct the tests in person.
Due to the ongoing pandemic with its varying contact restrictions, combined with the aforementioned desire to ensure standardization, it was only possible to recruit a limited number of participants (N = 56), and complete data sets were unfortunately only obtained for 51 individuals.
Post-hoc power analysis nevertheless showed a sufficient power of 0.8 for the linear model ANOVA tests. However, a larger number of participants, ideally from a more diverse background, would have been desirable.
Data Evaluation
Calculating the D scores in a reliable manner relies on appropriate data cleaning procedures, i.e., removing latency measurements that are either too long or too short, perhaps indicating that the respective participant was either distracted or possibly unintentionally triggered a response before the actual classification was made. For the presented evaluation, we decided to retain the established cut-off values for the time being (33), thus removing latencies below 400 or above 10,000 ms. Due to the dependency of the latencies on the app type (native vs. web) and input type (touch screen vs. keyboard) as well as the order of the test execution, one should consider whether it would make sense to optimize this in the future. However, the extent of the data recorded in this study was too small to be able to make a statement on this.
Follow Up
Since the datasets for the participants were only recorded in one session per individual and there was no second appointment, no statement can be made as to whether the participants (either individually or as a group) would also have shown similar test results for the mobile or web-based IAT implementations over a longer period of time.
However, this study was not concerned with an evaluation of the longer-term stability of the test procedure or the IAT per se. Rather, it was meant to establish basic comparability of the newly designed native implementation based on ResearchKit with another, already established web-based implementation, which we believe to have accomplished.
Future Work
The results of the presented work point to at least basic comparability of the ResearchKit-based IAT implementation to existing approaches [more specifically, the web-based version provided by (20,22,27)]. Future research will focus on evaluating our approach in a more realistic setting, with a more diverse study population. It is planned to recruit potential participants at a professional conference (e.g., one about diabetes- or obesity-related issues) and to invite the attendees to use the IAT app for a topic related to the conference's focus (such as weight bias).
Ideally, the topic chosen for this more detailed evaluation should be one for which Project Implicit has already acquired and published data for a large number of participants. Many of the published datasets are provided separately on a per-country basis, since social norms and potential biases for specific subject areas may differ in this regard. As this IAT data is commonly available on Open Science Framework's IAT repository (27), the data of our ResearchKit-based app can then be compared to this to determine whether the group of professionals at the chosen conference differs from the (much larger) "Project Implicit" population.
CONCLUSIONS
Despite the limitations of ResearchKit and the current implementation of the IAT, we were able to show at least basic suitability and comparability for administering mobile tests, and, more specifically, those based on ResearchKit, in the social sciences and psychology. Based on the presented implementation of the IAT, researchers (or their IT staff) may easily build their own IAT-based app.
In settings similar to the one described here, ResearchKit does not only allow the IAT to be used in its predefined form, but also provides researchers with the means to adapt the provided active task to their specific requirements. This may even include building an entirely new version of the tasks, either through appropriate parameterization or by subclassing the provided classes. Altogether, this is a good representation of Apple's statement that ResearchKit "[...] allows researchers and developers to create powerful apps [...]" (53), and we expect this approach to be readily transferable to other tests as well. Nevertheless, close collaboration of both researchers and developers remains essential for this to be successful.
DATA AVAILABILITY STATEMENT
The raw data used in the evaluation will be made available by the authors upon reasonable request. The fork of the base ResearchKit functionalities upon which the app used in the study was based is available on GitHub (45). The fork uses the same BSD style license as ResearchKit itself (54).
ETHICS STATEMENT
For the evaluation part of the work presented here, approval was obtained from the local Ethics Committee of Hannover Medical School (study number 8142_BO_K_2918, dated 05.11.2018).
AUTHOR CONTRIBUTIONS
TJ was responsible for programming the ResearchKit-based functionality as well as the app used in the evaluation, recruited the volunteers to be used in the preliminary evaluation and administered the tests. All authors discussed the GUI aspects of the app design. U-VA conceived the part of the study presented here and all authors participated in designing the survey as well as the overall study. UJ adapted the web-based version of the test used in the evaluation and deployed it on the web server. All authors discussed and contributed to the evaluation of the collected data, contributed to writing the manuscript, and approved the submitted version.
Can nano-hydroxyapatite permeate the oral mucosa? A histological study using three-dimensional tissue models
Nano-hydroxyapatite is used in oral care products worldwide, but there is little evidence yet as to whether nano-hydroxyapatite can enter systemic tissues via the oral epithelium. We investigated histologically the ability of two types of nano-hydroxyapatite, SKM-1 and Mi-HAP, to permeate oral epithelium both with and without a stratum corneum, using two types of three-dimensional reconstituted human oral epithelium, SkinEthic HGE and SkinEthic HOE, respectively with and without a stratum corneum. Both types of nano-hydroxyapatite formed aggregates in solution, but both aggregates and primary particles were much larger for SKM-1 than for Mi-HAP. Samples of each tissue model were exposed to SKM-1 and Mi-HAP for 24 h at concentrations ranging from 1,000 to 50,000 ppm. After treatment, paraffin sections from the samples were stained with Dahl or Von Kossa stains. We also used OsteoSense 680EX, a fluorescent imaging agent, to test for the presence of HAP in paraffin tissue sections for the first time. Our results for both types of nano-hydroxyapatite showed that the nanoparticles did not penetrate the stratum corneum in SkinEthic HGE samples and penetrated only the outermost layer of cells in SkinEthic HOE samples without stratum corneum; no permeation into the deeper layers of the epithelium was observed in either tissue model. In the non-cornified model, OsteoSense 680EX staining confirmed the presence of nano-hydroxyapatite particles in both the cytoplasm and extracellular matrix of the outermost cells, but not in the deeper layers. Our results suggest that the stratum corneum may act as a barrier to penetration of nano-hydroxyapatite into the oral epithelium. Moreover, since oral epithelial cell turnover is around 5–7 days, superficial cells of the non-keratinized mucosa in which nanoparticles are taken up are likely to be sloughed off within that time frame. Our findings suggest that nano-hydroxyapatite is unlikely to enter systemic tissues via intact oral epithelium.
Introduction
Nanomaterials are generally defined as entities with at least one dimension in the range of 1-100 nm [1], and in the European Union, 'nanomaterial' has been given an official regulatory definition along these lines. The oral epithelium is a continuously renewing tissue with an estimated turnover of 5-7 days [26]. Unlike the skin, the oral mucosa comprises both cornified and uncornified regions, depending on its location in the mouth [27]. In cornified regions, the stratum corneum or outermost layer is mainly composed of keratin proteins formed by the continuous death of spinous layer cells [28,29], and this keratinous layer forms part of the oral defense mechanism: for any material to penetrate this layer, the '500 Dalton rule' applies, i.e. substances of more than 500 Daltons or approximately 1 nm in size cannot penetrate the stratum corneum [30,31]. As a preliminary step to investigate whether n-HAP particles can enter systemic tissues through the oral epithelium, we studied histologically to what extent n-HAP could penetrate the stratified layers in two types of three-dimensional (3-D) reconstituted human oral epithelial models, one with and one without a stratum corneum, in what we believe to be the first study of its kind using n-HAP particles.
Preparation of n-HAP samples
Two types of n-HAP were used in the present study, both prepared by Sangi Co., Ltd., Japan: one having rod-like nano-scale primary particles and produced by wet chemical synthesis (SKM-1), and the other having smaller, irregularly shaped nanoparticles and produced by similar chemical synthesis followed by grinding in a wet mill (Mi-HAP).
Physicochemical evaluation of SKM-1 and Mi-HAP
(1) Particle size and morphology. To investigate the primary particle size and morphology of SKM-1 and Mi-HAP, each was observed by transmission electron microscope (TEM: JEM-2100HR, JEOL Ltd., Japan). Samples of each powder were mounted onto a collodion-coated copper grid, after which each sample was observed by TEM at an acceleration voltage of 200 kV and a magnification of 200,000 times. The average primary particle size was calculated using 606 particles in 62 fields for SKM-1 and 777 particles in 54 fields for Mi-HAP. SKM-1 and Mi-HAP samples were suspended at 50,000 ppm in the maintenance medium (Episkin, France), and the particle size distribution of each sample was measured using a laser diffraction particle size distribution analyzer (LA-950, HORIBA Ltd., Japan).
(2) Specific surface area. The specific surface area of each nanomaterial was measured using a surface area and pore size analyzer (SA 3100, Beckman Coulter, Inc., USA) with N₂ as adsorbent at −196˚C after outgassing the samples for 20 min at 120˚C. The Brunauer-Emmett-Teller (BET) specific surface area was calculated for each from the N₂ adsorption isotherm. The amount of each sample used for specific surface area measurement was approximately 0.1 g, and both samples were measured in triplicate.
(3) Zeta potential. Measurement of zeta potential was outsourced to Shimadzu Techno-Research, Inc. SKM-1 and Mi-HAP were each suspended at 1,000 ppm in the maintenance medium and the pH of each suspension immediately recorded. Samples were collected in a capillary cell and the zeta potential was measured using a zeta potential analyzer (Zetasizer ZS, Malvern Instruments, UK). The zeta potential and pH were measured in duplicate.
Mucosal permeability testing
Two types of 3-D oral mucosal tissue models were used to investigate the permeability of each n-HAP sample into the oral epithelium: a human gingival epithelial model with stratum corneum (SkinEthic HGE) and a human oral epithelial model without stratum corneum (SkinEthic HOE) (Episkin, France). These tissue models were cultured with SkinEthic maintenance medium according to the manufacturer's directions. After preincubation, each insert dish with the 3-D tissue was moved to a 24-well multi-plate previously filled with 300 μL of maintenance medium. SKM-1 and Mi-HAP were respectively suspended in maintenance medium at different concentrations, and 50 μL of each respective suspension was added to an insert dish. Plates were then incubated at 37˚C in a 5% CO₂ atmosphere for 24 h. After incubating, all 3-D cultured tissues were fixed in Lilly's buffered formalin solution at 4˚C for 24 h and removed from the insert dish carefully, together with the polycarbonate filter in each case, using a micro-knife. The tissues were dehydrated in an alcohol series and xylene and then embedded in paraffin according to the conventional method. Slides were prepared by slicing at a thickness of 3 μm.
The doses of SKM-1 and Mi-HAP were calculated as follows. Collins et al. reported that the total surface area of the adult oral cavity is 214.7 ± 12.9 cm² [32]. Presuming that adults use 1 g of toothpaste containing 10% n-HAP in one brushing, the net amount of exposure to n-HAP would be 100 mg, or on a per-unit-area basis 465.7 ± 26.3 μg/cm² using those two values. This calculated value corresponds to about 5,000 ppm of n-HAP when suspended in 50 μL of maintenance medium fluid. Based on this concentration, the doses of n-HAP were set at 0, 1,000, 5,000, 10,000 and 50,000 ppm in 50 μL of medium, with medium only (zero n-HAP) added to insert dishes as a negative control. Each test group consisted of 3 tissue samples.
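As a worked version of this conversion (the nominal insert area of about 0.5 cm² used in the last step is our assumption for illustration; it is not stated in the text):

$$\frac{1\,\mathrm{g} \times 10\%}{214.7\,\mathrm{cm}^2} = \frac{100{,}000\,\mu\mathrm{g}}{214.7\,\mathrm{cm}^2} \approx 465.7\,\mu\mathrm{g/cm^2}$$

At an insert area of ~0.5 cm², this corresponds to roughly 233 μg of n-HAP delivered per insert; suspended in 50 μL of medium, that is about 4,660 μg/mL, i.e. approximately 5,000 ppm (w/v).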
Detection of n-HAP particles by histochemical and fluorescent methods
All tissue sections, including negative controls, were stained by Dahl [33] and Von Kossa [34] methods for histochemical detection of SKM-1 and Mi-HAP. Calcium deposits in tissue stain reddish orange with alizarin red S, contained in Dahl staining solution, and calcium phosphate and calcium carbonate deposits stain black or blackish brown with Von Kossa stain. After staining, each tissue section was observed using an upright microscope (BX 53, Olympus Corporation, Japan) at a magnification of 600 times.
As a new and potentially clearer method of investigating the localization of n-HAP in histological samples, a further group of paraffin sections of SkinEthic HOE tissue were stained with OsteoSense 680EX (PerkinElmer, Inc., USA), a fluorescent dye commonly used in in vivo applications because of its affinity for HAP. OsteoSense 680EX is usually administered intravascularly for the imaging of bone resorption and/or regeneration sites because it binds specifically to biological HAP [35,36]. However, there has been no previous report using OsteoSense 680EX histologically, and we believe our study is the first to test its ability to detect synthesized n-HAP in paraffin tissue sections.
First, we tested whether OsteoSense 680EX could bind to the chemically synthesized n-HAP used in our study. SKM-1 and Mi-HAP were respectively mixed with a solution of OsteoSense 680EX prepared to be 0.08 nmol/mL in DW, and each mixture was stirred at 37˚C for 3 h. Each sample was then centrifuged at 12,000 rpm for 10 min and the precipitate washed in DW and treated for 5 min with an ultrasonic device (USC-100Z38S-22, Ultrasonic Engineering Co., Ltd., Japan), and this procedure was repeated 3 times. The precipitate of SKM-1 or Mi-HAP was then suspended in a small amount of DW, and a drop of each suspension was placed on a slide glass and sealed with glycerol (Wako, Japan). As a negative control, DW alone was used without addition of OsteoSense 680EX under the same conditions.
In a preliminary experiment using n-HAP exposed tissue, no red fluorescence after staining with OsteoSense 680EX could be detected. We suspected that the cause may be masking of n-HAP by the presence of protein and found that after pre-treatment of the tissue with trypsin, a routine procedure used in immunostaining, HAP could be detected in the tissue by staining with OsteoSense 680EX.
As a result, each paraffin section, after deparaffinization, was first immersed in a 0.01% trypsin solution in DW (TRYPSIN 1:250, Difco Laboratories, USA) at 37˚C for 3 min, and washed 3 times with DW. To stop further digestion by trypsin, each section was then immersed in a 0.1% trypsin inhibitor solution in DW (Trypsin Inhibitor from Soybean, Wako, Japan) at 37˚C for 15 min, then washed 3 times with DW. Tissue sections were then incubated in 150 μL of 0.08 nmol/mL OsteoSense 680EX at 37˚C for 3 h. After incubating, each section was washed with running water for 5 min, and then with DW 3 times. Water was removed as much as possible from the slide and each tissue was sealed with glycerol. All sections were observed using a confocal laser scanning microscope (TCS-SP5, Leica Microsystems, Germany) under the following conditions: excitation wavelength 633 nm; emission wavelength 680 ± 10 nm; magnification of 630 times.
TEM observation of n-HAP samples
TEM images of SKM-1 and Mi-HAP (magnification ×200,000) are shown in Fig 1. SKM-1 was observed to be rod-like in shape, with an average primary particle size of 72 nm × 15 nm (Fig 1A). Mi-HAP was seen to be irregular in shape, with an average primary particle size of 53 nm × 9 nm (Fig 1B).
Specific surface area, particle size and zeta potential
The average values of the specific surface area, particle size, and zeta potential of SKM-1 and Mi-HAP, and the pH values of their suspensions, are shown in Table 1. The specific surface area of Mi-HAP was about three times larger than that of SKM-1. The results of particle size distribution analysis in the maintenance medium showed that SKM-1 formed aggregates at the micron level and Mi-HAP formed aggregates at the nano level. Mi-HAP had a larger negative charge than SKM-1, and the pH value of the Mi-HAP suspension was higher than that of the SKM-1 suspension.
Histological evaluation
Images of SkinEthic HGE tissue stained by Dahl and Von Kossa methods are shown in Figs 2 and 3 respectively. All sections in the control groups without addition of n-HAP showed a negative reaction to the stains. In the SkinEthic HGE group stained with Dahl (Fig 2), several aggregates stained reddish orange were observed on the surface of the stratum corneum in the SKM-1 group at concentrations of 5,000 ppm or more (arrows), but not within the stratum corneum or the squamous epithelium. In the case of epithelium exposed to Mi-HAP, positive staining was not observed on or within the tissue at any concentration.
Von Kossa stained images of SkinEthic HGE tissue are shown in Fig 3. The presence of calcium-containing deposits (stained black; arrows) was observed on the surface of the stratum corneum in the SKM-1 group at concentrations of 5,000 ppm and more, but not within the stratum corneum or the squamous epithelium. In contrast, no black staining was observed at any concentration in the Mi-HAP group. The histological findings observed in the SkinEthic HGE sections stained with Von Kossa were similar to those observed with Dahl stain.
In contrast, in the SkinEthic HOE group, without stratum corneum (Fig 4), positive reddish orange staining by Dahl was observed in the cytoplasm of superficial epithelial cells at concentrations of 1,000 ppm and above in tissue exposed to both SKM-1 and Mi-HAP. The extent of positive reaction increased with increasing concentrations of n-HAP. Positive staining was stronger in tissues exposed to SKM-1 than in tissues exposed to Mi-HAP, and a diffuse reaction around superficial cells was observed in the SKM-1 group at a concentration of 50,000 ppm. However, positive staining was not detected in the deeper layer of stratified cells in either group.
Von Kossa stained images of SkinEthic HOE tissue, without stratum corneum, are shown in Fig 5. Calcium-containing black or blackish brown deposits were observed on the surface and within the cytoplasm of cells of the outermost epithelial layer both in the SKM-1 and Mi-HAP groups, at concentrations of 1,000 ppm or more, and the amount of deposits was concentration-dependent, though the degree of positive staining was less in the Mi-HAP group than in the SKM-1 group. However, black or blackish brown deposits were not detected in the deeper layer of stratified cells in any group, similar to the result observed with Dahl staining.
Evaluation of fluorescent staining
Histological evaluation using Dahl and Von Kossa stains, which detect the presence of calcium, suggested that n-HAP particles could not penetrate the stratum corneum or enter the underlying non-keratinized stratified squamous epithelium of the tissue models tested. We attempted to confirm this, and investigate the localization of n-HAP particles in the non-keratinized oral epithelium more clearly, by using OsteoSense 680EX, a fluorescent dye known to bind specifically to HAP in biological tissues, and the results are shown in Figs 6 and 7.
First, preliminary testing confirmed that OsteoSense 680EX could bind to chemically synthesized n-HAP (Fig 6). Confocal laser scanning microscopy showed fluorescence at 680 nm for both SKM-1 and Mi-HAP (B and D, respectively) upon treatment with OsteoSense 680EX, compared with untreated controls (A, C).
Images of SkinEthic HOE tissue treated with n-HAP then stained with OsteoSense 680EX are shown in Fig 7. Fluorescence was detected in the cytoplasm of cells in the outermost layer in all tissues at concentrations of 1,000 ppm or more, and the intensity of fluorescence was concentration-dependent, as seen previously with Dahl and Von Kossa staining. However, no fluorescence was detected in the deeper layers; higher-magnification views are shown in Fig 7D' and 7H'. Quantities of both n-HAP aggregates were found in the outermost layer of epithelial cells, and a small amount of fluorescence was also observed in the cytoplasm and, in the case of Mi-HAP, also in the extracellular matrix of the second layer of cells, with a much larger amount of deposits observed for Mi-HAP than for SKM-1. However, no fluorescence was found in the deeper layers of cells for either type of n-HAP when observed at higher magnification.

(Figs 2 and 3 captions: Dahl- and Von Kossa-stained SkinEthic HGE tissue, respectively; left and middle columns, tissues exposed to SKM-1 and Mi-HAP; right column, control; broken line, boundary between stratum corneum and non-keratinized stratified squamous epithelium; numbers, applied n-HAP concentration; bar, 20 μm. Calcium-containing deposits (arrows; reddish orange with Dahl, black with Von Kossa) were observed on the surface of the stratum corneum in the SKM-1 group at 5,000 ppm or more, but not within the stratum corneum or the underlying stratified cell layer; tissues exposed to Mi-HAP were negative at all concentrations.)
Discussion
Recently, nanomaterials have come to be used in various fields, and the use of nanotechnology in oral care products is increasing worldwide. However, there is still insufficient data on the behavior of nanomaterials in the oral cavity, including whether nanoparticles could enter the bloodstream and systemic tissues via the oral mucosa. We focused on toothpaste, which accounts for a large share of oral care products. Toothpastes containing HAP, which is the main component of tooth enamel, are now widely available, some of them containing n-HAP.
We carried out what we believe to be the first study of its kind using 3-D reconstituted oral mucosal tissue to examine histologically whether n-HAP particles can permeate the oral mucosa, and our results suggest that n-HAP is unlikely to enter the systemic tissues via this route. According to one generally accepted EU definition, even aggregates are classified as nanomaterials if the primary particle size is in the range of 1-100 nm [37]. Two types of n-HAP, SKM-1 and Mi-HAP, were used in this study. SKM-1 showed a larger primary particle size and a smaller negative surface charge than Mi-HAP. It is known that specific surface area increases as primary particle size decreases [38]. The average specific surface area of Mi-HAP was much larger than that of SKM-1, which supports the results of the primary particle size evaluation by TEM.
SkinEthic HGE with stratum corneum and SkinEthic HOE without stratum corneum were used as 3-D oral mucosal tissue models in this study. The exposure amounts of SKM-1 and Mi-HAP nanoparticles administered were calculated based on the estimated surface area of the oral cavity and the amount of toothpaste normally used, to emulate actual likely exposure during tooth brushing, similar to the calculation used by Scheel and Hermann in an earlier study that tested the penetrability of n-HAP in a 3-D human corneal epithelial tissue model (SkinEthic HCE) [39]. Our study however is the first to test n-HAP penetrability using reconstituted 3-D human oral mucosal tissue.
Histological staining by Dahl and Von Kossa in the present study identified calcium deposits in the 3-D tissue exposed to both kinds of n-HAP, but not in untreated control tissue. It is known that HAP dissolves below around pH 5, the most frequently reported value being pH 5.5 [40]. Both types of n-HAP in our study can therefore be presumed to have remained in the form of solid particles when applied to the tissues, since the pH value of the SKM-1 and Mi-HAP suspensions in maintenance medium was 7.5 in both cases. This supports the view that the positive reaction observed with Dahl and Von Kossa staining reflected the presence of n-HAP particles. Moreover, the positive reactions observed with both stains were at the same locations in the treated tissues.
In the case of SkinEthic HGE tissues, attachment of Dahl- and Von Kossa-positive deposits on the outermost layer of the stratum corneum was observed only for SKM-1 at 5,000 ppm or more, and no positive reaction within the stratum corneum or in the spinous layer was observed for either type of n-HAP. The reason for this adhesion of SKM-1 but not Mi-HAP to the surface layer of the stratum corneum is unclear. Any n-HAP particles merely precipitating on the surface of the stratum corneum would be likely to be washed away during preparation of the paraffin blocks, so the fact that deposits were observed in the final tissue section indicates strong adhesion to the superficial layers of the stratum corneum. The stratum corneum of the oral mucosa is comprised mainly of keratin proteins rich in glycine, serine, leucine, and glutamic acid, formed by the continuous emergence and death of underlying spinous layer cells [26]. Bulk hydroxyapatite is known to adsorb many kinds of proteins, depending on their type and physical properties, environmental factors, etc. [41]. However, little is known about the protein adsorption properties of n-HAP. SKM-1 and Mi-HAP differed not only in size but also in zeta potential, and we postulate that the larger absolute value of surface charge shown by Mi-HAP may be one reason why attachment of Mi-HAP on the surface of the stratum corneum was not observed. The fact that neither type of n-HAP was observed within the stratum corneum or in the underlying spinous layer is supported by the '500 Dalton rule', according to which substances of more than 500 Daltons, or approximately 1 nm in diameter, cannot permeate into the stratum corneum of human skin [30,31].

(Fig 8 caption: higher magnification of Fig 7D' and 7H', confirming that OsteoSense 680EX-positive n-HAP was located on the surface and in the cytoplasm of cells only in the outermost layers, occasionally reaching the second layer in the case of SKM-1 (left: arrowhead) and the third layer and surrounding extracellular matrix in the case of Mi-HAP (right: yellow arrowheads), with no positive reaction in the underlying layers for either type of n-HAP.)
In the non-keratinized SkinEthic HOE model, positive reactions to Dahl and Von Kossa staining were observed in the cytoplasm of the superficial layer of cells at all concentrations of both SKM-1 and Mi-HAP. The positively stained area was diffuse in the case of Dahl, extending to the cytoplasm and extracellular matrix with increasing concentrations of n-HAP, and the outline of the deposits was unclear, whereas in the case of Von Kossa, the positive reaction was observed merely as large black or blackish brown aggregates. Moreover, since both stains are used to detect the presence of calcium in general, the fact that the deposits comprised n-HAP could only be postulated but not confirmed.
However using the fluorescent dye OsteoSense 680EX, which has hitherto been used to detect HAP specifically in in vivo applications, we observed a positive fluorescence reaction in the SkinEthic HOE tissue sections at exactly the same locations and intensity as seen with Dahl and Von Kossa stains, confirming the presence of n-HAP and showing for the first time that OsteoSense 680EX can be used to detect synthetic HAP in histological specimens.
Although the 3-D reconstituted human oral mucosa used in our study closely resembled the cellular structure of the oral mucosa in vivo, environmental factors operating in the oral cavity were obviously not present. Salivary secretions contain substances with many different functions, including minerals, proteins such as immunoglobulins, enzymes, mucins, and nitrogen compounds [42]. Little is known about the defense properties of saliva against solid particles; however, the attachment of proteins from body fluids such as saliva (the so-called 'protein corona') is known to cause aggregation of nanoparticles, increasing their size and altering their surface functions, resulting in a reduction in the amount of nanoparticles taken up into cells [23]. As with other body portals (the airways, female reproductive tract, etc.), a complex layer of mucus on the surface of the oral epithelium [43,44], approximately 70-100 μm thick [45] and containing highly glycosylated mucin fibers, acts as a viscoelastic barrier that traps nanoparticles. Because this mucus is supplied from saliva, its turnover time is short, and trapped particles are reported to be removed within minutes [43,46], suggesting that at least a portion of any n-HAP present in the oral cavity may be trapped by mucus and not permeate into the epithelial cell layers.
In areas of the oral epithelium without stratum corneum, even if nanoparticles penetrate the mucus defense mechanism and reach epithelial cells, further barriers lie in wait. Non-keratinized oral epithelium is known to be thicker than epithelium with stratum corneum, comprising tissue approximately 500-800 μm in thickness [47]. Cells progressively differentiating upwards from the basal to the superficial layer of this epithelium change in morphology, and it is known that lipids derived from membrane-coating granules in the upper third of the mucosa form a barrier against permeation of foreign entities [48]. Furthermore, cells are constantly renewed, and the turnover of epithelial cells from the basal to the superficial layer is reported to be 5-7 days [48], indicating that any superficial oral epithelial cells that are penetrated by nanoparticles would be sloughed within this time frame. We found that there was no n-HAP penetration into the deeper layers of epithelial cells even at an exposure level 10 times higher than what could be considered a normal level during regular toothbrushing.
In addition, while the exposure dosages we used were based on actual likely exposure to nanoparticles from a toothpaste per unit area of the oral cavity, the exposure time was 24 h in this study, whereas in actual toothbrushing, the recommended time is roughly 2-3 min, and the majority of toothpaste is washed out of the oral cavity after brushing [49]. Therefore, the level of exposure to n-HAP in this study was much larger than would be the case during actual use of an n-HAP-containing toothpaste. The very small amount of exposure during actual toothbrushing would therefore further reduce the possibility of n-HAP passing through the mucosa to enter the systemic tissues. For these reasons, it was presumed that n-HAP would not enter the systemic tissues via intact oral epithelium. However, it has been reported that silver nanoparticles penetrate wounded skin more easily than intact skin [50]. This raises the possibility that n-HAP particles may enter systemic tissue via wounded oral mucosa. On the other hand, it has been reported that n-HAP particles administered intravenously at 300 mg/kg or less in rats did not produce side effects [51], and HAP-sol injected intravenously at 26 mg/kg in rats and dogs showed no chronic damage or permanent side effects over two years of experiments [52]. This suggests that even if n-HAP particles did enter systemic tissue via wounded oral mucosa, they might not produce side effects. However, further study is required to determine whether n-HAP particles could enter systemic tissue via the wounded oral mucosa.
Conclusion
This study was a first-step experiment to investigate whether n-HAP used in oral care products is likely to enter the systemic tissues via the oral mucosa. Histological investigation showed that neither of the two different types of n-HAP particles used in our study penetrated the stratum corneum of the 3-D oral epithelial model with stratum corneum that we used, though nanoparticles of both types were observed in the cytoplasm and around the membrane of cells in the outermost layers of the 3-D oral epithelial model without stratum corneum, regardless of their size or concentration. Moreover in no case was the presence of n-HAP detected in the deeper layers of the epithelium in either model. In the actual oral mucosa, there are defense mechanisms at work, such as salivary mucin, the mucus membrane and certain barrier functions of mucosal epithelial cells, which are not present in the 3-D reconstituted tissue models. Furthermore, since the exposure dosage of n-HAP used in this study was much larger than the likely exposure during actual toothbrushing, it was concluded that n-HAP particles are very unlikely to enter the blood stream or systemic tissue via intact oral mucosa.
Potential Factors Influencing the Effects of Anthocyanins on Blood Pressure Regulation in Humans: A Review
Dietary intake of anthocyanins (ACNs) is associated with a reduced risk of cardiovascular and coronary heart disease. While the anti-inflammatory, antioxidant, and lipid-lowering effects of ACN consumption have been consistently reported, their effect(s) on blood pressure regulation is less consistent, and results from human studies are mixed. The objective of this review is to attempt to identify potential patterns that may explain the variability in results related to blood pressure. To do so, we review 66 human intervention trials testing the effects on blood pressure of purified ACN or ACN-rich extracts, or of whole berries, berry juices, powders, purees, and whole phenolic extracts from berries that are rich in ACN and have ACNs as predominant bioactives. Several factors appear to contribute to the mixed results reported: in particular, the baseline characteristics of the population in terms of blood pressure and total flavonoid intake, the dose and duration of the intervention, the differential effects of individual ACNs and their synergistic effects with other phytochemicals, the ACN content and bioavailability from the food matrix, and individual differences in ACN absorption and metabolism related to genotype and microbiota enterotypes.
Introduction
Hypertension is a known risk factor for cardiovascular disease (CVD), which is the leading cause of death worldwide [1]. The role of lifestyle modification, including diet, as a means to reduce CVD risk is attracting increasing interest, but requires a careful evidence-based approach [2]. This is especially true for the management and control of hypertension, which burdens most European and Western countries [3].

Anthocyanins (ACN) are a large class of water-soluble plant metabolites belonging to the flavonoid family of phenolics, responsible for the red, blue, and purple pigmentation of many flowers, fruits, and vegetables. They are the glycosylated forms of highly reactive and relatively unstable molecules characterized by a flavylium cation structure, called anthocyanidins [4]. Although more than 600 naturally occurring ACNs have been identified, the vast majority are glycosylated forms of only six anthocyanidins: pelargonidin, cyanidin, delphinidin, peonidin, petunidin, and malvidin [5]. The average daily intake of ACNs is 200 mg/day, which is about one fifth of the average daily intake of total phenolics [6]. While plants produce ACN as a defense against environmental stressors, such as temperature extremes, UV light, and drought, dietary intake of anthocyanins has been extensively studied for its health-promoting potential, in particular with respect to cardiovascular disease prevention [7].
Strong epidemiological evidence associates ACN intake with a reduced risk for CVD and coronary heart disease (CHD) [8]. While ACN consumption has been consistently associated with anti-inflammatory, antioxidant, and lipid-lowering effects, several mechanisms have been proposed by which ACNs could also affect blood pressure regulation:

1. ACNs have been consistently shown to increase endothelial-derived nitric oxide (NO), via modulation of endothelial NO synthase (eNOS) expression and activity. Nitric oxide is one of the major contributors to endothelium-dependent vasorelaxation. It causes vascular smooth muscle relaxation following activation of soluble guanylate cyclase, which in turn increases cGMP. This blocks the release of intracellular calcium, preventing it from causing vascular smooth muscle contraction [15].
2. Reactive oxygen species damage NO, thus promoting vasoconstriction and hypertension. Due to their strong antioxidant activity, ACNs act to prevent NO oxidative damage and radical-induced NO conversion, such as the reaction caused by NADPH oxidase [16].
3. ACNs have been shown to reduce synthesis of vasoconstricting molecules, such as angiotensin II via inhibition of angiotensin-converting enzyme (ACE) activity, and endothelin-1 and thromboxanes via inhibition of the cyclooxygenase (COX) pathway [17].
While most of these mechanistic observations come from in vitro and animal model studies [18], the practical outcome of ACN dietary consumption on blood pressure regulation in humans is likely complicated by multiple factors.
Epidemiological and Meta-Analysis Data
Epidemiological data appear to confirm the existence of a link between ACNs and blood pressure regulation. A prospective epidemiological study of 34,489 postmenopausal women from the Iowa Women's Health Study, analyzing the effects of total flavonoids or seven individual flavonoid subclasses on cardiovascular health over 12 years, found a significant inverse association between ACN intake and CHD, CVD, and total mortality, while total flavonoids and the other individual subclasses had no significant effect [6]. Analyzing data from a cohort of 156,957 men and women from the Nurses' Health Study I and II and the Health Professionals Follow-Up Study, followed for 14 years, Cassidy et al. found an inverse association between ACN intake and hypertension [19]. Interestingly, this association was observed neither for total flavonoid intake nor for any other subclass of flavonoids (flavones, flavonols, flavan-3-ols, and flavanones), with the exception of two single compounds (apigenin and catechin) [19]. In a cross-sectional study of a cohort of 1898 adult women from the TwinsUK registry, a higher intake of ACNs was associated with significantly lower central systolic blood pressure (SBP) and mean arterial pressure (MAP). Again, the inverse association was not observed for total flavonoid intake, flavanones, flavan-3-ols, flavonols, or flavones [20].
Several meta-analyses have been conducted using data from clinical trials involving sources of ACNs, with mixed results. A meta-analysis of 128 clinical trials on different sources of ACNs and ellagitannins, with a total of 5538 participants, found that both systolic and diastolic blood pressure (DBP) were significantly lowered by consumption of berries, red grapes, and red wine, the main sources of ACNs investigated in the study [12]. The effect remained significant when studies on berries only, or on red wine/red grapes only, were considered [12]. A meta-analysis of 22 clinical trials studying the effects of berries (1251 subjects in total) found a significant reduction in SBP, but not DBP [21]. Per contra, a meta-analysis of 32 clinical trials (1491 total participants) investigating the effects of ACNs and ACN sources on cardiometabolic health found that the reductions in SBP and DBP did not reach statistical significance [22]. Similarly, a meta-analysis of 19 clinical studies found no significant effect of ACN supplementation on either SBP or DBP [23], and a meta-analysis of six clinical studies with 472 total participants likewise found no significant effect on either measure [24]. A meta-analysis of six clinical trials with 204 total participants detected no significant effect of blueberry consumption on blood pressure [25]. The participants of all the above-mentioned meta-analyses represented a mixed population, including men and women of all age groups and from different geographical regions, both healthy and with cardiovascular risk factors.
In light of such mixed results, we searched the literature for clinical trials testing the effects on blood pressure of ACNs or ACN-rich berries, and examined the results individually in an attempt to identify potential patterns explaining the variability in results related to blood pressure.
Literature Search
A search of the literature for human acute and chronic intervention studies was carried out on multiple databases (PubMed, ScienceDirect, and Web of Science) with the keywords (anthocyanin* OR blueberr* OR raspberr* OR bilberr* OR blackberr* OR blackcurrant OR açai OR cherr* OR aronia OR elderberr* OR chokeberr*) AND (blood pressure OR systolic OR diastolic OR MAP OR aldosterone OR angiotensin* OR renin OR nitric oxide OR blood flow). Abstracts and full texts were screened, and reference lists were also searched for related articles: of 135 abstracts and full texts reviewed, 80 studies not reporting both an ACN source and blood pressure markers were excluded, and 11 additional relevant studies identified from the reference lists during screening were added to the review. Further details about the literature search process are provided in Figure 1. Sixty-six relevant studies were identified, and their results related to blood pressure are summarized in Tables 1 and 2.
Single-Dose Interventions
Of the 14 single-dose interventions identified and reviewed, six found a significant effect on blood pressure, as summarized in Table 1.
All single-dose studies were performed on healthy participants, with the exception of the study by Keane et al., which was conducted on 15 subjects with early hypertension: both SBP and mean arterial pressure, but not DBP, were significantly lower at each hour from 1 to 8 hours after a serving of tart cherry juice providing 73.5 mg ACNs [31]. Del Bo et al. investigated whether a serving of blueberry juice providing 300-350 mg ACNs could restore blood pressure after a cigarette was smoked by young, otherwise healthy smokers. In one study, the smoke-induced SBP spike was counteracted by the blueberry juice [28], but the effect was not replicated in a subsequent study [29].
Long-Term Interventions
Fifty-two long-term interventions were identified and are summarized in Table 2. For practical reasons, and for the purpose of standardization, only a few studies used fresh berries; most used more stable, processed forms such as juices, concentrates, or freeze-dried powders. Eleven studies used only phenolic extracts, and 14 studies used isolated ACNs or ACN-rich extracts. Of the 52 reviewed interventions, 21 found a significant effect on blood pressure (Table 2).
Potential Factors Influencing ACN Effects on Blood Pressure
Several factors appear to influence the effects of ACNs on blood pressure regulation in humans, and they are discussed in the following subsections.
Baseline Characteristics of the Population
A consistent observation is that the effect on blood pressure is only detectable in subjects with a high baseline blood pressure. Of the six studies specifically targeting prehypertensive or hypertensive subjects, five found an effect on blood pressure [48,53,58,67,83], and only one did not [52]. In contrast, of the 16 studies enrolling completely healthy subjects with normal blood pressure, only one [68] found an effect on blood pressure, while the remaining 15 did not (Table 2).
After a six-week intervention with highbush blueberries in 25 adults, McAnulty et al. did not find a significant effect on DBP for the whole group, but the reduction became significant when considering only the subset of nine prehypertensive subjects [70]. Similarly, after an eight-week intervention with a mix of berries providing 515 mg ACNs in 72 subjects with CVD risk factors, Erlund et al. found an overall significant reduction in SBP, and observed that the effect was particularly strong in the subset of hypertensive participants [49]. Following 12 weeks of consumption of a mixed berry fruit juice, a group of 134 prehypertensive or hypertensive subjects had a significant reduction in SBP, and the reduction was more pronounced in the subset of participants with higher baseline blood pressure values [83].
Another interesting observation comes from the study of Cook et al. on 13 healthy participants receiving a blackcurrant extract providing 315 mg ACNs for a week [44]. While no effect on blood pressure was observed at rest, a significant reduction in SBP, DBP, and MAP was observed during isometric contraction, suggesting once more that ACNs act to lower blood pressure when it is higher than normal, but do not lower it when it is already in a healthy range [44].
Another potentially relevant factor is the baseline total flavonoid intake of the population under study. It is reasonable to hypothesize that the effect of ACN supplementation may be greater in subjects with low baseline flavonoid intake. Unfortunately, this information is rarely collected or reported in studies, making it difficult to identify a pattern.
Dose Effect
Of the seven studies providing high doses of ACNs (>500 mg/day), four found a significant effect on blood pressure [38,40,49,67], while the other three did not [63,80,81]. In contrast, a significant blood-pressure-lowering effect was detected in three studies providing low doses of ACNs (<100 mg/day) [56,71,79].
It is important to consider, however, that ACN quantification is rather complicated, and only a few studies rely on complete ACN profiles of the food under investigation. More frequently, studies measure cyanidin-3-glycoside equivalents, or rely on a quick but approximate spectrophotometric quantification following methanol extraction. In some cases, they only estimate ACN content based on the USDA food composition tables. To further complicate matters, ACNs are rather unstable compounds, sensitive to pH variations, heat, light, oxygen exposure, and enzyme activity, and highly reactive with other molecules, such as sugars, proteins, and other phenolics [4]. Thus, the food matrix, its chemical composition, the manufacturing process, and storage conditions and duration may all significantly affect ACN content and activity at the time of consumption, even when the initial quantification is accurate [89][90][91]. Indeed, when Rodriguez-Mateos et al. tested both a freeze-dried blueberry drink and a blueberry baked product prepared with the same amount of blueberry powder, the ACN content of the drink was 339 mg, while only 196 mg ACNs were found in the baked product, the rest having been converted to other phenolics [37]. Thus, the reported ACN content from different studies may very easily be under- or overestimated, making it extremely difficult to compare studies.
Only seven studies specifically investigated the effect of different ACN doses, of which four suggest the existence of a dose effect. After 12 days of consumption of a blackcurrant extract providing either 105, 210, or 315 mg/day of ACNs in a crossover design, a significant reduction in mean arterial pressure in a group of 15 athletes was only observed with the two higher ACN doses [43]. After an 8-week intervention with 1.5 or 2.5 g a day of black raspberry powder given to 45 prehypertensive subjects, a significant reduction in SBP was only observed in the group receiving the higher raspberry dose [53]. In a 6-week intervention with 122 older adults, Whyte et al. investigated the effects of either 1 or 2 daily grams of whole wild blueberry powder, or a 200 mg wild blueberry extract, providing 2.7, 5.4, or 14 mg ACNs, respectively. A significant reduction in SBP was only observed with the extract providing the highest ACN dose, but not with the whole berry powder [85].
In contrast, in another 6-week intervention with 66 mostly overweight adults, neither a low- nor a higher-dose blackcurrant juice (providing 40 mg or 143 mg ACNs daily, respectively) had any effect on blood pressure [62]. A 12-week intervention in 134 prehypertensive or hypertensive subjects tested a mixed berry fruit juice providing 43 mg ACNs, or the same juice enriched with blackcurrant press residues providing 210 mg ACNs; both juices significantly lowered SBP, with no dose-dependency [83]. When 23 healthy participants received a single-dose blackcurrant extract drink containing 150, 300, or 600 mg ACNs following a high-carbohydrate meal, no significant changes in blood pressure were observed for any of the doses two hours after the challenge [27].
Another interesting observation comes from the study of Kent et al., who provided a single dose of cherry juice containing 207 mg ACNs to a group of young or older adults and detected a significant reduction in both SBP and DBP two hours after consumption. However, when the same amount of juice was split into three doses provided one hour apart over two hours, the effect on blood pressure was no longer detectable [34].
Thus, a positive effect on blood pressure was found across a wide range of ACN doses, with both dose and timing being relevant factors.
Study Duration
Study duration does not seem to be a consistent factor in determining the effect of ACNs on blood pressure. Of the 34 studies with durations of 6 weeks or longer, 15 found a significant effect on blood pressure (Table 2). Of the 19 studies with durations of 4 weeks or less, seven found an effect (Table 2). Thus, a blood-pressure-lowering effect is found in both longer and shorter studies; indeed, a significant effect can be found even following single-dose interventions (Table 1).
It is interesting to note, however, that after 6 weeks of consumption of aronia juice providing 90 mg/day ACNs, a group of 58 men with mild hypercholesterolemia had a significant reduction in DBP but not in SBP, while after a further 6 weeks of aronia juice consumption the reduction also became significant for SBP, suggesting that the duration of the intervention may indeed be a relevant factor [78].
Systolic vs. Diastolic Blood Pressure
Of the 28 studies registering a significant effect on blood pressure, 14 reported a reduction in SBP but not in DBP (Table 2), and 12 reported a reduction in both SBP and DBP (Table 2). Only one study found an effect on DBP but not on SBP, following 16 weeks of consumption of cold-pressed aronia juice and oven-dried aronia powder, providing 1024 mg/day of ACNs, by 37 subjects with mild hypertension [67].
Thus, the blood-pressure-lowering effect appears to be more evident on SBP than DBP.
Effect on Angiotensin-Converting Enzyme (ACE)
Long-term hormonal regulation of blood pressure, mainly via the renin-angiotensin-aldosterone and antidiuretic hormone (ADH) systems, is a potential target of ACN activity, as suggested by investigations in animal models [92]. In humans, two studies have measured the effects of ACN consumption on ACE.
A group of 23 subjects with untreated metabolic syndrome (MetS) received either aronia extract supplements providing 60 mg ACNs or ACE inhibitors. After 8 weeks, SBP, DBP, and ACE activity were significantly lower compared to baseline. However, ACE activity was still higher compared to a reference group of healthy controls or MetS controls treated with ACE inhibitors (the reference group was measured only once and did not undergo any intervention) [79]. Conversely, following 3 weeks of consumption of 250 g of fresh blueberries, a group of 20 overweight smokers experienced no effect on blood pressure or ACE activity [36].
More studies on how ACN intake may affect the hormonal systems regulating blood pressure are warranted, and variability in biological responses should also be considered in view of recent findings on the M235T polymorphism of the angiotensinogen gene, which has been linked with cardiovascular disease [93].
Synergistic Effects
Many of the studies that found significant effects on blood pressure used ACN-rich foods or whole extracts, which also contain other bioactive phytochemicals potentially affecting blood pressure; the exact composition of such extracts is often unknown. Of the 14 studies using isolated ACNs or ACN-rich extracts, only one found a significant effect on blood pressure, while the remaining 13 did not. In contrast, of the 11 studies using whole phenolic extracts, eight found a significant effect, while only three did not (see Table 2).
It is also interesting to note that while no effect on blood pressure was observed when whole grapes were given for 4 weeks to a group of 60 mildly hypertensive participants, a significant reduction in both SBP and DBP was detected with a grape wine extract of comparable total phenolic content [48]. Thus, berry phenolics, but not isolated ACNs, appear to be effective on blood pressure, suggesting that the effect is synergistic with other molecules. Of course, it is also possible that the effect is entirely exerted by other phenolic compounds, or other phytonutrients, independently of ACNs, although as far as we know ACNs are the only molecules that are both abundant and consistently present across all the different berries examined in this review and for which positive effects on blood pressure have been detected.
Furthermore, it should be noted that most of the studies testing isolated ACNs used the same commercially available supplement, made with ACNs isolated from bilberry and blackcurrant.
A synergistic effect with lifestyle and diet is also likely a relevant factor. Gurrola Diaz et al. tested the effect of 4 weeks of consumption of a Hibiscus sabdariffa (HS) extract powder providing 19 mg ACNs in a group of 71 healthy individuals and 51 MetS patients, using a preventive diet as control. No significant effect on blood pressure was found with HS alone or with diet alone. However, in the group receiving both the HS powder and the preventive diet, a significant reduction in both SBP and DBP was observed [50].
Differential Effect of Individual ACNs
Most interventions use fresh berries or berry extracts, which contain a mix of different compounds, including different ACNs. This is undoubtedly the best approach for extrapolating results to real life, in which whole foods are consumed rather than isolated compounds, and the consumption of whole foods rather than supplements should be encouraged to promote health and prevent disease. However, this approach makes it more difficult to identify potential differential effects among molecules, especially when trying to elucidate their mechanisms of action.
Indeed, when studying single anthocyanins, Rechner and Kroner observed that the inhibitory effect on the redox-sensitive p38 MAPK and c-Jun N-terminal kinase pathways often reported for ACNs was only caused by delphinidin and cyanidin, not malvidin and peonidin, suggesting that the hydroxyl residue in position 3 of the B ring may play a key functional role [94]. Thus, it is not unreasonable to hypothesize that the variability in results between studies is also at least partly due to some ACNs having a stronger effect on blood pressure than others.
The different ACN profiles of individual food sources are also a source of variability. For example, blueberries contain predominantly delphinidin, malvidin, and petunidin; raspberries predominantly cyanidin and pelargonidin; and blackberries predominantly cyanidin and malvidin [14]. It must be noted, however, that significant effects on blood pressure have been observed with all the different berries, including chokeberries, blueberries, raspberries, cherries, and blackcurrants (Tables 1 and 2).
ACN Absorption and Metabolism
ACN metabolism is complex and largely unknown, and in order to fully understand their biological functions it is necessary to better elucidate their metabolic fate [90]. While most in vitro studies focus on isolated ACNs, it is important to remember that less than 1% of total dietary ACNs are absorbed intact. A higher proportion of dietary ACNs is absorbed after hydrolysis and partial degradation to other phenolic compounds. Part of the unabsorbed ACNs is also fermented by the colonic microbiota, and their catabolic products are subsequently absorbed into the bloodstream [95]. Furthermore, ACNs are quite unstable at neutral pH, and after absorption, ACN parent compounds, degradation products, and microbial metabolites all undergo significant metabolism by both phase I and phase II enzymes to form methyl, glucuronide, and sulfate conjugated metabolites [12].
Thus, it is not enough to study the biological effects of the parent ACNs; the effects of their numerous catabolic products, which depend on each individual's microbiota and the physiology and health of the gastrointestinal tract, must also be studied. This likely explains most of the discrepancies between the mechanisms of action suggested in vitro and the actual in vivo outcomes.
Interaction with Gut Microbiota
Increasing evidence links the composition of the gut microbiota to key physiological effects related to the prevention of chronic disease, including the contribution of gut bacteria to blood pressure regulation [96,97]. The relationship between ACNs and gut bacteria goes both ways: on one hand, ACN intake influences the composition of the gut microbiome; on the other hand, colonic fermentation transforms unabsorbed ACNs into different catabolic products that can be absorbed and act as bioactives [98]. Microbial catabolism of ACNs consists mainly of the cleavage of the heterocyclic flavylium ring (the C-ring), followed by dehydroxylation or decarboxylation to form phenolic acids [97,99].
Thus, interaction with the gut microbiome is likely an important element of variability that could explain some of the different effects observed for ACN intake. To our knowledge, however, no study has directly investigated the relationship between the gut microbiota and the effects of ACN intake on blood pressure regulation.
Conclusions
A substantial number of studies have documented a significant blood-pressure-lowering activity of ACNs and ACN-rich berry consumption, suggesting that an effect does indeed exist.
The fact that many other studies failed to observe such an effect indicates that the outcome is not generalized and likely depends on many other factors: in particular, the baseline characteristics of the population (more specifically, baseline blood pressure and total flavonoid intake), the ACN dose, the duration of the intervention, the differential effects of individual ACNs, and synergistic effects with other phenolics and bioactive phytochemicals in general. Additionally, ACN content and bioavailability from the food matrix (whole food, juice, freeze-dried powder, or extract), modified by the manufacturing process and by storage conditions and duration, need to be taken into account. Finally, ACN absorption and metabolism, which are affected by each individual's microbiota enterotype, genotype, and the physiological condition of the gastrointestinal tract and its response to dietary bioactives, are also factors to be considered.
Further research will need to identify more precisely the clinical conditions and the characteristics of the individuals for whom an increased consumption of ACN-rich foods may be especially recommended and could potentially reduce the dose and/or administration of antihypertensive medications.
|
v3-fos-license
|
2020-03-12T10:30:55.521Z
|
2020-03-01T00:00:00.000
|
212693900
|
{
"extfieldsofstudy": [
"Computer Science",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1424-8220/20/5/1521/pdf",
"pdf_hash": "14d44ed68454f66604383e5a349a92099238504e",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41965",
"s2fieldsofstudy": [
"Medicine",
"Computer Science"
],
"sha1": "06fdebd308546b0479437bdf695e86ec96ecdd3e",
"year": 2020
}
|
pes2o/s2orc
|
An Efficient Certificateless Aggregate Signature Scheme for Blockchain-Based Medical Cyber Physical Systems
Different from the traditional healthcare field, Medical Cyber Physical Systems (MCPS) rely more on wireless wearable devices and medical applications to provide better medical services. The secure storage and sharing of medical data face great challenges. Blockchain technology, with its decentralization, security, credibility, and tamper resistance, is an effective way to solve this problem. However, capacity limitation is one of the main factors affecting blockchain performance, and certificateless aggregate signature schemes can greatly ease the difficulty of blockchain expansion. In this paper, we describe a two-layer system model in which medical records are stored off-blockchain and shared on-blockchain. Furthermore, a multi-trapdoor hash function is proposed, and based on it we present a certificateless aggregate signature scheme for blockchain-based MCPS. The purpose is to realize the authentication of the related medical staff, medical equipment, and medical apps, to ensure the integrity of medical records, and to support the secure storage and sharing of medical information. The proposed scheme is highly computationally efficient because it uses neither bilinear maps nor exponential operations. Many certificateless aggregate signature schemes without bilinear maps have been proposed for the Internet of Things (IoT) in recent years, but they are not applied to the medical field and do not consider the security requirements of medical data. The scheme proposed in this paper offers high computing and storage efficiency while meeting the security requirements of MCPS.
Introduction
In the big data era, with the development of the Internet of Things, smart healthcare provides people with more convenient and higher-quality healthcare services [1]. The Medical Cyber Physical System (MCPS) [2] is a special type of Cyber Physical System (CPS) for the smart healthcare field, which consists of physical space and cyber space. Physical space includes wearable devices, medical diagnostic equipment, and the user space consisting of doctors, nurses, etc. Cyber space is the nerve center of MCPS: it receives sensing information from physical space through a network transmission system; identifies, stores, analyzes, and processes it to generate feedback control information; and finally sends the control information back to physical space through the network transmission system. MCPS continuously collects the patient's physical signs through various wearable and medical devices, so that the patient's physical condition can be better monitored [3]. In order to provide patients with more accurate and timely diagnoses, different medical institutions need to share the large amounts of physical data collected by sensors and healthcare staff [4], while at the same time protecting patient privacy. Thus, blockchain is needed, utilizing peer-to-peer networking and cryptographic technology to achieve tamper-proof, unforgeable, non-repudiable, and verifiable medical records. The combination of MCPS and blockchain [5] promotes the sharing of medical services and resources [6]. However, the block capacity limit is one of the main factors that affects the performance improvement of blockchain.
MCPS controls the embedded medical equipment through a wireless network, which senses and monitors the patient's physical data in real time. When the patient has an abnormal situation, the medical equipment sends the early warning information to the medical institution in time. Once MCPS is under cyberattacks, such as data inconsistency, unauthorized access, and data breaches [7], patients' lives and health will be seriously threatened. In practice, medical institutions need to check the accuracy and integrity of shared and sensed medical data before making medical diagnoses. The medical data, which is collected from wearable devices, medical equipment, medical apps, and healthcare staff needs the responsible healthcare provider to sign on it. A large number of signatures and verifications result in high time and space overheads. At the same time, considering the capacity limitation of the blockchain, the certificateless aggregate signature is an effective method because of its compression characteristics. In recent years, some certificateless aggregate signature schemes [8][9][10] have been proposed. However, the performance of these schemes is not ideal because they use more time-consuming bilinear maps. At the same security level, the Elliptic Curve Cryptography (ECC) is more efficient than bilinear maps [11]. Therefore, with the characteristics of low computation, low storage, high reliability, privacy protection, and timeliness, the certificateless aggregate signature scheme based on ECC is suitable for blockchain-based MCPS.
The contributions of this paper are as follows:
• A two-layer storage model in which medical data is stored off-blockchain and shared on-blockchain is proposed. The model meets the security and privacy requirements of MCPS.
• Based on ECC, we present a multi-trapdoor hash function that is secure and efficient for constructing the certificateless aggregate signature scheme.
• A certificateless aggregate signature scheme based on the multi-trapdoor hash function is proposed. It reduces the computation cost of wearable medical devices and miners.
The rest of this paper is organized as follows. Related works are discussed in Section 2. The necessary preliminaries are presented in Section 3. Section 4 presents a multi-trapdoor hash function. In Section 5, we describe the certificateless aggregate signature scheme. A security discussion of the proposed scheme is given in Section 6. Then, we make an efficiency analysis in Section 7. Finally, the conclusion is offered in Section 8.
Blockchain
Blockchain is a decentralized, anonymous, trustless, tamper-proof, and traceable distributed data storage technology [5]. With the development of the medical industry, health data is growing exponentially, and how to effectively store, share, and manage medical data involving large amounts of private patient information has become an obstacle to the development of the healthcare industry. Due to the characteristics of blockchain [12], such as non-tamperability, traceability, and multi-private-key authorization management, it is possible to share medical data securely among different institutions [13].
According to the difference of open objects, blockchain can be divided into Public Blockchain, Private Blockchain, and Consortium Blockchain. These three types of blockchains are compared in Table 1.
In the special field of MCPS, medical data contains both a large amount of private information and has the need to be shared between different institutions, therefore the Consortium Blockchain is more suitable for the secure storage and sharing of medical data. Xue et al. [14] divided the existing medical institutions into medical institution federate servers (MIPS) and audit federate servers (AFS) according to their credit scores. Through the improved consensus mechanism, the medical data sharing model based on blockchain was realized. In the untrusted environment, Xia et al. [15] designed a sensitive medical data sharing model between cloud service providers based on blockchain through a smart contract and access control mechanism. The security requirements of medical records on integrity, confidentiality, and traceability can be realized by digital signature technology in the blockchain-based medical data sharing system.
In recent years, researchers have conducted in-depth research around blockchain-based multi-signatures [16], aggregate signatures [17,18], ring signatures [19], and homomorphic signatures [20]. Among them, aggregate signatures are favored for their advantages, such as fast computing speed, small storage space, and bandwidth saving. Moreover, some scholars have carried out in-depth research on the combination of quantum computing and the security of blockchain [21]. Gao et al. [21] proposed a lattice-based signature scheme and presented a cryptocurrency scheme based on post-quantum blockchain, which could resist quantum computing attacks.
Certificateless Aggregate Signature
In order to solve the management problems of certificate distribution and storage in the traditional PKI-based (Public Key Infrastructure) public key cryptosystem, Shamir proposed the identity-based public key cryptosystem (ID-PKC) in 1984 [22]. In ID-PKC, the public key is denoted by user information, such as mailbox, address, telephone number, etc. The private key is provided by the key generation center (KGC), a third-party trusted organization. Different from traditional public key cryptosystems, users cannot generate their own private key. For KGC, the user's private key is known, and KGC can decrypt ciphertext and forge identity at will. Therefore, ID-PKC has the defect of key escrow [23], which is only applicable to the environment with low security requirements.
To solve this problem, Al-Riyami and Paterson proposed the notion of certificateless public key cryptography (CL-PKC) in 2003 [24]. Unlike ID-PKC, the private key in CL-PKC consists of a partial private key generated by KGC and the secret value selected by the user. KGC only knows partial private key but cannot get the secret key. It can effectively solve the key escrow problem [25]. Moreover, the public key in CL-PKC does not need certificate verification, so the problem of public key authentication is solved. CL-PKC has neither the certificate management problem nor the key escrow problem. Its calculation efficiency is higher than traditional public key cryptosystems, and its security is higher than ID-PKC. Therefore, it is suitable for application scenarios with higher requirements for computing, storage efficiency, and security.
Boneh et al. first proposed the concept of the aggregate signature [26] at EUROCRYPT 2003, which greatly promoted the development of digital signature cryptography. An aggregate signature [26] compresses many signatures, generated by many different users on many different messages, into one short signature, and simplifies the verification of multiple signatures into a single verification. Aggregate signatures therefore greatly improve storage efficiency and reduce verification time.
In recent years, certificateless aggregate signatures (CLAS) have attracted many scholars' research interest because they combine the advantages of a certificateless public key cryptosystem and aggregate signatures. Based on different theoretical foundations, scholars have proposed corresponding certificateless aggregate signature schemes; most researchers proposed certificateless aggregate signature schemes based on bilinear maps [8][9][10]. For the first time, Gong et al. [9] proposed two certificateless identity-based aggregate signature schemes (denoted as CAS-1 and CAS-2 in [9]). The aggregation verification of CAS-1 used 2n + 1 pairing operations on an elliptic curve, and CAS-2 used n + 2 pairing operations and n scalar point multiplication operations, so the verification efficiency was very low. Xiong et al. designed a more efficient certificateless aggregate signature scheme [8], whose verification used only three pairing operations and 2n scalar multiplication operations; its efficiency was not related to the number of signers, and it did not require a synchronized clock, so this scheme was more efficient than Gong's scheme [9]. However, He et al. [27] and Zhang [10] later showed that this scheme is insecure. Zhou et al. proposed two certificateless aggregate signature schemes without bilinear maps (CLAS-1 and CLAS-2) [28]. Based on the Elliptic Curve Discrete Logarithm Problem (ECDLP), both schemes used 2n + 1 scalar multiplication operations; the difference is that CLAS-2 provides a shorter, constant-level signature length than CLAS-1. Cui et al. [29] proposed a certificateless aggregate signature scheme based on ECC and applied it to vehicular ad hoc network (VANET) communication; the verification of this scheme used n scalar multiplications. Since the computational overhead of bilinear pairings is significantly higher than that of scalar multiplication on ECC [11], Zhou's scheme and Cui's scheme had higher computational efficiency.
In recent years, with the development of blockchain technology, more and more scholars have focused on the research of the aggregation signature algorithm based on blockchain [17,18,30]. Gao et al. [18] designed a fair and efficient multi-party contract signing scheme based on blockchain by conducting a certificateless aggregation verifiable encryption signature scheme. Wang et al. [30] realized the full anonymous blockchain by homomorphic encryption, and aggregate signature technology, which effectively protected the privacy of the user's identity and the transaction amount. Neither of these schemes [18,30] is computationally efficient because they both used bilinear maps. Based on the gamma signature proposed by Yao et al. [31], Zhao [17] constructed an aggregate signature scheme without bilinear maps. By applying Zhao's scheme [17] to Bitcoin, it could be found that both computation and storage overhead have decreased to some extent, however the length of this aggregate signature scheme increased with the number of signers. Due to their low computing or communication efficiency, these schemes [17,18,30] were not suitable for wearable medical devices with limited computing and storage resources. On the other hand, these schemes [17,18,30] did not focus on the security requirements of MCPS, such as timeliness and privacy protection.
Some scholars have focused on digital signatures in blockchain-based Internet of Things (IoT) applications [32,33]. In order to reduce the time cost of transmitting authentication information from blockchain nodes to IoT devices, Danzi et al. [32] proposed a repeat-authenticate scheme in which blockchain information, consisting of a copy of the block header and the signatures of blockchain nodes, is multicast. Kaga et al. [33] proposed a biometrics-based fuzzy signature scheme and applied it to an IoT blockchain system; this scheme achieved verification of the creator of a transaction. These two schemes paid more attention to the authentication of transaction creators or blocks in IoT scenarios; they did not focus on the effective storage of a large number of digital signatures or on the privacy protection of medical data in the MCPS scenario. When a patient goes to the hospital, a great number of medical records are generated, and the digital signatures of these records would occupy a large amount of block space, seriously affecting blockchain performance. At the same time, medical data involves personal privacy, and it is necessary to protect this private data.
The blockchain-based schemes mentioned above are compared in Table 2. From Table 2, we can conclude that none of these solutions [17,18,30,32,33] provide both high computing and communication efficiency. Furthermore, nowadays, certificateless aggregate signatures based on blockchain have not been widely used in MCPS. In this paper, we combine ECC and the multi-trapdoor hash function to propose a certificateless aggregate signature scheme and apply it to secure storage and sharing of MCPS. The proposed scheme provides high computing efficiency and low space occupation, which is suitable for blockchain-based MCPS scenario with limited blockchain capacity and low computing power wearable devices.
Elliptic Curve Discrete Logarithm
Let p, q be two large prime numbers, F_p be a finite field determined by p, and E(F_p) be an elliptic curve over F_p defined by the equation y² = x³ + ax + b mod p, where a, b ∈ F_p and 4a³ + 27b² ≠ 0. If the additive group G consists of the infinity point O and all points on E(F_p), and P is a generator of group G with order q, we have the following definition: given two points P and Q = xP in G for an unknown x ∈ Z*_q, the Elliptic Curve Discrete Logarithm Problem (ECDLP) is to determine x, which is computationally infeasible for properly chosen parameters.
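To make the group operation concrete, here is a minimal sketch of point addition and double-and-add scalar multiplication on a toy curve; the parameters p, a, b and the generator are illustrative values chosen for this sketch, not taken from the scheme. Computing Q = kP is fast, while recovering k from (P, Q) is exactly the ECDLP.

```python
# Minimal sketch of elliptic-curve group arithmetic over F_p for the toy
# curve y^2 = x^3 + 2x + 3 mod 97. All parameters are illustrative only.
p, a = 97, 2
O = None                    # point at infinity (group identity)

def add(P1, P2):
    """Add two points on the curve (affine coordinates)."""
    if P1 is O: return P2
    if P2 is O: return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                                           # P + (-P) = O
    if P1 == P2:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P):
    """Double-and-add scalar multiplication k*P (the easy direction)."""
    R = O
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

P = (3, 6)                  # a point on the toy curve (assumed generator)
Q = mul(20, P)              # computing Q = kP is fast ...
# ... while recovering k from (P, Q) is the ECDLP, infeasible for large q.
```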
Trapdoor Hash Function
The trapdoor hash function is also called the chameleon hash function [35]. Different from general hash functions, it has a hash/trapdoor key pair (HaK, TrK): the hash key (HaK) is public, while the trapdoor key (TrK) is private. The trapdoor hash function uses this special information to generate a fixed hash value, and its collision resistance depends on the user's knowledge of the trapdoor information (TrK) [36]. That is, without knowing the trapdoor key TrK, the trapdoor hash function is collision resistant; however, when the hash/trapdoor key pair is known, trapdoor collisions can be computed [37]. This property makes trapdoor hash functions suitable for constructing various digital signature schemes [36][37][38][39].
The trapdoor hash function consists of the following four algorithms [37]:
• ParG: Inputs the security parameter k, outputs the system parameter params;
• KeyG: Inputs params, outputs a hash/trapdoor key pair (HaK, TrK);
• HashG: Inputs params, HaK, a message m, and an auxiliary parameter u, outputs the trapdoor hash value TH_HaK(m, u);
• A collision-finding algorithm: Inputs params, the key pair (HaK, TrK), a pair (m, u), and a new message m′, outputs a collision parameter u′ such that TH_HaK(m, u) = TH_HaK(m′, u′).

According to the number of trapdoor keys (TrK), trapdoor hash functions include the single trapdoor hash function [35], the double trapdoor hash function [39], and the multi-trapdoor hash function [37,38]. A double trapdoor hash function usually has two pairs of hash/trapdoor keys, named the long-term hash/trapdoor key and the temporary hash/trapdoor key; it protects the long-term trapdoor key from being leaked by sacrificing the temporary trapdoor key. The multi-trapdoor hash function has multiple hash/trapdoor keys and combines the collisions generated by multiple entities into a single collision. As a result, the multi-trapdoor hash function has the advantage of computing efficiency as well as storage space and bandwidth savings. In this paper, we build a certificateless aggregate signature scheme based on the multi-trapdoor hash function, with which a blockchain-based MCPS data storage and sharing model is proposed.
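To illustrate how the hash/trapdoor key pair behaves, the following minimal sketch implements a classic single-trapdoor (chameleon) hash in a toy Schnorr group, assuming the textbook instantiation TH(m, r) = g^W(m) · y^r; the group parameters and keys are illustrative and are not the construction used later in this paper.

```python
# Toy single-trapdoor (chameleon) hash: TH(m, r) = g^W(m) * y^r mod p,
# with trapdoor key x (TrK) and hash key y = g^x (HaK). Illustrative only.
import hashlib

q, p = 83, 167               # toy primes with p = 2q + 1 (assumed)
g = 4                        # generator of the order-q subgroup of Z_p*

def W(msg: bytes) -> int:
    """Hash a message into Z_q."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % q

x = 29                       # TrK, chosen randomly in practice
y = pow(g, x, p)             # HaK, published

def trapdoor_hash(m: bytes, r: int) -> int:
    """HashG: TH(m, r) = g^W(m) * y^r mod p."""
    return pow(g, W(m), p) * pow(y, r, p) % p

# Collision finding: with x, solve W(m1) + x*r1 = W(m2) + x*r2 (mod q),
# i.e. r2 = r1 + (W(m1) - W(m2)) / x mod q.
m1, r1 = b"blood pressure record v1", 17
m2 = b"blood pressure record v2"
r2 = (r1 + (W(m1) - W(m2)) * pow(x, -1, q)) % q

assert trapdoor_hash(m1, r1) == trapdoor_hash(m2, r2)   # same hash value
# Without x, finding such a collision is as hard as the discrete logarithm.
```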
Definition of Certificateless Aggregate Signature
A certificateless aggregate signature consists of the following six algorithms [40]:
• Setup: Inputs the security parameter k; KGC outputs the system public parameter K_pub and the system master key λ.
• Partial-Private-Key-Gen: Inputs k, K_pub, λ, and the user's identity ID_i; KGC outputs the partial private key θ_i and sends it to the user ID_i through a secure channel.
• User-Key-Gen: Inputs k; the user ID_i outputs the secret/public key pair (α_i, X_i).
• Sign: Inputs a message s_i, the identity ID_i, and the corresponding keys; the user outputs an individual signature σ_i.
• Aggregate: Inputs n individual signatures σ_1, ..., σ_n; outputs the aggregate signature σ.
• Aggregate-Verify: Inputs the aggregate signature σ together with the n messages and identities; if the verification is correct, the verifier outputs 1, otherwise the verifier outputs 0.
An interface-level sketch of these six algorithms is given below.
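The sketch restates the six algorithms as Python stubs; all type names and signatures are hypothetical placeholders rather than the paper's concrete construction.

```python
# Interface sketch of the six CLAS algorithms; the types below are
# illustrative placeholders, not the paper's concrete data formats.
from typing import Protocol, Sequence, Tuple

Params, MasterKey, Sig, AggSig = bytes, int, tuple, tuple  # placeholder types

class CLAS(Protocol):
    def setup(self, k: int) -> Tuple[Params, MasterKey]:
        """KGC outputs public parameter K_pub and master key lambda."""
    def partial_private_key_gen(self, params: Params, mk: MasterKey,
                                identity: str) -> int:
        """KGC derives theta_i for ID_i; sent over a secure channel."""
    def user_key_gen(self, k: int) -> Tuple[int, tuple]:
        """User outputs secret/public key pair (alpha_i, X_i)."""
    def sign(self, params: Params, identity: str, keys: tuple,
             message: bytes) -> Sig:
        """User produces an individual signature sigma_i on message s_i."""
    def aggregate(self, sigs: Sequence[Sig]) -> AggSig:
        """Compresses n individual signatures into one short signature."""
    def aggregate_verify(self, params: Params, identities: Sequence[str],
                         messages: Sequence[bytes], agg: AggSig) -> bool:
        """Outputs 1 (True) iff the aggregate signature verifies."""
```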
Security Models of Certificateless Aggregate Signature
According to their different capabilities, two types of adversaries are considered in certificateless aggregate signature schemes [9], and such schemes should be existentially unforgeable against both adversaries, A_I and A_II.
A Type I adversary A_I cannot obtain the system master key, but it can replace the public keys of legitimate users; A_I typically models a malicious outside user.
A Type II adversary A_II can obtain the system master key, but it cannot replace the public keys of legitimate users; A_II is usually regarded as a malicious KGC or malicious inside signers colluding with it.
For these two types of adversaries, we define the following games.
(1) Game I:
Setup: Challenger Z inputs the security parameter k, generates the system parameter pars and system master key λ, sends pars to adversary A_I, and keeps λ secret.
Query: A_I adaptively performs the following oracle queries:
• Hash queries: A_I queries any hash value used in the scheme, and challenger Z returns the corresponding value.
• Partial-Key-Gen query: When A_I makes a partial private key query on the user ID_i, Z runs the partial private key generation algorithm to generate the corresponding partial private key θ_i and returns it to A_I.
• Secret-Key-Gen query: When A_I makes a secret key query on the user ID_i, Z runs the secret key generation algorithm to generate the corresponding secret key α_i and returns it to A_I.
• Public-Key-Gen query: When A_I makes a public key query on the user ID_i, Z runs the public key generation algorithm to generate the corresponding public key (X_i, V_i) and returns it to A_I.
• Public-Key-Replacement query: When A_I queries user ID_i for public key replacement, Z replaces the corresponding public key of user ID_i with a randomly selected PK*_{DAU_i} = (X*_i, V*_i) and saves it.
• Signature queries: On input of a message s_i, a user ID_i with the corresponding private key (α_i, θ_i), and status information Ω_i, Z runs the signature algorithm to generate the corresponding signature σ_i and returns it to A_I.
Forge: After the above polynomially bounded queries, A_I outputs a forged aggregate signature σ* = (ω*, D*). The adversary wins the game if and only if:
• The forged signature σ* is a valid aggregate signature.
• A_I has not queried the partial private key of at least one of the n users.
(2) Game II:
Setup: Challenger Z inputs the security parameter k, generates the system parameter pars and system master key λ, and sends both pars and λ to adversary A_II.
Query: In this stage, adversary A_II adaptively performs polynomially bounded oracle queries similar to those in Game I, except that A_II performs neither the public key replacement query nor the partial private key query.
Forge: A_II outputs a forged aggregate signature σ* = (ω*, D*). The adversary A_II wins the game if and only if σ* is a valid aggregate signature and A_II has not queried the secret value of at least one of the n users.
System Model
In this paper, a two-layer system model is used to describe the secure storage and sharing of medical records in MCPS. As shown in Figure 1, the off-blockchain layer completes the acquisition, aggregation, and storage of medical data. In our proposed system model, every doctor, nurse, medical device, and medical app has a pseudonym, a partial private key, a secret value, and a public key. Pseudonyms are distributed by the Registry Center, and partial private keys are allocated by the KGC. Doctors, nurses, medical equipment, and medical apps are denoted as data acquisition units (DAUs). The medical record of a patient consists of several medical record items (MRIs), and each MRI is signed by the DAU responsible for it. A patient's diagnosis and treatment process corresponds to one Central Hospital; when a patient visits different Central Hospitals, each visit corresponds to a different treatment process.
Each DAU encrypts the collected MRIs with the public key of the Central Hospital and calculates the hash value of the MRIs it is responsible for as a digital digest. The DAU's private key is used to sign the digest individually. Then, the encrypted MRIs, digest information, and individual signatures are sent to the Central Hospital, which verifies the correctness of each individual signature. If a signature is correct, the encrypted original medical data is stored in the Medical Cloud. Finally, the Central Hospital combines the individual signatures into an aggregate signature and sends the digest, aggregate signature, access control information, and location index of the original MRIs to the Medical Blockchain.
In the above model, one block contains multiple transactions, and one transaction relates to all medical records of one medical treatment process of a patient. By using blockchain to store the digest and aggregate signature, the unforgeability of DAU's service and the integrity of medical data can be guaranteed. Meanwhile, the block capacity limitation can be greatly eased. On the other hand, the encrypted original medical data is stored in the cloud, which is retrieved through the data location index on the blockchain. The access rights of entities are managed through the access control on the blockchain. Therefore, the secure storage and sharing of medical data in MCPS is realized. The on-blockchain layer completes the sharing of medical data. Figure 2 shows that each transaction of the Medical Chain contains a digest of the P i 's MRIs, an aggregate signature, access control, and a specific location index of the original medical data stored in the Medical Cloud. Each block contains a hash value linked to the previous block. This hash value can be used to retrieve the block. The Medical Chain uses time stamps to ensure that the blocks are linked in time. The latest generated blocks are broadcast to the entire network. The nodes receiving the information verify the correctness according to the consensus algorithm. If it is correct, they pass the information to other nodes. After most nodes verify the correctness, the miner adds the block into the main chain to form the permanent storage and sharing of medical records. The patient is the owner of medical data, who grants an entity (doctor, institution, researcher, etc.) access to original medical records through access control protocol. When an entity gains access, they look up on the Medical Chain, obtains the position index of medical data in cloud, then they can access the original medical records.
In the above model, one block contains multiple transactions, and one transaction relates to all medical records of one medical treatment process of a patient. By using blockchain to store the digest and aggregate signature, the unforgeability of DAU's service and the integrity of medical data can be guaranteed. Meanwhile, the block capacity limitation can be greatly eased. On the other hand, the encrypted original medical data is stored in the cloud, which is retrieved through the data location index on the blockchain. The access rights of entities are managed through the access control on the blockchain. Therefore, the secure storage and sharing of medical data in MCPS is realized.
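To make this transaction layout concrete, the following sketch models a Medical Chain block and transaction; every field name is a hypothetical illustration of the digest, aggregate signature, access control, and location index structure described above.

```python
# Sketch of the on-chain records described above; field names are
# illustrative, not a normative format from the paper.
import hashlib, json, time
from dataclasses import dataclass, field, asdict
from typing import List

@dataclass
class MedicalTx:
    patient_pid: str           # patient pseudonym PID_Pj
    mri_digest: str            # hash digest of the patient's MRIs
    aggregate_sig: str         # one aggregate signature covering all MRIs
    access_control: List[str]  # pseudonyms of entities granted access
    cloud_index: str           # location index of the ciphertext in the cloud

@dataclass
class Block:
    prev_hash: str             # link to the previous block
    timestamp: float           # time stamp ordering the chain
    txs: List[MedicalTx] = field(default_factory=list)

    def header_hash(self) -> str:
        payload = json.dumps([asdict(t) for t in self.txs], sort_keys=True)
        return hashlib.sha256(
            (self.prev_hash + str(self.timestamp) + payload).encode()
        ).hexdigest()

# One block holds many transactions; one transaction covers one treatment
# process of one patient, while the bulky ciphertexts stay in the cloud.
tx = MedicalTx("PID_P1", "9f...ab", "agg-sig-hex", ["PID_DAU3"], "cloud://case/42")
genesis = Block(prev_hash="0" * 64, timestamp=time.time(), txs=[tx])
print(genesis.header_hash())
```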
Security Requirements
The following security requirements are important for medical data in MCPS:
• Non-repudiation: Medical data is the record of the treatment process and has the function of legal evidence. Any modification of a medical record should be non-repudiable;
• Integrity: As an important record of the patient's treatment, medical data should be guaranteed to be accurate, meaning it cannot be tampered with by anyone in any way; in other words, any data tampering can be detected;
• Privacy: Medical data involves the patient's personal privacy and should be kept confidential. It must not be disclosed at will; only authorized users can access it;
• Traceability: When medical disputes occur between doctors and patients, medical data should be traceable as legal evidence;
• Timeliness: The time factor is one of the key points in the whole treatment process. It is necessary to make an effective time judgment on each sensitive link in the treatment process, so as to ensure the authenticity and validity of medical data.
Among these security requirements, tamper-proofing, data integrity, and privacy protection are crucial issues in MCPS [4]. It is necessary to use relevant technical means, such as identity authentication, blockchain technology, and digital signatures, to achieve the secure storage and sharing of medical information.
System Framework
The certificateless aggregate signature scheme based on the trapdoor hash function proposed in this paper consists of the following algorithms:
• Setup: This algorithm is completed by KGC. Inputs the security parameter k, outputs the master key λ and the system parameter pars.
• Pseudonym-Gen: This algorithm generates pseudonyms for each entity via the Registry Center. Inputs the real identity of each DAU_i or patient P_j (denoted as RID_DAU_i and RID_P_j), outputs its pseudonym PID_DAU_i or PID_P_j.
• DAU i Key-Gen: DAU i generates its secret value-public key pair (α i , X i ) and sends X i to KGC through the secure channel. After receiving DAU i 's pseudonym PID DAU i , the system parameters pars, the public key X i and the master key λ, KGC outputs DAU i 's partial private key θ i . The public key (long-term hash key) of DAU i is X i , the long-term trapdoor key is α i , and the private key is θ i .
• Hash-Gen: In this algorithm, the trapdoor hash value of DAU i is generated. It takes the system parameters pars, the original message s i , DAU i 's hash key X i and the auxiliary parameter u i as inputs, and outputs DAU i 's trapdoor hash value TH X i (s i , u i ).
The Proposed Multi-Trapdoor Hash Function
The proposed multi-trapdoor hash function based on ECC is presented in this section.
• ParG: Given the security parameter k, KGC selects large prime numbers p, q and an elliptic curve over the finite field F p : y 2 = x 3 + ax + b mod p, a, b ∈ F p . Given that G is a cyclic subgroup of E(F p ) and P is a q-order generator of G, KGC takes a secure hash function W : G → Z * q . KGC outputs the system parameters pars = (G, P, q, W).
• KeyG: Each DAU i randomly selects a trapdoor key α i ∈ Z * q and computes the hash key X i = α i P.
• HashG: Each DAU i randomly selects an auxiliary parameter u i and computes the trapdoor hash value TH X i (s i , u i ) = W(s i , X i )X i + u i P. Finally, the Central Hospital calculates the multi-trapdoor hash value T = Σ_{i=1}^{n} TH X i (s i , u i ).
Each DAU i also randomly selects a temporary trapdoor key β i ∈ Z * q and computes the temporary hash key Y i = β i P. Trapdoor collision is one of the properties of trapdoor hash functions [37]. Given the hash keys (X i , Y i ), the trapdoor keys (α i , β i ), the message/auxiliary parameter pair (s i , u i ), and a new message s′ i , the collision parameter is given by
u′ i = u i + α i W(s i , X i ) − β i W(s′ i , Y i ) mod q.
That is,
TH X i (s i , u i ) = W(s i , X i )X i + u i P = (α i W(s i , X i ) + u i )P = (β i W(s′ i , Y i ) + u′ i )P = W(s′ i , Y i )Y i + u′ i P = TH Y i (s′ i , u′ i ).
From the above proof process, we can conclude that the owner of the trapdoor keys can compute a trapdoor collision for any given input. The proposed multi-trapdoor hash function aggregates multiple trapdoor collisions into one trapdoor collision, which improves calculation efficiency. On the other hand, anyone who does not know the trapdoor keys cannot calculate a trapdoor collision. Therefore, the proposed multi-trapdoor hash function is secure and efficient for constructing the certificateless aggregate signature scheme.
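As a concrete illustration of the collision property, the following Python sketch replaces the elliptic-curve group with an insecure toy additive group Z q (generator P = 1) so the algebra can be checked directly; the function names (smul, th, W) and the SHA-256 instantiation of W are our own assumptions for illustration, not part of the paper.

```python
import hashlib, secrets

q = 2**61 - 1        # toy prime group order; the paper uses a ~160-bit EC group
P = 1                # generator of the toy additive group Z_q (EC base point stand-in)

def smul(k, pt):     # scalar multiplication k*pt (stand-in for EC scalar mult)
    return (k * pt) % q

def W(s, key):       # hash of (message, hash key) into Z_q (stand-in for the paper's W)
    digest = hashlib.sha256(f"{s}|{key}".encode()).digest()
    return int.from_bytes(digest, "big") % q

def th(s, key, u):   # trapdoor hash TH_key(s, u) = W(s, key)*key + u*P
    return (smul(W(s, key), key) + smul(u, P)) % q

# long-term pair (alpha, X) and temporary pair (beta, Y) of one DAU_i
alpha = secrets.randbelow(q - 2) + 1; X = smul(alpha, P)
beta  = secrets.randbelow(q - 2) + 1; Y = smul(beta, P)

s, u  = "attribute message s_i", secrets.randbelow(q)
s_new = "new message s'_i (e.g., an MRI digest)"

# collision parameter u' = u + alpha*W(s, X) - beta*W(s', Y) mod q
u_new = (u + alpha * W(s, X) - beta * W(s_new, Y)) % q
assert th(s, X, u) == th(s_new, Y, u_new)   # TH_X(s, u) = TH_Y(s', u')
print("trapdoor collision holds")
```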
The Proposed Certificateless Aggregate Signature Scheme
The proposed certificateless aggregate signature scheme based on the multi-trapdoor hash function is presented in this section. We introduce an attribute-based signature [41] and state information so that the requirements for medical data in blockchain-based MCPS can be better satisfied.
Setup
In this subsection, KGC generates the system parameters and sends them to the data acquisition units DAU i , the patients P j , and the Central Hospital. Given the security parameter k, KGC selects large prime numbers p, q and an elliptic curve over the finite field F p : y 2 = x 3 + ax + b mod p, a, b ∈ F p . Given that G is a cyclic subgroup of E(F p ) and P is a q-order generator of G, KGC takes seven secure hash functions W 1 , W 2 , W 3 , W 4 , W 5 , W 6 and H, each mapping its inputs into Z * q . KGC randomly selects λ ∈ Z * q as the system master key. Then, the public key is K pub = λP. Finally, KGC outputs the system parameters pars = (G, P, q, K pub , W 1 , W 2 , W 3 , W 4 , W 5 , W 6 , H).
Pseudonym-Gen
In this phase, the Registry Center calculates the pseudonyms for DAU i and P j according to their real identities. The pseudonym system [42] is used to provide conditional privacy protection for doctors, nurses, patients, medical devices, etc. When relevant organizations need to know their real identity, the Registry Center can index their real identity. The Registry Center performs the following procedure to generate pseudonyms for DAU i and P j .
• The Registry Center accepts DAU i 's real identity RID DAU i and calculates its pseudo identity ID DAU i = W 1 (RID DAU i ). After selecting a random a i ∈ Z * q , DAU i calculates F i = a i P and PID DAU i ,1 = λW 2 (F i ), and sends PID DAU i ,1 to the Registry Center through the secure channel. The Registry Center calculates PID DAU i ,2 = W 3 (ID DAU i , PID DAU i ,1 ) and outputs the pseudonym PID DAU i = (PID DAU i ,1 , PID DAU i ,2 ).
• The Registry Center accepts P j 's real identity RID P j and calculates its pseudo identity ID P j = W 1 (RID P j ). After selecting a random b j ∈ Z * q , P j calculates E j = b j P and PID P j ,1 = λW 2 (E j ), and sends PID P j ,1 to the Registry Center through the secure channel. The Registry Center calculates PID P j ,2 = W 3 (ID P j , PID P j ,1 ) and outputs the pseudonym PID P j = (PID P j ,1 , PID P j ,2 ).
At the same time, the Registry Center builds an index table between the real identities of DAU i (P j ) and their pseudonyms, such as (RID DAU i , PID DAU i ), (RID P j , PID P j ), so that when relevant organizations need to know the real identities of DAU i or P j , the Registry Center could return their real identities.
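A minimal sketch of Pseudonym-Gen in the same toy group follows. For simplicity, one routine here holds the master key λ and computes PID 1 directly, which collapses the split of roles between the entity and the Registry Center described above; all identity strings and function names are invented for illustration.

```python
import hashlib, secrets

q = 2**61 - 1; P = 1                      # same toy group as before
def smul(k, pt): return (k * pt) % q
def Hq(tag, *xs):                         # stand-in for W1/W2/W3, hashing into Z_q
    data = (tag + "|" + "|".join(map(str, xs))).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

lam = secrets.randbelow(q - 2) + 1        # system master key lambda

def gen_pseudonym(rid):
    ident = Hq("W1", rid)                 # ID = W1(RID)
    a = secrets.randbelow(q - 2) + 1      # entity's random secret a_i / b_j
    F = smul(a, P)                        # F = a*P
    pid1 = (lam * Hq("W2", F)) % q        # PID_1 = lambda * W2(F)
    pid2 = Hq("W3", ident, pid1)          # PID_2 = W3(ID, PID_1)
    return (pid1, pid2)

index_table = {}                          # Registry Center's RID -> PID index
for rid in ["RID:DAU-0001 (MRI scanner)", "RID:P-0077 (patient)"]:
    index_table[rid] = gen_pseudonym(rid)
print(index_table)
```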
DAU i Key-Gen
In this stage, DAU i generates its secret value/public parameter pair and sends the public parameter to KGC. With the received public parameter, KGC computes the partial private key/partial public key pair. These two pairs constitute the public and private keys of DAU i . Because the key material of DAU i is produced jointly by two entities (KGC and DAU i ), the security of the keys is effectively protected.
DAU i randomly selects the secret value α i ∈ Z * q , calculates X i = α i P as the public parameter. Then, DAU i sends the public parameter X i to the KGC and the Central Hospital.
Taking DAU i 's pseudonym PID DAU i and public parameter X i as inputs, KGC randomly selects γ i ∈ Z * q as the secret value, calculates V i = γ i P and DAU i 's partial private key θ i = γ i + λW 4 (PID DAU i , X i , V i ), then sends V i and θ i to DAU i through the secure channel. DAU i verifies the correctness of the partial private key θ i by checking whether the equation θ i P = V i + W 4 (PID DAU i , X i , V i )K pub holds. DAU i 's private key is (α i , θ i ) and its public key is (X i , V i ). The partial private key and pseudonym effectively protect DAU i 's identity information and play a role in privacy protection.
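The partial-key extraction and its correctness check can be sketched as follows, again in the toy group; the check θ i P = V i + W 4 (PID DAU i , X i , V i )K pub is the one derived above, and the pseudonym string is a placeholder.

```python
import hashlib, secrets

q = 2**61 - 1; P = 1                               # toy group as before
def smul(k, pt): return (k * pt) % q
def W4(pid, X, V):                                 # SHA-256 stand-in for W4
    data = f"W4|{pid}|{X}|{V}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

lam = secrets.randbelow(q - 2) + 1                 # KGC master key
K_pub = smul(lam, P)                               # system public key

# DAU_i side: secret value alpha_i and public parameter X_i
alpha = secrets.randbelow(q - 2) + 1
X = smul(alpha, P)

# KGC side: partial private key for pseudonym PID_DAU_i
PID = "PID:DAU-0001"
gamma = secrets.randbelow(q - 2) + 1
V = smul(gamma, P)
theta = (gamma + lam * W4(PID, X, V)) % q          # theta_i = gamma_i + lam*W4(...)

# DAU_i checks the partial key: theta_i*P == V_i + W4(PID, X_i, V_i)*K_pub
assert smul(theta, P) == (V + smul(W4(PID, X, V), K_pub)) % q
print("partial private key verified; private key (alpha, theta), public key (X, V)")
```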
Hash-Gen
In this section, each DAU i generates its own trapdoor hash value and sends it to the Central Hospital. Then, the Central Hospital combines all verified trapdoor hash values into a single value. Based on the trapdoor hash value, the trapdoor collision can be calculated, which can be used to achieve the individual signature.
Firstly, taking the system parameters pars, the original message s i and DAU i 's hash key (public parameter) X i as inputs, DAU i randomly selects an auxiliary parameter u i and calculates the trapdoor hash value TH X i (s i , u i ) = W 5 (s i , X i )X i + u i P, where the original message s i depends on the attribute values of DAU i . That is, if DAU i is a doctor or a nurse, s i is composed of the ID of the hospital where he or she works, the working department, position title, etc.; if DAU i is a piece of medical equipment or an app, s i is composed of DAU i 's pseudonym PID DAU i , its manufacturer, category, affiliated institution (hospital, community, scientific research institution, etc.), and so on. Determining the signer's identity through a series of signer-related attributes effectively protects the signer's private details, such as phone number, home address, and email.
When a patient P j starts data interaction with a DAU i , the trapdoor hash value T i of DAU i is calculated in advance and sent to the Central Hospital. When the treatment of P j is completed (assuming that P j generates n MRIs with n DAU i s), the Central Hospital aggregates the trapdoor hash values of all the DAU i s responsible for P j 's MRIs into T = Σ_{i=1}^{n} T i and sends T to each DAU i that interacted with P j .
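A small sketch of this Hash-Gen aggregation under the same toy-group assumptions follows; the attribute strings and the number of DAUs are invented for illustration.

```python
import hashlib, secrets

q = 2**61 - 1; P = 1                       # toy group as before
def smul(k, pt): return (k * pt) % q
def W5(s, key):                            # SHA-256 stand-in for W5
    return int.from_bytes(hashlib.sha256(f"W5|{s}|{key}".encode()).digest(), "big") % q

def th(s, key, u):                         # TH_key(s, u) = W5(s, key)*key + u*P
    return (smul(W5(s, key), key) + smul(u, P)) % q

daus = []                                  # n DAUs serving one patient P_j
for attrs in ["hospital=H1;dept=radiology;title=doctor",
              "pid=PID:DAU-2;maker=Acme;category=MRI;org=H1",
              "hospital=H1;dept=lab;title=nurse"]:
    alpha = secrets.randbelow(q - 2) + 1
    daus.append({"s": attrs,               # s_i built from DAU_i's attributes
                 "alpha": alpha, "X": smul(alpha, P),
                 "u": secrets.randbelow(q)})

# Central Hospital aggregates T = sum of TH_{X_i}(s_i, u_i) over all DAU_i
T = sum(th(d["s"], d["X"], d["u"]) for d in daus) % q
print("multi-trapdoor hash value T =", T)
```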
Individual-Sign
In this subsection, each DAU i that provides medical services to the patient P j completes an individual signature on the medical data for which it is responsible. We define the state information of DAU i as Ω i , that is, the pseudonym of P j associated with this DAU i . Only the individual signatures with the same Ω i (that is, for the same patient) can be aggregated.
DAU i selects the latest timestamp t i and calculates h i = W 6 (t i , V i , Ω i ) and y i = h i P. The latest timestamp ensures the timeliness of data collection and resists replay attacks. DAU i randomly selects a temporary trapdoor key β i ∈ Z * q and calculates the temporary hash key Y i = β i P and the trapdoor hash value TH Y i (s′ i , u′ i ) = W 5 (s′ i , Y i )Y i + u′ i P, where s′ i represents the digest of P j 's MRI for which DAU i is responsible during this treatment. According to the trapdoor collision (that is, TH X i (s i , u i ) = TH Y i (s′ i , u′ i )), DAU i computes the new auxiliary parameter u′ i = u i + α i W 5 (s i , X i ) − β i W 5 (s′ i , Y i ) mod q, then calculates H* = H(PID DAU i , T, u′ i ), W* 4 = W 4 (PID DAU i , X i , V i ) and d i = h i − (α i + θ i )H* mod q. The individual signature is σ i = (y i , d i ), which DAU i sends to the Central Hospital together with the new auxiliary parameter u′ i .
Individual-Verify
In this stage, the Central Hospital verifies DAU i 's individual signature. When the Central Hospital receives DAU i 's individual signature σ i = (y i , d i ) and the new auxiliary parameter u′ i , it computes W* 4 = W 4 (PID DAU i , X i , V i ) and H* = H(PID DAU i , T, u′ i ), and checks whether d i P + (X i + V i + K pub W* 4 )H* = y i holds or not. If it holds, the Central Hospital accepts σ i and then stores the encrypted original medical data in the Medical Cloud.
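Putting the pieces together, the following sketch runs Individual-Sign and the Individual-Verify check end to end in the toy group. A single DAU is used, so the aggregated value T collapses to that DAU's own trapdoor hash value; the MRI digest string and all identifiers are placeholders.

```python
import hashlib, secrets, time

q = 2**61 - 1; P = 1                                   # toy group as before
def smul(k, pt): return (k * pt) % q
def Hq(tag, *xs):                                      # SHA-256 stand-in for W4/W5/W6/H
    data = (tag + "|" + "|".join(map(str, xs))).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

# --- key material (as in the Key-Gen sketch) ---
lam = secrets.randbelow(q - 2) + 1; K_pub = smul(lam, P)
PID = "PID:DAU-0001"
alpha = secrets.randbelow(q - 2) + 1; X = smul(alpha, P)
gamma = secrets.randbelow(q - 2) + 1; V = smul(gamma, P)
theta = (gamma + lam * Hq("W4", PID, X, V)) % q

# --- Hash-Gen: long-term trapdoor hash over the attribute message ---
s_attr, u = "hospital=H1;dept=radiology", secrets.randbelow(q)
T = (smul(Hq("W5", s_attr, X), X) + smul(u, P)) % q    # single-DAU T for brevity

def individual_sign(s_mri, omega):
    t = int(time.time())                               # latest timestamp t_i
    h = Hq("W6", t, V, omega)
    y = smul(h, P)                                     # y_i = W6(t_i, V_i, Omega_i)*P
    beta = secrets.randbelow(q - 2) + 1                # temporary trapdoor key
    Y = smul(beta, P)
    # trapdoor collision: TH_X(s_attr, u) = TH_Y(s_mri, u')
    u_new = (u + alpha * Hq("W5", s_attr, X) - beta * Hq("W5", s_mri, Y)) % q
    Hstar = Hq("H", PID, T, u_new)
    d = (h - (alpha + theta) * Hstar) % q
    return (y, d), u_new

def individual_verify(sig, u_new):
    y, d = sig
    Hstar = Hq("H", PID, T, u_new)
    W4star = Hq("W4", PID, X, V)
    # check d_i*P + (X_i + V_i + K_pub*W4*)H* == y_i
    return (smul(d, P) + Hstar * ((X + V + smul(W4star, K_pub)) % q)) % q == y

sig, u_new = individual_sign("sha256-digest-of-MRI", omega="PID:P-0077")
assert individual_verify(sig, u_new)
print("individual signature verified")
```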
Aggregate-Sign
In this phase, the Central Hospital aggregates the accepted individual signatures for medical data from the same patient. The Central Hospital checks the status information Ω i of each DAU i whose individual signature σ i has been accepted. For individual signatures with the same Ω i , the Central Hospital calculates ω = Σ_{i=1}^{n} y i and D = Σ_{i=1}^{n} d i , and outputs the aggregate signature σ = (ω, D). Then, the Central Hospital forms a transaction from P j 's MRI digests, the aggregate signature, the access control policy, and the specific location of the original medical data in the Medical Cloud. Finally, a transaction request is sent to the Medical Chain.
Aggregate-Verify
After a miner receives the message, the aggregate signature is verified through the consensus mechanism. If the equation DP + Σ_{i=1}^{n} (X i + V i + K pub W* 4,i )H* i = ω holds, the information is broadcast to the other nodes in the network. The other nodes start consensus verification of the transaction and broadcast it on the network. After the verification succeeds, the transaction is added to a block.
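The aggregation steps can be exercised the same way. The sketch below signs for n = 3 hypothetical DAUs serving one patient and checks the aggregate verification equation DP + Σ(X i + V i + K pub W* 4,i )H* i = ω in the toy group; all names and strings are illustrative assumptions.

```python
import hashlib, secrets, time

q = 2**61 - 1; P = 1                              # toy group as in earlier sketches
def smul(k, pt): return (k * pt) % q
def Hq(tag, *xs):
    data = (tag + "|" + "|".join(map(str, xs))).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

lam = secrets.randbelow(q - 2) + 1; K_pub = smul(lam, P)
omega_state = "PID:P-0077"                        # Omega_i: same patient for all

# Key-Gen and Hash-Gen for n = 3 DAUs
daus = []
for i in range(3):
    alpha = secrets.randbelow(q - 2) + 1; gamma = secrets.randbelow(q - 2) + 1
    pid, X, V = f"PID:DAU-{i}", smul(alpha, P), smul(gamma, P)
    theta = (gamma + lam * Hq("W4", pid, X, V)) % q
    s_attr, u = f"attrs-of-DAU-{i}", secrets.randbelow(q)
    Ti = (smul(Hq("W5", s_attr, X), X) + smul(u, P)) % q
    daus.append(dict(pid=pid, X=X, V=V, alpha=alpha, theta=theta,
                     s=s_attr, u=u, Ti=Ti))
T = sum(d["Ti"] for d in daus) % q                # Central Hospital aggregates T

# Individual-Sign for each DAU, then Aggregate-Sign
sigs = []
for i, d in enumerate(daus):
    h = Hq("W6", int(time.time()), d["V"], omega_state)
    beta = secrets.randbelow(q - 2) + 1; Y = smul(beta, P)
    u_new = (d["u"] + d["alpha"] * Hq("W5", d["s"], d["X"])
             - beta * Hq("W5", f"mri-digest-{i}", Y)) % q
    Hstar = Hq("H", d["pid"], T, u_new)
    sigs.append((smul(h, P), (h - (d["alpha"] + d["theta"]) * Hstar) % q, u_new))

omega_agg = sum(y for y, _, _ in sigs) % q        # omega = sum(y_i)
D = sum(di for _, di, _ in sigs) % q              # D = sum(d_i)

# Aggregate-Verify: D*P + sum_i (X_i + V_i + K_pub*W4*_i)*H*_i == omega
acc = smul(D, P)
for d, (_, _, u_new) in zip(daus, sigs):
    Hstar = Hq("H", d["pid"], T, u_new)
    W4star = Hq("W4", d["pid"], d["X"], d["V"])
    acc = (acc + Hstar * ((d["X"] + d["V"] + smul(W4star, K_pub)) % q)) % q
assert acc == omega_agg
print("aggregate signature over", len(sigs), "individual signatures verified")
```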
Correctness Proof
The correctness of the aggregate verification is shown as follows:
DP + Σ_{i=1}^{n} (X i + V i + K pub W* 4,i )H* i = Σ_{i=1}^{n} d i P + Σ_{i=1}^{n} (α i + θ i )H* i P = Σ_{i=1}^{n} (h i − (α i + θ i )H* i )P + Σ_{i=1}^{n} (α i + θ i )H* i P = Σ_{i=1}^{n} h i P = Σ_{i=1}^{n} y i = ω,
where we use X i + V i + K pub W* 4,i = (α i + γ i + λW* 4,i )P = (α i + θ i )P.
Theorem 1.
In the random oracle model, the proposed certificateless aggregate signature scheme is existentially unforgeable against adaptive chosen-message attacks under the assumption that the ECDLP problem is hard. This theorem is obtained by combining Lemmas 1 and 2.
Lemma 1.
Given an A I type adversary C 1 that makes at most q S Sign queries, q K Partial-Key-Gen queries and q SK Secret-Key-Gen queries within a period t in the random oracle model and wins the game with a non-negligible probability ε (that is, successfully forges a signature of the proposed scheme), an algorithm T 1 can be performed in polynomial time that solves an instance of ECDLP with non-negligible probability (supposing the number of aggregate signatures is n) ε′ ≥ ε/(e(q S + n)).
Proof. Suppose T 1 is a solver of ECDLP and (P, xP) ∈ G is an instance of ECDLP; the goal of the algorithm T 1 is to compute x. Suppose T 1 makes q S Sign queries on q S identities and generates n aggregate signatures at the challenge stage. T 1 selects PID DAU k as the target victim, and the probability of this selection is µ ∈ [1/(q S + n), 1/(q S + 1)]. We set up a game between the adversary C 1 and a challenger Z 1 ; the detailed interaction process is as follows.
Setup: Given K pub = xP, the challenger Z 1 inputs the security parameter k, generates the system parameters pars = (G, P, q, K pub , W 1 , W 2 , W 3 , W 4 , W 5 , W 6 , H), and sends pars to the adversary C 1 . Z 1 needs to maintain nine lists (L W 4 , L W 5 , L W 6 , L H , L P , L PK , L SK , L T , L S ), whose initial values are empty.
Query: C 1 adaptively performs the following oracle queries.
• W 4 hash query: When C 1 makes a W 4 hash query with parameter (PID DAU i , X i , V i ), Z 1 checks whether an existing (PID DAU i , X i , V i , δ W 4 ) ∈ L W 4 or not; if so, Z 1 sends δ W 4 to C 1 . Otherwise, Z 1 selects a random δ W 4 ∈ Z * q . If the list L W 4 does not include the tuple (*, *, *, δ W 4 ), Z 1 sends δ W 4 to C 1 and saves (PID DAU i , X i , V i , δ W 4 ) into the hash list L W 4 .
• W 5 hash query: When C 1 makes a W 5 hash query with parameter (s i , X i ), Z 1 checks whether an existing (s i , X i , δ W 5 ) ∈ L W 5 or not; if so, Z 1 sends δ W 5 to C 1 . Otherwise, Z 1 selects a random δ W 5 ∈ Z * q . If the list L W 5 does not include the tuple (*, *, δ W 5 ), Z 1 sends δ W 5 to C 1 and saves (s i , X i , δ W 5 ) into the hash list L W 5 .
• W 6 hash query: When C 1 makes a W 6 hash query with parameter (t i , V i , Ω i ), Z 1 checks whether an existing (t i , V i , Ω i , δ W 6 ) ∈ L W 6 or not; if so, Z 1 sends δ W 6 to C 1 . Otherwise, Z 1 selects a random δ W 6 ∈ Z * q . If the list L W 6 does not include the tuple (*, *, *, δ W 6 ), Z 1 sends δ W 6 to C 1 and saves (t i , V i , Ω i , δ W 6 ) into the hash list L W 6 .
• H hash query: When C 1 makes an H hash query with parameter (PID DAU i , T, u i ), Z 1 checks whether an existing (PID DAU i , T, u i , δ H ) ∈ L H or not; if so, Z 1 sends δ H to C 1 . Otherwise, Z 1 selects a random δ H ∈ Z * q . If the list L H does not include the tuple (*, *, *, δ H ), Z 1 sends δ H to C 1 and saves (PID DAU i , T, u i , δ H ) into the hash list L H .
• Partial-Key-Gen query: When C 1 makes a Partial-Key-Gen query with parameter (PID DAU i , X i ), Z 1 checks whether an existing (PID DAU i , θ i , V i ) ∈ L P or not; if so, Z 1 sends (θ i , V i ) to C 1 .
– If L P does not include the tuple (PID DAU i , θ i , V i ) and PID DAU i ≠ PID DAU k , Z 1 selects random θ i , δ W 4 ∈ Z * q , computes V i = θ i P − K pub δ W 4 , sends (θ i , V i ) to C 1 and saves (PID DAU i , θ i , V i ) into the list L P . If the list L W 4 does not include the corresponding tuple, Z 1 adds the tuple (PID DAU i , X i , V i , δ W 4 ) into L W 4 .
– If L P does not include the tuple (PID DAU i , θ i , V i ) and PID DAU i = PID DAU k , Z 1 randomly selects θ k , δ W 4 ∈ Z * q , lets V k = γ r P (where γ r ∈ Z * q is a random number known to Z 1 ), then saves (PID DAU k , θ k , V k ) into the list L P and sends (θ k , V k ) to C 1 . If the list L W 4 does not include the corresponding tuple, Z 1 adds the tuple (PID DAU k , X k , V k , δ W 4 ) into L W 4 .
• Secret-Key-Gen query: Suppose that the query is on a pseudo identity PID DAU i . If the list L SK includes (PID DAU i , α i , θ i ), Z 1 sends (α i , θ i ) to C 1 . Otherwise, Z 1 selects a random α i ∈ Z * q and computes X i = α i P; then Z 1 makes a Partial-Key-Gen query on (PID DAU i , X i ), adds (PID DAU i , α i , θ i ) into the list L SK , and returns (α i , θ i ) to C 1 .
• Public-Key-Gen query: Suppose that the query is on a pseudo identity PID DAU i . If the list L PK includes (PID DAU i , X i , V i ), Z 1 sends (X i , V i ) to C 1 . Otherwise, Z 1 selects a random α i ∈ Z * q and computes X i = α i P; then Z 1 makes a Partial-Key-Gen query on (PID DAU i , X i ), adds (PID DAU i , X i , V i ) into the list L PK , and sends (X i , V i ) to C 1 .
• Public-Key-Replacement query: C 1 can select a new public key PK* DAU i = (X* i , V* i ) to replace the original public key PK DAU i of any legitimate DAU i .
• Hash-Gen query: When C 1 makes a Hash-Gen query with parameter (s i , u i ), Z 1 checks whether an existing (s i , u i , T i ) ∈ L T or not; if so, Z 1 returns T i to C 1 . Otherwise, Z 1 computes T i = TH X i (s i , u i ) = W 5 (s i , X i )X i + u i P, sends T i to C 1 and saves (s i , u i , T i ) into the hash list L T .
• Sign query: When C 1 makes a Sign query with parameter (α i , Ω i , s i , s′ i ), Z 1 checks whether PID DAU i ≠ PID DAU k or not. If so, Z 1 randomly selects t i ∈ Z * q and β i ∈ Z * q and computes h i = W 6 (t i , V i , Ω i ), y i = h i P, u′ i = u i + α i W 5 (s i , X i ) − β i W 5 (s′ i , Y i ) mod q, H* = H(PID DAU i , T, u′ i ) and d i = h i − (α i + θ i )H* mod q. Then, Z 1 generates the individual signature (y i , d i ) and sends it to C 1 . Otherwise, Z 1 outputs failure and halts.
• Aggregate-Sign query: When all of the PID DAU i (1 ≤ i ≤ n) satisfy PID DAU i ≠ PID DAU k , Z 1 randomly selects t i ∈ Z * q and β i ∈ Z * q for every DAU i (1 ≤ i ≤ n) and computes each individual signature (y i , d i ) as in the Sign query. Then Z 1 calculates ω = Σ_{i=1}^{n} y i and D = Σ_{i=1}^{n} d i , generates the aggregate signature (ω, D) and sends it to C 1 . Otherwise, if some PID DAU i = PID DAU k , Z 1 outputs failure and halts.
• Individual-Verify query: When C 1 makes an Individual-Verify query, Z 1 checks whether the corresponding tuple of PID DAU i is included in the list L PK .
– If the corresponding tuple of PID DAU i is included in the list L PK and PID DAU i ≠ PID DAU k , Z 1 computes W* 4 = W 4 (PID DAU i , X i , V i ) and H* = H(PID DAU i , T, u′ i ), and verifies whether the equation d i P + (X i + V i + K pub W* 4 )H* = y i holds or not; if so, Z 1 returns 1 to C 1 , otherwise it returns 0 to C 1 .
– If the corresponding tuple of PID DAU i is included in the list L PK and PID DAU i = PID DAU k , Z 1 returns 1 to C 1 when the list L H includes the tuple (PID DAU i , T, u i , δ H ); otherwise, Z 1 returns 0 to C 1 .
– If the corresponding tuple of PID DAU i is not included in the list L PK , Z 1 returns 1 to C 1 when the list L H includes the tuple (PID DAU i , T, u i , δ H ); otherwise, Z 1 returns 0 to C 1 .
Forge: After the above polynomially bounded queries, C 1 outputs an aggregate signature σ* = (ω*, D*) of PID DAU i (1 ≤ i ≤ n), in which at least one PID DAU i (i ∈ [1, n]) has made neither a Partial-Key-Gen query nor a Secret-Key-Gen query, and at least one message s i (i ∈ [1, n]) has not been submitted to a Sign query.
If all the PID DAU i (1 ≤ i ≤ n) satisfy PID DAU i ≠ PID DAU k , then Z 1 outputs failure and halts. Otherwise, if one PID DAU i (1 ≤ i ≤ n) satisfies PID DAU i = PID DAU k , then Z 1 queries the corresponding tuples of PID DAU i (1 ≤ i ≤ n) in the lists L PK , L SK and L H and checks whether the forged aggregate signature satisfies the verification equation D*P + Σ_{i=1}^{n} (X i + V i + K pub δ W 4 ,i )H* i = ω*. If it does, Z 1 can solve for x, since with V k = γ r P every quantity in this equation other than x is known to Z 1 ; otherwise, Z 1 cannot solve the discrete logarithm problem. If C 1 has queried PID DAU k with Partial-Key-Gen and Secret-Key-Gen, Z 1 terminates the simulation. Suppose that:
• Event E 1 represents that at least one PID DAU k (1 ≤ k ≤ n) has made neither a Partial-Key-Gen query nor a Secret-Key-Gen query.
• Event E 2 represents that Z 1 does not terminate at the Sign-query stage.
• Event E 3 represents that Z 1 does not terminate at the challenge stage.
The probability that the algorithm T 1 solves the ECDLP is Pr[E 1 ∧ E 2 ∧ E 3 ] = Pr[E 1 ] Pr[E 2 | E 1 ] Pr[E 3 | E 1 ∧ E 2 ]. The probability that Z 1 does not terminate during the whole simulation is at least µ(1 − µ)^{q S }. Since µ ∈ [1/(q S + n), 1/(q S + 1)], when q S is large enough, (1 − µ)^{q S } tends to e −1 , so the probability that Z 1 does not terminate during the simulation is at least 1/(e(q S + n)). In summary, if Z 1 is not terminated during the simulation and C 1 breaks the unforgeability of the proposed scheme with a non-negligible probability ε, T 1 can successfully solve the ECDLP with a non-negligible probability ε′ ≥ ε/(e(q S + n)).
Lemma 2.
Given an A II type adversary C 2 that makes at most q S Sign queries, q K Partial-Key-Gen queries and q SK Secret-Key-Gen queries within a period t in the random oracle model and wins the game with a non-negligible probability ε (that is, successfully forges a signature of the proposed scheme), an algorithm T 2 can be performed in polynomial time that solves an instance of ECDLP with non-negligible probability (supposing the number of aggregate signatures is n) ε′ ≥ ε/(e(q S + n)).
Proof. Suppose T 2 is a solver of ECDLP and (P, xP) ∈ G is an instance of ECDLP; the goal of the algorithm T 2 is to compute x. T 2 selects PID DAU k as the target victim, and the probability of this selection is µ ∈ [1/(q S + n), 1/(q S + 1)]. We set up a game between the adversary C 2 and a challenger Z 2 ; the detailed interaction process is as follows.
Setup: The challenger Z 2 inputs the security parameter k, generates the system parameters pars = (G, P, q, K pub , W 1 , W 2 , W 3 , W 4 , W 5 , W 6 , H), and sends pars to the adversary C 2 . Z 2 needs to maintain nine lists (L W 4 , L W 5 , L W 6 , L H , L P , L PK , L SK , L T , L S ), whose initial values are empty.
Query: Adversary C 2 makes the same queries as those of the W 4 hash, W 5 hash, W 6 hash, H hash, Secret-Key-Gen, Public-Key-Gen, Hash-Gen, Sign and Aggregate-Sign queries in Lemma 1.
• Partial-Key-Gen query: When C 2 makes a Partial-Key-Gen query with parameter (PID DAU i , X i ), Z 2 checks whether an existing (PID DAU i , θ i , V i ) ∈ L P or not; if so, Z 2 sends (θ i , V i ) to C 2 ; otherwise, Z 2 generates the partial key pair as in Lemma 1 and saves (PID DAU i , θ i , V i ) into the list L P .
• Individual-Verify query: When C 2 makes an Individual-Verify query with parameter (PID DAU i , s i ), Z 2 checks whether the corresponding tuple of PID DAU i is included in the list L PK .
– If the corresponding tuple of PID DAU i is included in the list L PK and PID DAU i ≠ PID DAU k , Z 2 computes W* 4 = W 4 (PID DAU i , X i , V i ) and H* = H(PID DAU i , T, u′ i ), and verifies whether the equation d i P + (X i + V i + K pub W* 4 )H* = y i holds or not; if so, Z 2 returns 1 to C 2 , otherwise it returns 0 to C 2 .
– If the corresponding tuple of PID DAU i is included in the list L PK and PID DAU i = PID DAU k , Z 2 returns 1 to C 2 when the list L H includes the tuple (PID DAU i , T, u i , δ H ); otherwise, Z 2 returns 0 to C 2 .
Forge: After the above polynomially bounded queries, C 2 outputs an aggregate signature σ* = (ω*, D*) of PID DAU i (1 ≤ i ≤ n), in which at least one PID DAU i (i ∈ [1, n]) has made neither a Partial-Key-Gen query nor a Secret-Key-Gen query, and at least one message s i (i ∈ [1, n]) has not been submitted to a Sign query.
If all the PID DAU i (1 ≤ i ≤ n) satisfy PID DAU i ≠ PID DAU k , then Z 2 outputs failure and halts. Otherwise, if one PID DAU i (1 ≤ i ≤ n) satisfies PID DAU i = PID DAU k , then Z 2 queries the corresponding tuples of PID DAU i (1 ≤ i ≤ n) in the lists L PK , L SK , L H and L W 4 and checks whether the forged aggregate signature satisfies the verification equation D*P + Σ_{i=1}^{n} (X i + V i + K pub δ W 4 ,i )H* i = ω*. If it does, Z 2 can solve for x from the equation, since every other quantity is known to Z 2 ; otherwise, Z 2 cannot solve the discrete logarithm problem. It can be seen from the proof of Lemma 1 that the probability that Z 2 does not terminate during the simulation is at least 1/(e(q S + n)). Therefore, if Z 2 is not terminated during the simulation and C 2 breaks the unforgeability of the proposed scheme with a non-negligible probability ε, T 2 can successfully solve the ECDLP with a non-negligible probability ε′ ≥ ε/(e(q S + n)).
Security Analysis
• Message authentication: As Theorem 1 states, no polynomial adversary can forge a valid message under the assumption that the ECDLP problem is hard. Therefore, the Central Hospital can verify the validity and integrity of the message (PID DAU i , X i , V i , t i , u′ i , σ i ). Thus, the proposed scheme for MCPS provides message authentication.
• Identity privacy protection: The pseudonyms proposed in this paper are divided into two types: the pseudonyms of DAUs (PID DAU i , 1 ≤ i ≤ n) and the pseudonyms of patients (PID P j , 1 ≤ j ≤ n). PID DAU i and PID P j are generated by combining the randomly chosen secret value a i or b j with the system master key λ. No adversary can compute the real identity from the pseudonym without knowing the secret a i or b j and λ. Thus, the pseudonyms proposed in this paper protect the identity privacy of DAUs and patients.
• Resistance to replay attack: Whenever DAU i makes an individual signature, it chooses the latest timestamp t i . The Central Hospital checks the freshness of the timestamp t i in order to detect replay attacks.
• Resistance to modification attack: According to Theorem 1, the Central Hospital can protect the integrity of the message (PID DAU i , X i , V i , t i , u′ i , σ i ). Therefore, any modification of the message will be detected by checking whether the equation d i P + (X i + V i + K pub W* 4 )H* = y i holds or not.
• Resistance to spam attack [17]: Because of the natural compression property of aggregate signatures, the proposed signature scheme can combine n individual signatures into one short signature. The length of the aggregate signature does not increase with the number of signers, so in the blockchain-based MCPS more transactions can be added to a block. An attacker therefore has to send more transactions to congest the network and will spend more in transaction fees, which increases the cost of spam attacks.
Efficiency Analysis
Certificateless aggregate signatures can be classified into pairing-based and ECC-based schemes. In this paper, we adopt the same efficiency evaluation method as references [11,29], in which the simulations are conducted on an Intel i7 3.4 GHz, 4 GB machine running Windows 7. Pairing-based aggregate signature schemes can be simulated with the bilinear pairing e : G 1 × G 1 → G 2 , where G 1 is an additive group of order q 1 generated on the type A elliptic curve E 1 : y 2 = x 3 + x mod p 1 , and p 1 and q 1 are 512-bit and 160-bit prime numbers, respectively [11]. For ECC-based aggregate signature schemes, the simulation can be conducted over the non-singular elliptic curve E : y 2 = x 3 + ax + b mod p 2 , where G is an additive group of order q 2 generated on E, and p 2 , q 2 are two 160-bit prime numbers. The bilinear pairing and the elliptic curve constructed in the experiments are at the same 80-bit security level. Tables 3 and 4 present the running times of the cryptographic operations and the group parameters. Table 3. Different encryption operation running time [11,29,37].
Encryption operation — Description — Time (ms)
t p — bilinear pairing operation — 4.2110
t mp — scalar multiplication in the bilinear pairing group — 1.7090
t ap — point addition in the bilinear pairing group — 0.0071
t hp — hash-to-point operation in the bilinear pairing group — 4.4060
t mecc — scalar multiplication on the elliptic curve — 0.4420
t aecc — point addition on the elliptic curve — 0.0018
t h — general hash operation — 0.0001
Table 4. Group parameters [11,29,37].
Symbol — Description — Length (bytes)
|G 1 | — the size of elements in group G 1 — 128
|G| — the size of elements in group G — 40
|q| — the size of elements in Z * q — 20
The computation cost and communication cost are two important factors in evaluating certificateless aggregate signature schemes. In this section, the efficiency analysis is divided into two parts. First, we compare the proposed scheme with related certificateless aggregate signature schemes. Second, we compare the proposed scheme with related aggregate signature schemes based on blockchains.
1. The efficiency analysis of certificateless aggregate signature schemes
Table 5 compares the computation cost of the proposed scheme and related certificateless aggregate signature schemes [9,29].
– In the individual sign algorithm, DAU i needs three scalar multiplications on the elliptic curve and two general hash operations to generate an individual signature. The computation cost of our scheme for individual signing is smaller than that of the related certificateless aggregate signature schemes [9,29].
– In the individual-verify algorithm, the Central Hospital needs three scalar multiplications, three point additions on the elliptic curve, and two general hash operations to verify DAU i 's individual signature. The computation cost of our scheme for individual verification is smaller than that of Gong et al.'s scheme [9], but slightly higher than that of Cui et al.'s scheme [29].
– As shown in Figure 3, in the aggregate-verify algorithm, the Central Hospital needs (2n + 1) scalar multiplications, (2n + 1) point additions on the elliptic curve, and 2n general hash operations to verify the aggregate signature, i.e., (2n + 1)t mecc + (2n + 1)t aecc + 2n·t h ≈ (0.8878n + 0.4438) ms. The computation cost of our scheme for aggregate verification is smaller than that of Gong et al.'s scheme [9], but slightly higher than that of Cui et al.'s scheme [29].
Table 6 shows the communication cost of our scheme and related certificateless aggregate signature schemes. In the proposed scheme, the aggregate signature length, like that of CAS-2 in [9], is a constant that does not increase with the number of individual signatures.
Table 6. Communication cost of certificateless aggregate signature schemes.
Cui et al. [29] — aggregate signature length (n + 1)|G| — grows with the number of signers: Yes
Our scheme — aggregate signature length |G| + |q| — grows with the number of signers: No
From Figure 4, we can see that the communication cost of the proposed scheme is obviously smaller than that of CAS-1 [9] and Cui et al.'s scheme [29], and slightly smaller than that of CAS-2 [9].
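As a sanity check on the cost expressions and signature lengths above, the following snippet evaluates the aggregate-verification estimate and the constant 60-byte aggregate signature length, using the figures from Tables 3, 4 and 6; the function names and sample values of n are our own.

```python
t_mecc, t_aecc, t_h = 0.4420, 0.0018, 0.0001    # ms, from Table 3

def agg_verify_ms(n):
    """(2n+1) scalar mults + (2n+1) point adds + 2n hash operations."""
    return (2*n + 1) * t_mecc + (2*n + 1) * t_aecc + 2*n * t_h

for n in (1, 10, 100):
    print(f"n={n:>3}: {agg_verify_ms(n):8.4f} ms  "
          f"(linear fit 0.8878n + 0.4438 = {0.8878*n + 0.4438:.4f} ms)")

G_bytes, q_bytes = 40, 20                       # |G|, |q| from Table 4
print("aggregate signature length:", G_bytes + q_bytes, "bytes (independent of n)")
print("Cui et al. [29] at n=100:", (100 + 1) * G_bytes, "bytes")
```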
The comparison of certificateless aggregate signatures based on blockchain
In this subsection, we compare the computation cost and communication cost of the proposed scheme with the two most recently proposed certificateless aggregate signature schemes based on blockchain [17,18]. As shown in Table 7 and Figure 5, in the individual sign and aggregate verify algorithms, the computation cost of the proposed scheme is lower than that of Gao et al.'s scheme [18] and close to that of Zhao et al.'s scheme [17]. In the individual verify algorithm, the computation cost of the proposed scheme is lower than that of Gao et al.'s scheme [18] but slightly higher than that of Zhao et al.'s scheme [17]. As shown in Table 8 and Figure 6, the aggregate signature length of the two most recently proposed blockchain-based certificateless aggregate signature schemes [17,18] is correlated with the number of individual signatures, whereas the aggregate signature length of our scheme is |G| + |q|, which is a constant and obviously lower than in the other two schemes [17,18]. That is to say, the storage required for the aggregate signature does not increase with the number of DAU i s in each transaction, which effectively improves the storage efficiency of each block. Table 8. Communication cost of schemes based on blockchain.
Conclusions
In this paper, a certificateless aggregate signature scheme based on blockchain is proposed, which can be used for the secure storage and sharing of medical data in MCPS. To improve performance, the trapdoor collision calculation of the trapdoor hash function is incorporated into the proposed scheme. The security analysis shows that the proposed scheme is existentially unforgeable against adaptive chosen-message attacks and resistant to replay and modification attacks. The proposed scheme provides message authentication and identity privacy protection, which satisfies the security requirements of MCPS. Compared with pairing-based schemes, the scheme proposed in this paper is based on ECC and has better computational efficiency, so its computation cost is lower. More importantly, the aggregate signature length of the proposed scheme is independent of the number of signers, which can effectively increase the number of transactions stored in each block. Therefore, the proposed scheme can alleviate the capacity limitation of blockchain and prevent spam attacks to a certain extent.
In future work, we will focus on lattice-based digital signature algorithms and combine them with blockchain to improve the security of blockchain. More importantly, we will apply our research in practice and obtain measurement results from a practical implementation.
Curative effect of anti-fibrosis Chinese patent medicines combined with ursodeoxycholic acid for primary biliary cholangitis: A systematic review and meta-analysis
Objective: To delineate the curative effect and safety of anti-fibrosis Chinese patent medicines (CPMs) combined with ursodeoxycholic acid (UDCA) for primary biliary cholangitis (PBC). Methods: A literature search was conducted using PubMed, Web of Science, Embase, Cochrane Library, the Wanfang database, the VIP database, the China Biology Medicine Database, and the Chinese National Knowledge Infrastructure from their inception until August 2022. Randomized controlled trials (RCTs) of the treatment of PBC with anti-fibrotic CPMs were collected. The eligibility of the publications was assessed using the Cochrane risk-of-bias tool. The evaluation indicators were the clinical efficacy rate, liver fibrosis, liver function, immune function, and symptom score. Meta-analysis and subgroup analysis were conducted to evaluate the effectiveness of anti-fibrosis CPMs. Risk ratios (RRs) with 95% confidence intervals were used to assess dichotomous variables, and mean differences with 95% confidence intervals were calculated for continuous variables. Results: Twenty-two RCTs including 1,725 patients were selected. The findings demonstrated that anti-fibrotic CPMs combined with UDCA improved the efficacy rate, liver function, liver fibrosis, immunological indicators, and clinical symptoms compared with UDCA alone (all p < 0.05). Conclusion: This study demonstrates that the combination of anti-fibrotic CPMs and UDCA can improve both clinical symptoms and outcomes. Nevertheless, more high-quality RCTs are needed to assess the effectiveness of anti-fibrosis CPMs for PBC.
Introduction
Primary biliary cholangitis (PBC), also called primary biliary cirrhosis, is a chronic autoimmune cholestatic liver disease whose pathogenesis has not been fully elucidated (European Association for the Study of the Liver, 2017). PBC frequently occurs in middle-aged women, and its clinical serological characteristics include positive anti-mitochondrial antibodies and elevated levels of alkaline phosphatase (ALP) or gamma-glutamyl transpeptidase (GGT) (You et al., 2022). The main pathological features of the liver include progressive, non-suppurative, and destructive intrahepatic cholangitis, leading to fibrosis and eventually cirrhosis (You et al., 2022; European Association for the Study of the Liver, 2017). PBC is mainly caused by genetic and environmental factors, with unknown pathogenesis and hidden clinical manifestations (You et al., 2022). Some patients with PBC have cirrhosis at the time of diagnosis, making anti-fibrotic therapy particularly important.
Ursodeoxycholic acid (UDCA) is an effective treatment for PBC (You et al., 2022), and its mechanisms of action include choleresis, cytoprotection, anti-inflammatory effects, and immune regulation. UDCA has greatly improved longevity in patients with PBC. However, 40% of patients with PBC still respond poorly to it (Cheung et al., 2016), and non-responders have a lower survival rate than the general population. Owing to rapid disease progression and poor long-term prognosis, patients who respond poorly to UDCA may urgently require alternative treatment methods; however, currently, no unified therapies exist.
Traditional Chinese medicine (TCM) has advantages in treating liver fibrosis, owing to its combination of ingredients and its multiple pathways and targets. Previous studies have shown that Fuzheng Huayu capsules (FZHY) (Liu et al., 2019), Fufang Biejia Ruangan tablets (FFBJRG) (Ji et al., 2022), and Anluo Huaxian pills (ALHX) (Lu et al., 2017) are the CPMs commonly used clinically for the anti-fibrosis treatment of the liver; these have been approved by the State Food and Drug Administration of China, with national medicine permission numbers Z20020074 (FZHY), Z19991011 (FFBJRG), and Z20010098 (ALHX). The fibrosis stage is important in the progression of PBC; if anti-fibrosis treatment is administered in time at this stage, progression to liver cancer or even liver failure may be averted (Prince et al., 2002). Additionally, a recent study demonstrated that the treatment of PBC with TCM exerts anti-fibrotic effects and helps improve patients' pruritus, fatigue, and response rate to UDCA (Chen et al., 2018). Meanwhile, several studies have revealed that FZHY, FFBJRG, and ALHX have unique advantages in improving biochemical indices, anti-fibrosis outcomes, and quality of life in patients with PBC (Chen et al., 2019; Jiang et al., 2019; Wang et al., 2020). Furthermore, a previous real-world cohort study (Chen et al., 2018) conducted by our research group found that TCM combined with UDCA increased the 1-year biochemical response rate of patients with PBC by 15.1% compared with that of UDCA alone (43.0% vs. 27.9%, p < 0.05). It is therefore of great significance to explore the clinical curative effects of CPMs combined with UDCA on PBC. Accordingly, a meta-analysis of randomized controlled trials (RCTs) was conducted to measure the efficacy of CPMs plus UDCA for treating PBC.
Standards for inclusion and exclusion of literature
The inclusion criteria were: 1) the studies reported RCTs; 2) the participants were diagnosed with PBC based on the consensus recommendations of the Asian Pacific Association for the Study of the Liver (APASL); 3) the treatment involved anti-fibrotic CPMs plus UDCA; and 4) at least one of the following outcome indices was used: clinical efficacy rate, ALP, GGT, alanine aminotransferase (ALT), aspartate aminotransferase (AST), hyaluronic acid (HA), laminin (LN), collagen type IV (IV-C), type III procollagen (PC-III), immunoglobulin M (IgM), immunoglobulin G (IgG), and clinical symptoms. The primary outcome was the clinical efficacy rate, whereas the secondary outcomes were liver function, hepatic fibrosis, immunological indicators, and clinical symptoms.
The exclusion criteria were: 1) the experimental group used none of the three aforementioned anti-fibrotic CPMs or other TCMs; 2) duplicate studies; 3) studies with incomplete research data; and 4) animal experiments, conferencing articles, reviews, non-RCTs, and other unrelated studies.
Data acquisition and quality evaluation
The literature was independently screened by two researchers (BI and SHI) in terms of the inclusion and exclusion standards. Data acquisition consisted of 1) general information: title, first author, and publication year; 2) sex, age, sample size, intervention measures, and treatment course; and 3) observed outcome indicators. Literature eligibility was estimated using the Cochrane collaboration tool, including incomplete outcome data, selective reporting, allocation concealment, random-sequence generation, blinding of outcome assessment, blinding of participants and personnel, and other biases. Based on these standards, the literature eligibility was categorized into three levels of risk of bias: high, unclear, and low.
Statistical methods
The statistical analysis was performed using RevMan 5.4 software. According to the type of outcome, continuous data are depicted as mean difference (MD) or standardized mean difference (SMD), while categorical data are presented as risk ratios (RRs); all are expressed with a 95% confidence interval (CI). Furthermore, the χ2 and I2 tests were used for heterogeneity analysis.
FIGURE 1. Flow diagram of the literature screening and selection process.
FIGURE 2. Risk of bias graph.
The fixed-effects model was used for analysis if p ≥ 0.1 and I 2 ≤ 50% in the subgroup or overall. Conversely, the random-effects model was employed if p < 0.1 and I 2 > 50%. Additionally, we searched for sources of heterogeneity in the results and employed a sensitivity analysis to affirm whether the results were stable. Moreover, subgroup analyses were performed based on the use of three different CPMs. A p-value less than 0.05 was considered statistically significant. Publication bias was analyzed using funnel plots.
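For readers unfamiliar with the model-selection rule above, the following Python sketch shows standard inverse-variance fixed-effect pooling together with Cochran's Q and the I2 statistic; the per-study mean differences and standard errors are invented for illustration and are not data from the included trials.

```python
import math

# Hypothetical per-study mean differences and standard errors (not the paper's data)
md = [-30.1, -25.4, -33.0, -27.8]
se = [4.2, 5.1, 6.0, 4.8]

w = [1 / s**2 for s in se]                      # inverse-variance weights
pooled = sum(wi * mi for wi, mi in zip(w, md)) / sum(w)
se_pooled = math.sqrt(1 / sum(w))
ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

# Cochran's Q and I^2 = max(0, (Q - df) / Q) * 100%
Q = sum(wi * (mi - pooled) ** 2 for wi, mi in zip(w, md))
df = len(md) - 1
I2 = max(0.0, (Q - df) / Q) * 100

print(f"fixed-effect MD = {pooled:.2f} (95% CI {ci[0]:.2f}, {ci[1]:.2f})")
print(f"Q = {Q:.2f}, I^2 = {I2:.1f}%  ->  "
      f"{'random' if I2 > 50 else 'fixed'}-effects model per the rule above")
```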
Study characteristics
All 22 included studies were RCTs conducted in China and involved 1,725 patients with PBC. All studies were classified into an experimental group and a control group to compare the efficacies of CMPs combined with UDCA and UDCA monotherapy. The basic traits of each study are summarized in Table 2.
CPMs drug composition
These studies used three anti-fibrosis CPMs, FZHY, FFBJRG, and ALHX, and listed the TCMs used. Dongchongxiacao (Cordyceps sinensis (Berk.) Sacc.) is an ingredient in both FZHY and FFBJRG.
Outcome index
Clinical efficacy rate
A total of 13 clinical trials described the clinical efficacy rate as the main outcome, which was classified as markedly effective, effective, and ineffective grades. A random-effects model was implemented for analysis regarding heterogeneity testing (p = 0.0009, I 2 = 64%), and anti-fibrosis CPMs plus UDCA improved the clinical efficacy rate compared with UDCA alone (RR = 0.11, 95% CI: 0.06, 0.16; p < 0.00001) ( Figure 4A). Subgroup analysis showed that the clinical efficacy rate of FFBJRG was better than that of the other two CPMs and had no heterogeneity (p = 0.96, I 2 = 0%) ( Figure 4B).
Alkaline phosphatase
We found 10 trials that reported the effects of anti-fibrotic CPMs plus UDCA on ALP levels. A random-effects model was implemented based on the p-value and I 2 value. The subgroup analyses showed that anti-fibrotic CPMs plus UDCA were superior to UDCA alone in terms of serum ALP levels (MD = −28.83, 95% CI: −36.57, −21.10; p < 0.00001) (Figure 5A).
Immunoglobulin M and immunoglobulin G
The IgG data were processed using a fixed-effects model based on the heterogeneity test (p = 0.96, I 2 = 0%), and a random-effects model was used for the IgM data with high heterogeneity (p = 0.03, I 2 = 63%). The MD (95% CI) of IgG and IgM were −2.86 (−3.63, −2.08) and −1.28 (−1.62, −0.94), respectively. Compared with UDCA alone, the additional use of anti-fibrosis CPMs was more effective in reducing IgG and IgM levels ( Figures 7A, B).
Symptom score
Three trials reported the symptom score change as the main outcome. The fixed-effects model was adopted because of the low heterogeneity of the data.
Adverse events
Adverse events were described in seven studies. Three articles reported a total of nine patients who developed mild diarrhea and nausea after taking CPMs; however, these symptoms were not severe and did not require treatment. Four studies reported no adverse reactions.
Publication bias
An inverted funnel plot analysis of the clinical efficacy rate revealed an asymmetric distribution of the publications, suggesting possible publication bias (Figure 9).
Discussion
PBC is the most common autoimmune cirrhotic hepatic disease, occurring in all ethnic groups worldwide (Lv et al., 2020). The prognosis of patients with PBC mostly depends on the degree of liver fibrosis and its complications (Lammers et al., 2015). Patients with PBC tend to seek additional pharmacological treatments because UDCA is not uniformly effective (Lammers et al., 2015). In recent years, TCM has been widely studied and discussed as a complementary therapy. Many studies have shown that TCM plus UDCA has diverse advantages in relieving the clinical symptoms and improving the prognosis of PBC.
This meta-analysis validated the advantages of FZHY, FFBJRG, and ALHX combined with UDCA in the treatment of PBC. Compared with UDCA treatment alone, anti-fibrotic CPMs plus UDCA improved efficacy rates. Furthermore, liver function tests are widely used in clinical practice as indicators of the degree of liver damage. ALP, ALT, AST, and GGT levels decreased after combined treatment with anti-fibrotic CPMs and UDCA. LN, IV-C, PC-III, and HA are indicators for the detection of liver fibrosis, and the addition of anti-fibrosis CPMs to UDCA resulted in decreased levels of these compared with UDCA treatment alone. Moreover, immunological indicators (IgM and IgG) and clinical symptoms also notably improved with the combined treatment of anti-fibrotic CPMs and UDCA. In conclusion, anti-fibrotic CPMs combined with UDCA in the treatment of PBC effectively relieved various clinical indicators. These results provide hope for the treatment and prevention of liver fibrosis and cirrhosis in the future.
The pathobiology of PBC is characterized by inflammation, bile duct damage, and fibrosis (You et al., 2022), of which fibrosis appears in stage Ⅱ. It is believed that TCM is hepatoprotective and anti-inflammatory and suppresses the activation of hepatic stellate cells, which is advantageous in the treatment of liver fibrosis. Additionally, TCM may have multilevel, multi-pathway, and multi-target pharmacological actions on the comprehensive pathogenesis of PBC. For example, P. notoginseng is an ingredient in FFBJRG and ALHX, and P. notoginseng saponins are its main active constituent, which play an immunomodulatory role by reducing the levels of pro-inflammatory cytokines (Jiang et al., 2013). Furthermore, Ophiacordyceps sinensis, as a duplicate herb in FZHY and FFBJRG, can attenuate liver inflammation and fibrosis by regulating the expression of the TGF-β/MAPK pathway (Fu et al., 2021). Moreover, a mechanistic study has revealed that FZHY can decrease the expression levels of α-SMA, CTGF, TIMP-1, TGF-β1, and Smads, thereby reducing hepatic apoptosis, acute liver injury, and liver fibrosis (Cheng et al., 2013;Xie et al., 2013). An animal experiment reported that FFBJRG ameliorates hepatic disease by reducing the serum collagen levels of LN, HA, and IV-C and downregulating TGF-β-Smad pathway fibroblast signal transduction (Yang et al., 2013). The possible mechanism of ALHX for the inhibition of hepatic fibrosis is related to enhancing MMP2 activity in liver tissue and promoting extracellular matrix degradation by hepatoprotective enzymes (Tan et al., 2010). In brief, certain evidence support the anti-inflammatory and anti-fibrosis effects of FZHY, FFBJRG, and ALHX as the appropriate CPMs for treating PBC.
FIGURE 8. Forest plot of the meta-analysis of the symptom score.
FIGURE 9. Funnel plot of the total effective rate.
However, this meta-analysis had some limitations. First, the 22 included studies had small sample sizes, all were conducted in China, and only a small number of studies reported several of the outcome indicators. Second, most publications only mentioned random assignment, and only one-third of the studies described a specific randomization method, such as a random number table; therefore, the findings need to be further evaluated using high-quality trials. Third, different studies had different experimental periods, ranging between 12 and 48 weeks, which may be a source of heterogeneity. Fourth, although anti-fibrosis CPMs were always used in the experimental group, there were three different types (FZHY, FFBJRG, and ALHX), which could also be a source of heterogeneity. Finally, because half of the studies did not mention adverse events, the safety of CPMs as an anti-fibrosis therapy for PBC should be further evaluated, and caution is needed when drawing conclusions.
Conclusion
Our research shows that the combination of anti-fibrosis CPMs and UDCA is more effective than UDCA alone in treating PBC, improving the clinical efficacy rate, liver fibrosis, liver function, immune function, and symptom score. This systematic review and meta-analysis provides reliable clinical evidence for PBC treatment. Anti-fibrotic CPMs are a promising therapeutic approach to supplement the conventional treatment of PBC. However, further evidence from high-quality, multi-center studies with larger samples is warranted to confirm the curative effect of anti-fibrosis CPMs during follow-up periods.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Materials, further inquiries can be directed to the corresponding author.
Challenges of sanitation in developing countries - Evidence from a study of fourteen towns, Ethiopia
Rapid urbanization and population growth over the past few decades have been worsening the water supply and sanitation problems in Ethiopia, putting the country's current water supply deficit at a staggering 41%. Using Ethiopia as a case of rapidly growing countries in the Global South struggling with water supply and sanitation management, the objective of this study was to examine the challenges of sanitation in Ethiopia by selecting 14 towns located under different climatic conditions and administrative regions with diversified culture, ethnicity, and religion. Data from these towns were collected through a household survey, Focus Group Discussions (FGDs), Key Informant Interviews (KIIs) and site visits. The field observation was conducted with representatives from the municipalities who have knowledge of the existing sanitation and associated problems. Analysis of the collected data shows that poor water supply, inadequate toilet facilities, poor toilet-emptying practices, poor community perceptions of sanitation management and inadequate emptying services were the major challenges associated with sanitation. Moreover, the absence of wastewater dumping sites, the lack of integration among the different components of sanitation, insufficient collaboration among potential stakeholders and the gap between the existing population and sanitation services were the other key challenges, exacerbated by inadequate financial resources. Across the 14 studied towns, the average water deficit was found to be 35%, and on average 17% of households had no toilet facility. Only about 20% of households have flush toilets, and about 5% practiced open defecation. About 42% of the households use vacuum trucks for emptying wastewater, while about 37% of the households dump wastewater outside their premises. Among the 14 studied towns, only four have their own vacuum trucks, and no town possesses a wastewater dumping site. The different components of sanitation were managed separately without integration. Moreover, the collaboration among the potential stakeholders of sanitation management was found to be poor and fragmented. Sanitation services have also not developed along with population growth, as the finance allocated to sanitation management is much lower than for other municipal services. Thus, sanitation in the studied towns is poor, although progress has been made compared with previous decades. To improve the sanitation conditions in these towns, the water supply should be improved together with raising the awareness of the local community. The present study recommends further studies on the feasibility of sustainable sanitation and a country-wide comprehensive study on water supply, sanitation and open defecation in Ethiopia in particular and in developing countries as a whole.
Introduction
Access to improved water supply and sanitation is one of the basic needs and rights of every person. The health of the people and dignified life is ensured through access to improved water supply and basic sanitation. Improved water supply together with proper sanitation increases the health, social, and economic well-being of the people [1]. Ensuring access to improved water supply and basic sanitation services is the first step in eradicating poverty, especially in developing countries [2].
Although urban areas are recognized as centers of development, sanitation is one of the most critical problems facing them across the globe. Cities contribute up to 55% of gross national product in low-income countries, 73% in middle-income countries and 85% in high-income countries [3]. Conversely, it is estimated that 668 million people globally lack access to improved water supplies and 2.4 billion people still live without access to improved sanitation [4]. There are significant disparities across regions, between urban and rural areas, and between the rich and the poor [5,6]. The disparities could be attributed to differences in economic growth, infrastructure development, awareness, housing investment, government, good governance and non-governmental organization interventions. Progress among the poorest is the slowest [7]. Furthermore, progress witnessed during the United Nations Millennium Development Goals (MDGs) period disproportionately benefited the rich instead of the poor in most countries [8]. The MDGs were aimed at reducing by half the proportion of people without access to safe drinking water and basic sanitation by the end of 2015; however, the water and sanitation targets proposed in the MDGs were missed [8]. Lack of access to improved drinking water is still a serious problem in developing countries, where an estimated 675 million people have no access to improved drinking water [4]. Sub-Saharan Africa has the least developed sanitation infrastructure compared with other developing regions: compared to the global average of 36% without access to improved sanitation, 70% of the people in Sub-Saharan Africa use shared or poor-quality sanitation facilities [9,10]. This implies that most of the households in Sub-Saharan Africa use "unprotected" and/or "non-networked" water supply sources [11,12].
To bridge the gaps of access to improved sanitation, significant financial resources, sustainable technological solutions that fit to the context of each local area and political determination are required.
Ethiopia, a sub-Saharan developing country, is the second most populous country in Africa, with a population that increased from 40 million in 1984 [13] to more than 110 million in 2020 [14]. Furthermore, a study conducted by [15] showed that Ethiopia is exhibiting a high annual rate of urbanization (5.4%). The Ethiopian urban population has more than doubled since 1994, rising from 7.3 million to 18 million in 2022 [14], with an annual growth rate of 6.82% between 2001 and 2019 [16], which is higher than the Sub-Saharan African average (4.07%) [17]. However, water supply and sanitation provision is not keeping up with the population growth. For instance, 39.74% of households in Ethiopia had limited access to drinking water services [18]. In contrast, the report in [12] revealed that water supply and sanitation facilities have been showing an increasing trend in Ethiopia, even though the country still has the lowest water supply (42%) and sanitation coverage (28%) in sub-Saharan Africa. A study conducted by [19] showed that 52.1% of the Ethiopian population used unimproved sanitation facilities, while 36% practiced open defecation.
Sanitation in Ethiopian urban areas is guided by the Sustainable Development Goals and the national sanitation management policies, but these are implemented only at an insignificant level alongside a few traditional practices. This limited practice is the reason why a significant proportion of urban dwellers practice open defecation [20,21,22]. The objective of this study was therefore to investigate the challenges of sanitation in fourteen towns in Ethiopia located at different geographical locations with diversified culture, ethnicity, religion and socio-economic conditions. The study surveyed fourteen towns so as to understand the state of water supply and sanitation, which together reflect the water supply and sanitation situation of the whole country. These towns were selected for a number of reasons, among which rapid population growth and urbanization is one. An important challenge of this rapidly growing population and urbanization is sanitation management. The rate of population growth in these fourteen studied towns for the period between 2007 and 2020 was 47%, with an average annual population growth rate of 4.29% (CSA, 2016). However, the provision of water supply and sanitation facilities was far below the annual population growth.
The biggest challenges to proper sanitation in these towns center on inadequate water supply, finance, aging infrastructure, population growth, urbanization, and climate change [23]. Moreover, the inability to link sanitation with related issues (e.g. solid waste and stormwater management), poor public perception, inadequate consideration of social, cultural, political and economic factors in sanitation projects, the omission of potential actors, the assumption that a single technology fits all states of affairs, and the absence of a multi-step process of sanitation from waste generation to disposal are the other key challenges. These result in overcrowding, slums, and squatter settlements in different parts of urban areas [24].
Using Ethiopia as a case representative of rapidly developing countries in the Global South struggling with water supply and sanitation management, the goal of this study was to examine the challenges of sanitation, focusing on fourteen urban areas located under different climatic conditions. For fair representation, the study focused on four settlement categories: slums, private residential houses, condominium houses and informal settlements, combinations to which little or no attention was given by previous studies [18,20,22,25]. This created opportunities to reach the most vulnerable and marginalized groups of the local community, such as poorer households of variable socio-economic status, persons with disabilities, elders and women.
The results of this study will inform decision making on water supply and sanitation in developing countries in general and in Ethiopia in particular.
This paper is structured into six main sections: Introduction, Materials and Methods, Results, Discussion, Conclusions and References.
Materials and methods
Study area
The study was conducted in fourteen towns in Ethiopia (Fig. 1). The towns are fairly evenly distributed across Ethiopia within various climatic zones and can therefore be considered representative of other Ethiopian towns. The studied towns have populations greater than 50,000, which classifies them as medium-level urban areas in Ethiopia (Table 1).
Data collection methods
This study employed mixed qualitative and quantitative methods. The data were gathered through a household survey, key informant interviews, focus group discussions and personal observation from June to September 2020. Before the actual data collection, structured questionnaires were prepared and tested in a pilot town.
Household survey
Data related to water supply; wastewater collection, transportation and disposal mechanisms; sanitation management technologies; the impacts of poor sanitation on households; household perceptions of sanitation management; coping strategies for managing sanitation; and challenges of sanitation management were collected through the household survey. For the fourteen studied towns, the number of representative households was computed using Cochran's formula. The formula was adjusted to the number of households in each town at a response rate of 90%, as all questionnaires were administered by trained data collectors. First, a town-level representative household sample size (n) was determined using Eq. (1).
After the value of n was obtained, it was adjusted to obtain the representative household sample size, n_a, using Eq. (2).
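The equations themselves did not survive extraction; the following is a plausible reconstruction assuming the standard form of Cochran's formula with a finite-population correction, where z (the standard normal value for the chosen confidence level), p (the estimated proportion) and e (the margin of error) are assumed parameters not stated in the text, N is the total number of households in a town, and r = 0.9 is the response rate:

n = \frac{z^{2}\, p\,(1 - p)}{e^{2}} \quad (1)

n_a = \frac{n / r}{1 + (n - 1)/N} \quad (2)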
where n_a is the sample of households adjusted for the response rate and the total number of households, and n is the sample of households required at the town level. The calculated household sample sizes surveyed are presented in Table 2. Moreover, for fair representation and to capture the various contexts of the towns, the household survey was conducted across four settlement categories: slums¹, private residential houses, condominium houses² and informal settlements³. The settlement categories were selected in discussion with the municipalities, and to ensure the inclusion of marginalized groups and all settlement patterns in each studied town.
¹ Slums refer to unplanned settlements with limited or no access to local roads and substandard living area per person; numerous scanty houses are constructed one against the other without borders between residences, and basic social services are generally lacking.
² Condominium houses refer to high-rise residential apartments constructed by the Ethiopian government and transferred to the local community through a lottery system; each residence has its own toilet facility, in-house piped water and standard rooms.
³ Informal settlements refer to settlements without title deeds, and hence without access to legal water and electricity supply.
Focus Group Discussion (FGD)
In addition to the household survey, FGDs were performed with small groups of people (as suggested by [26]), comprising two individuals from among influential elders, women's representatives, youth representatives, health extension workers, environmental experts, water users' forum representatives from the local community, local administration managers, local development representatives, school officers, water supply and sewerage enterprise experts, and sanitation and green development experts. The discussions were moderated by the researcher. Interview questions were presented to the FGD participants to obtain a detailed set of data about the perceptions, thoughts, feelings and impressions of the local community and stakeholders in their own words [27]. In addition, questions were posed to the FGD participants to investigate the local community's understanding [28] of sanitation management and its associated challenges, and to give marginalized segments of society (e.g. elders, women, youths, extension workers) the opportunity to express their feelings about sanitation and its challenges.
Key Informant Interview (KII)
A face-to-face KII was performed with the main stakeholders in each town that are directly or indirectly involved in sanitation management. These included the town's water supply and sewerage enterprise, the environment department, the sanitation management and greenery development office, the offices of health, education, and finance and development, the department of infrastructure, the department of urban planning and the city administration (or municipality). The KIIs used structured questionnaires prepared ahead of time and were designed to gather data on sanitation management and its challenges; wastewater collection, transportation and disposal; water supply; and the integration between the different components of sanitation management (e.g. water supply, wastewater, solid waste and stormwater). Furthermore, the environmental impacts of poor sanitation management practices, emptying services, collaboration among the different institutions involved in sanitation management, the availability of an integrated sanitation master plan and the allocation of adequate finance for sanitation management were raised during the KIIs. The KIIs drew on a limited number of well-connected and informed experts in order to understand their perceptions and beliefs on sanitation management issues. Moreover, the KIIs were planned to acquire data from experts with diverse backgrounds and opinions; to allow in-depth and probing questions; to discuss sensitive issues (e.g. major sanitation management challenges, disposal sites, collaboration among stakeholders); and to create a comfortable environment in which the experts could have a frank and open in-depth discussion.
Field observation
Formal observations were made by the researcher with the help of representatives from each municipality who were knowledgeable about the different sanitation components, the location of sanitation management facilities and the problematic areas in each study town. The sites visited included representative sanitation problem areas, public toilets, communal toilets, liquid waste dumping or disposal sites, schools, health facilities and drains.
Data quality assurance
To maintain the quality of the household data, the survey was conducted by nine first-degree graduates in civil, water supply, water resources, and hydraulics engineering. A half-day intensive training was given to the field survey participants. They collected the data under close supervision by the researcher, and the data were entered into a computer daily. The field survey participants were residents of the respective towns who were well-informed about the local language, traditions, religion and residents' way of life in each town.
Table 3 Water supply data and supply deficit of the fourteen surveyed towns.
Results
The major sanitation challenges identified in the fourteen studied towns are presented in the following sections.
Water supply challenges
The annual water supply data, a key component of sanitation, collected from the corresponding water utilities of the 14 studied towns are presented in Table 3. The data indicate that the annual water supply of the studied towns, as a percentage of annual water demand, ranges between 46% and 85%. Correspondingly, the annual water supply deficit ranges between 15% and 64%.
Furthermore, the water supply sources of households, both under normal circumstances and during water supply interruptions, are shown in Table 4. The data reveal that, of the total surveyed households in the fourteen studied towns, those obtaining their water from protected sources (e.g. protected springs) range from 1% to 22%, and those relying on unprotected sources (e.g. unprotected springs, rivers) range from 1% to 14%, a situation that calls for corrective intervention.
Poor and inadequate toilet facilities
I found that, of the total surveyed households in the fourteen studied towns, the majority depend on dry on-site sanitation systems (66%-94%), while the remaining 6%-34% depend on on-site water-based sanitation systems (i.e. flush toilets) (Table 4). Detailed results on the toilet facilities in these towns, including the various toilet categories and the proportion using each, are presented in Table 5.
As shown in Table 5, the proportion of surveyed households in the fourteen towns possessing some form of toilet facility, whether a flush system, an improved facility (with a concrete slab and ventilation system) or an unimproved facility (without a concrete slab and ventilation system), ranges between 72% and 94%. Correspondingly, households without toilet facilities range between 8% and 25.6%. Classified by facility type, the same household survey revealed that the proportion of households using flush toilets ranges between 2% and 34%, improved toilet facilities between 28% and 57%, and unimproved toilet facilities between 26% and 58%.
Similarly, I surveyed where those households without toilet facilities in the fourteen studied towns commonly defecate, and found that 3%-17% depended on public toilets, 2%-6% used communal toilets, 6%-30% used neighbors' (or relatives') toilets, and 1%-9% practiced open defecation. The proportion of households unwilling to disclose where they defecate, for private reasons, ranged from 50% to 80%.
Poor toilet facility emptying practices
I also examined what households do when their toilet facilities fill up. The results of the household survey are shown in Table 6.
As shown in Table 6, less than half of the surveyed households (45% on average) empty their toilet facility when it fills up, a further 28% (on average) construct a new toilet facility, and the remaining 23% (on average) were not willing to respond.
Poor community perceptions
The findings of this study revealed that the local community's perception of wastewater management is poor, as verified by the results of the household (HH) survey presented in Table 7. According to these results, the proportion of surveyed households that dumped grey wastewater within their own compound, whether on open spaces, into soak pits or into septic tanks, ranges between 34% and 77%. A further 8.7% to 58.4% dumped it outside their compound, either into stormwater drains or elsewhere, and 5.3% to 39.3% of households dumped it in undefined places. From this, it is evident that the local community's disposition to manage grey wastewater either on-site or off-site is poor.
Table 6 Practices of households when their toilet facility gets filled up.
Inadequate or poor emptying services
The findings of this study show that none of the fourteen studied towns possesses a sewerage system. Wastewater from septic tanks is disposed of at open dumping sites using vacuum trucks. Households in these towns use either municipal or private vacuum trucks (Table 8) from within their town or leased from neighboring towns. Table 8 shows that only 57% of the 14 surveyed towns possess a municipal vacuum truck; the remaining 43% depend on private vacuum trucks.
Absence of wastewater dumping site
The results of the present study reveal that none of the fourteen towns has a properly designed wastewater dumping site. All of these towns dump their wastewater beside the solid waste dumping site: a pit is dug and the wastewater dumped there, and a new pit is dug when the previous one fills up. In almost all of the studied towns the pits overflow into the downstream environment and contaminate water and land resources. In addition, some vacuum trucks dump wastewater illegally elsewhere, such as on open spaces and agricultural fields. Some towns use simple donkey carts (Fig. 2) to transport wastewater owing to inadequate numbers of, or the absence of, vacuum trucks. Wastewater management in the fourteen towns was found to be critically poor; as evidence, pictures captured during data collection are shown in Fig. 2.
Absence of integration among the different components of sanitation
The present study found that the main components of sanitation, namely wastewater, solid waste and stormwater, were not managed in an integrated way. These core elements of sanitation are managed separately by fragmented institutions. Moreover, the towns' master plans did not include plans (or reserved spaces) for solid waste and wastewater management, although a separate stormwater management plan exists. Overall, none of the fourteen towns had an integrated sanitation master plan bringing together wastewater, solid waste and stormwater management. This was evidenced by solid waste and wastewater being dumped excessively into stormwater drains and elsewhere, as shown in Fig. 3.
Poor or inadequate collaboration among potential stakeholders
The results of the present study reveal that the various potential stakeholders working on sanitation collaborate poorly or inadequately. Water is managed by the towns' water utilities, solid waste by the sanitation and greenery development unit, and stormwater by the department of infrastructure. The master plan department prepares the towns' master plans without the participation of the institutions managing solid waste, wastewater and stormwater. These stakeholders, in many cases, manage their activities separately through a fragmented approach, even though all of them sit under the same municipality. It was also found that the towns' health offices work on some elements of sanitation, but here too collaboration is poor or nonexistent.
Gap between the existing population and sanitation facilities
The existing sanitation facilities (e.g. public toilets, water supply, dumping sites) are inadequate and cannot satisfy the present demand of the local community, because population growth has outpaced the provision of sanitation services. In all fourteen studied towns, population growth exceeds the existing sanitation infrastructure, resulting in a gap between the two. Most of the studied towns designed their water supply and sanitation infrastructure based on the normally forecasted population growth rate for a planning period of ten to fifteen years; however, midway through the design period, the actual population had already surpassed the design population by more than 25% due to the cumulative effects of migration and natural growth.
Inadequate financial resources
I found that the finance allocated for sanitation management in the fourteen studied towns was inadequate. The surveyed towns also reported that there is no clear budget line for each sanitation management activity; rather, the budget is merged with other municipal services, although the water utility has its own budget line owing to its extensive activities. The present study also revealed that a capital budget from external sources is allocated for the construction of stormwater drains; nonetheless, the focus is more on constructing drains than on managing stormwater. Conventional drains are constructed every year because of the availability of funds from donors.
Discussion
Poor water supply
The average water supply to households in the studied towns (85%) is consistent with the study [20] conducted in ten Ethiopian urban areas, although the same study reported water supply levels in Lalibela and Wolkite of 66% and 50% respectively, which differ from the findings of the present study. This shows that Ethiopia failed to meet the MDG target, in agreement with the 2015 MDG assessment report that improved drinking water coverage in Sub-Saharan Africa is 42% [8]. Water is basic to sanitation [29]. The poor water supply in the majority of the studied towns complicated the sanitation situation: some landlords closed their toilets for long periods due to water shortage. Studies conducted in Sudan [30] and elsewhere in Sub-Saharan Africa [31,32] are consistent with this finding. Consequently, toilet users may have been forced into open defecation, with a greater probability of pathogens being picked up by stormwater and entering water resources. Moreover, residents fetching water from unprotected sources, both during water interruptions and under normal circumstances, may be infected by disease-causing organisms [33]. This situation is more critical in high-rise condominiums, where the toilet facilities depend entirely on the municipal water supply system and the cumulative impact of the densely housed residents of a given block is worse than in single-residence settlements. Consistent with the present study, [24] reported that the lack of a constant water supply in many urban areas of Sub-Saharan Africa is a major sanitation challenge, and the findings of [34] agree with this study that municipalities have the responsibility to supply water to citizens.
Poor and inadequate toilet facilities
The majority of households in the studied towns depend on dry on-site sanitation systems, and some (5% on average) practiced open defecation (OD). Broadly similar results were reported by [20] in a study of ten Ethiopian cities: Wolaita Sodo (5%), Wolkite (5%), Nekemt (5%), Kombolcha (5%), Adama (7%), Gondar (7%), Mekelle (7%) and Sebeta (8%), with the exceptions of Batu (10%) and Lalibela (20%). However, the average OD in the towns studied here is far lower than in Kwale (52%) and Migori (33%) counties, Kenya [35]. A study conducted in Sudan by [36] reported a national OD rate of 26%, and a household survey [37] in Tanzania found 40% OD. More generally, various studies [7,32,38,39] have reported that open defecation is a common challenge in Sub-Saharan Africa. Moreover, the average OD in the studied towns is lower than the Ethiopian national average (26.9%) [40], which may indicate that OD in Ethiopian urban areas has been declining over the past thirteen years. Consistent with this, [41] reported that Ethiopia's annual rate of OD reduction exceeds that of more developed and better-resourced countries in sub-Saharan Africa; the decline could be directly associated with the active involvement of paid health extension workers. Irrespective of the stated values, the existing OD practices in the studied towns imply a high possibility of contaminating the local environment, because pathogens have a higher potential to enter water resources and contaminate agricultural fields and playgrounds [42]. Consequently, children and community members depending on unprotected water sources are exposed to disease-causing organisms such as E. coli. The likelihood of fly infestation of feces dropped around pit holes is high, resulting in the spread of communicable diseases such as diarrhea [43,44] from person to person. Such situations have implications for residents' economic situation, as they may require frequent hospital visits for medication [37]. Moreover, the active workforce may be affected by diseases that reduce their effective working capacity and time, leading to deprivation. Consistent with the present study, [45] reported that inadequate sanitation facilities caused by inadequate water supply, waste disposal, sanitation or drainage are causes of poor health. Furthermore, such deprivations are bottlenecks to sustainable urban social development, as the earnings of the poor go to medication and care of the ill. In agreement with this, studies by [46] and [44] concluded that sustainable development is hardly possible where there is a high prevalence of unbearable illness and poverty, and that community health cannot be maintained without healthy environments and intact life-support systems.
Poor toilet facility emptying practices
Myers [47] underlined that fecal sludge management and toilet emptying are necessary for sustainable sanitation management. The lack of emptying trucks forced most dwellers to construct new toilet facilities, or to revert to OD when their toilet facilities filled up, resulting in land and water pollution [39,48,49]. The OD practices in the studied towns could thus produce a 'slip' back to OD, contrary to the Ethiopian declaration targeting an end to OD by 2025 [50] and to Sustainable Development Goal Target 6.2. Poor emptying and the construction of new toilet facilities may also cause communicable diseases and leave land contaminated for years. Studies by [51,52] and [44] support the present finding that open defecation is a cause of many communicable diseases [53]. Open defecation has a high probability of polluting nearby surface water resources [54], which may then be consumed by residents who depend on unprotected water sources and by downstream communities. Studies [44,51] have shown that open defecation causes communicable diseases such as cholera, diarrhea [55] and dysentery, which can arise from the discharge of E. coli and other coliforms into surface water bodies [21].
If the construction of new pits or toilet facilities continues, residents will exhaust their holdings and re-dig previously closed or abandoned pits, which could further complicate the issues of sanitation and land contamination. Conversely, abandoned pits can collapse, resulting in children, elders and other family members falling in and dying, consistent with studies conducted in Uganda [32] and across 26 African countries [44]. This has negative implications for a family's economy, whether for health care or other forms of medication. The findings of studies conducted in Ethiopia (Hunachew, 2016) and Kenya [37] agree with this study. Municipalities therefore have the responsibility to introduce locally based sanitation management technologies, consistent with studies conducted in Uganda [31] and Malawi [56]. Such technologies could span the chain from waste generation to disposal and replace habitual, unsustainable practices. Dijk [57] argued that sanitation management should focus on a multi-step process from waste generation to end use and must therefore be viewed within a value-chain framework, which agrees with the present study.
Poor or weak collaboration among stakeholders
Collaboration among the institutions in the studied towns remains below expectations owing to fragmented responsibilities, confusing institutional frameworks and a lack of coordination among multi-level, fragmented governance arrangements. Studies [10,58] conducted in Uganda, Rwanda, Tanzania and Burundi agree with these findings. It should be recognized that urban sanitation management poses extraordinary challenges that cannot be solved by individual stakeholders. System failure in these towns is due to a top-down approach that limits the involvement of stakeholders and the local community. Reaching agreement on what the sanitation challenges are and how to solve them in these towns remains a major difficulty, consistent with the study conducted by [23].
Another potential reason for system failure is a poor understanding of the institutional setup in which the urban sanitation system is managed and operated. The methods and techniques developed were not appropriate for the local contexts, in agreement with the study conducted by [10]. Moreover, social, political, cultural and economic factors were not taken into account, and no single institutional arrangement is best for all circumstances of sanitation management. The lessons learnt from these experiences emphasize the need to recognize institutional arrangements and to provide appropriate institutional development and capacity-building programs through well-reinforced stakeholder collaboration. Consistent with this, studies [10,59] conducted in eleven Sub-Saharan and Asian countries underlined the need for capacity building and institutional collaboration, and a study conducted in Ethiopia, Ghana and Rwanda by [25] highlighted the need for cross-sector coordination and communication.
Gap between the existing population and sanitation services
In the studied towns, population growth and urbanization were found to be the most important challenges for sanitation management. Consistent with this, the report by [3] revealed that the high rate of population growth in developing countries complicates sanitation management.
It was found that population growth and rapid urbanization have created a severe scarcity of water, resulting in poor sanitation with substantial impacts on the natural environment. This agrees with a study conducted by [60] in Sub-Saharan Africa showing that increases in household size and urban area decrease the likelihood of using improved water sources. Unless the towns meet their water demand from either groundwater or surface water sources, the existing state of sanitation will worsen further. Consistently, a study conducted in Sudan by [30] revealed that urban areas in developing countries already face massive backlogs in shelter, infrastructure and services, and are confronted with insufficient water supply, deteriorating sanitation and environmental pollution. The growing populations of the studied towns demand a significant volume of water to ensure sustainable sanitation, which could reduce the burden on ecosystems to provide more regular and cleaner supplies. Sustaining sanitation and achieving universal coverage in the studied towns therefore represents a major challenge for human settlement, development and management. To bridge the existing gaps, flexible and innovative solutions are needed to deal with unexpected and significant changes in water demand for drinking, sanitation and associated economic activities [61,62,63].
Inadequate finance for sanitation services
The absence of adequate finance for the sanitation sector holds back the development of sanitation infrastructure by the local community; consequently, a significant share of the local community lacks basic sanitation services. Studies [29,64,65] carried out in different parts of the world have underlined the need for finance for sustainable sanitation. Moreover, [50], in its integrated urban sanitation and hygiene strategy, accentuated the necessity of financing the sanitation sector to promote sanitation services. As evidenced in the studied towns, however, the local administrations showed little interest in financing sanitation services. The local community, meanwhile, struggled to meet its sanitation needs, and a significant proportion was unlikely to succeed. Local administrations therefore need to seek out potential financiers (e.g. the Federal Government, development partners/NGOs, banks, micro-finance institutions) to promote sustainable sanitation and reach the marginalized groups in slums and informal settlements [44,65,66; Apanga et al., 2020].
Conclusion
This study investigated the sanitation challenges in Ethiopia based on 14 representative towns selected across the country. The results showed that the average water supply deficit across the 14 towns is 35%, while 17% of households (HHs) have no access to toilet facilities. Only 20% of households have flush toilets, 5% of households practiced open defecation, and only 42% use vacuum trucks to empty their toilet facilities. Furthermore, nearly 37% of HHs dump wastewater (WW) outside their premises, only four towns have their own vacuum trucks, and no town possesses a WW dumping site. The foremost challenges identified in the studied towns are associated with (a) poor water supply, (b) poor and inadequate toilet facilities, (c) poor and inadequate toilet emptying practices and services, (d) poor community perceptions, and (e) the absence of WW dumping sites. These challenges are attributed mainly to the absence of integration among the different components of sanitation, fragmented governance arrangements, the inability to keep sanitation services up with the growing population owing to inadequate financial resources, and inadequate collaboration among the relevant institutions.
The observed sanitation challenges in the studied towns are principally associated with high population growth and rapid urbanization. A growing population together with urbanization is widening the gap in water supply and sanitation in both quantity and quality, particularly for dwellers in condominium apartments, slums and informal settlements. With a dynamically increasing population set against declining water sources, access to water and sanitation will decrease over time, and the associated water stress and health problems are highly likely, particularly among dwellers in slums and informal settlements. Moreover, with limited resources and finances, developing improved water supply and sanitation services at a rate consistent with the increasing population and urbanization represents a major challenge.
Moreover, the existing challenges arose from dependence on (a) short-term (reactive) measures instead of preventive measures, (b) campaigns instead of sustainable, long-term measures, (c) donor-driven sanitation instead of demand-driven systems, (d) a 'single' approach instead of multiple or context-based approaches, and (e) constructing sanitation facilities before ensuring behavioral change among users.
The researcher expects that a 'slip' back to OD may occur owing to rising costs of labor and construction materials. This would be most serious for marginalized groups such as the poor, women, children and the elderly. Moreover, the continuing unrest in different parts of the country may further complicate the sanitation challenges. However, working strongly on behavioral change and culture in discussion with the local community and influential persons will at least help to maintain the comparatively low level of OD relative to other countries in Sub-Saharan Africa. Consequently, 'slipping' back to OD may be controlled, the Ethiopian ambition to declare urban areas open-defecation-free by 2025 may be realized, and Sustainable Development Goal 6.2 may be achieved.
The author is of the opinion that this study will help policy makers and municipalities to develop sound sanitation management strategies and context-based sanitation management approaches, as the different settlement categories need different interventions; there is no simple, single solution to all urban sanitation challenges, particularly in developing countries. It is recommended that locally relevant, innovative sanitation solutions that put users first be implemented. Some of the limitations of this study are: (1) the absence of data on water quality testing of unprotected sources; (2) the lack of original data on the impacts of open defecation on the general environment and the associated epidemiological aspects that might be linked with unprotected water supply sources; (3) the possibility that FGD and KII respondents reported what they thought the interviewer wished to hear (courtesy bias); and (4) the inability to include the most up-to-date data because of logistical and budgetary constraints and security issues in some parts of the country. Thus, a further comprehensive country-wide study of water supply and sanitation, including the level of open defecation and its broader impacts, is recommended to guide urgent intervention.
Commentary: Complete Genome Sequence of 3-Chlorobenzoate-Degrading Bacterium Cupriavidus necator NH9 and Reclassification of the Strains of the Genera Cupriavidus and Ralstonia Based on Phylogenetic and Whole-Genome Sequence Analyses
Citation: Gan HM (2019) Commentary: Complete Genome Sequence of 3-Chlorobenzoate-Degrading Bacterium Cupriavidus necator NH9 and Reclassification of the Strains of the Genera Cupriavidus and Ralstonia Based on Phylogenetic and Whole-Genome Sequence Analyses. Front. Microbiol. 10:2011. doi: 10.3389/fmicb.2019.02011
A Commentary on
Complete Genome Sequence of 3-Chlorobenzoate-Degrading Bacterium Cupriavidus necator NH9 and Reclassification of the Strains of the Genera Cupriavidus and Ralstonia Based on Phylogenetic and Whole-Genome Sequence Analyses by Moriuchi, R., Dohra, H., Kanesaki, Y., and Ogawa, N. (2019). Front. Microbiol. 10:133. doi: 10.3389/fmicb.2019.00133 Moriuchi et al. reported a comprehensive reclassification of bacterial strains from the genera Cupriavidus and Ralstonia based on the percentage of conserved proteins (POCP), average nucleotide identity (ANI), multilocus sequence analysis and 16S rRNA gene sequences. In that study, conflicting results were repeatedly observed for the taxonomic classification of strain PBA, which was initially identified as Ralstonia sp. PBA based on its 16S rRNA gene sequence (Gan et al., 2011b; Moriuchi et al., 2019).
Strain PBA was isolated as a co-culture with Hydrogenophaga intermedia PBC from textile wastewater a decade ago. The co-culture could grow on 4-aminobenzenesulfonate (4-ABS), a recalcitrant dye intermediate (Wagner and Reid, 1931), as the sole nitrogen, carbon, and sulfur source to a relatively high cell density (Gan et al., 2011b). In this syntrophic relationship, strain PBA is the sole provider of p-aminobenzoate, an essential vitamin required for the growth of H. intermedia PBC, the main 4-ABS degrader (Gan et al., 2011a). In light of new genomic resources, the initial taxonomic assignment of strain PBA has previously been questioned by Kim and Gan (2017), given its closer phylogenetic affiliation to the genus Cupriavidus than to the genus Ralstonia. Unfortunately, both recent genome-based taxonomic classifications of strain PBA (Kim and Gan, 2017; Moriuchi et al., 2019) suffered from incomplete and biased taxon sampling (restricted mostly to members of the genera Ralstonia and Cupriavidus), which can result in the misinterpretation of evolutionary relationships (Heath et al., 2008). The taxonomic affiliation of strain PBA should instead be inferred from a comprehensive phylogenomic analysis that includes all genera with available genomes from the family Burkholderiaceae.
A total of 428 Burkholderiaceae (including strain PBA) and 15 non-Burkholderiaceae genome assemblies were obtained from the NCBI RefSeq database (accessed on 30th May 2019). The genomes were processed using two microbial phylogenomic analysis pipelines, GToTree v1.2.1 (Lee, 2019) and PhyloPhlAn v0.99 (Segata et al., 2013), which identify single-copy bacterial genes (GToTree: n = 203, Betaproteobacteria HMM set; PhyloPhlAn: n = 400) and produce a concatenated protein alignment. Maximum likelihood trees were constructed from the protein alignments using IQ-TREE v1.6.8 with 1,000 ultrafast bootstrap replicates (Nguyen et al., 2014). In both phylogenomic trees, the Ralstonia and Cupriavidus clusters received maximal support and are sister taxa to the exclusion of strain PBA (Figures 1A,B). The updated phylogenomic placement of strain PBA in light of extensive taxon sampling precludes its assignment to the genus Ralstonia or Cupriavidus and suggests that it is a member of a hitherto undescribed genus within the family Burkholderiaceae. Within the Genome Taxonomy Database (Parks et al., 2018), which infers a standardized bacterial taxonomy from conserved proteins present in 143,512 bacterial genomes (GTDB release R04-RS89), strain PBA was still assigned to its own genus (g__AKCV01) despite an even more extensive taxon sampling of 4,378 genomes from the family Burkholderiaceae (https://gtdb.ecogenomic.org/tree?r=g__AKCV01, accessed on 1st August 2019).
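For readers wishing to reproduce this style of analysis, a minimal sketch of the two-step workflow is given below, driven from Python via subprocess. The flag names follow the GToTree v1.x and IQ-TREE v1.6 documentation as I recall them, and the file names are assumptions; consult `GToTree -h` and `iqtree -h` before running.

import subprocess

# Step 1: identify single-copy genes and build a concatenated protein
# alignment with GToTree, using the Betaproteobacteria HMM set.
subprocess.run(
    ["GToTree",
     "-a", "ncbi_accessions.txt",    # hypothetical list of the 443 RefSeq accessions
     "-H", "Betaproteobacteria",     # single-copy gene HMM set (203 genes)
     "-j", "8",                      # parallel jobs
     "-o", "burkholderiaceae_gtt"],  # output directory
    check=True,
)

# Step 2: maximum-likelihood tree with 1,000 ultrafast bootstrap replicates.
subprocess.run(
    ["iqtree",
     "-s", "burkholderiaceae_gtt/Aligned_SCGs.faa",  # hypothetical alignment path
     "-m", "MFP",      # automatic substitution model selection
     "-bb", "1000",    # ultrafast bootstrap replicates
     "-nt", "AUTO"],   # auto-select thread count
    check=True,
)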
Given the concordance observed from these independent analyses, the taxonomic assignment of strain PBA has been updated from Ralstonia sp. PBA to Burkholderiaceae sp. PBA in the NCBI database (Bioproject: PRJNA78957; BioSample: SAMN02471424) (Gan et al., 2012) pending future genus description. To facilitate future strain description and comparison, strain PBA has been deposited in the German Collection of Microorganisms and Cell Cultures GmbH (DSMZ) under the accession number DSM 106616. Furthermore, the concatenated alignments, uncollapsed phylogenomic trees and genome information are also made available in the Zenodo database (http://doi.org/10.5281/zenodo.3258920).
AUTHOR CONTRIBUTIONS
HG performed data analysis and wrote the manuscript.
FUNDING
This research was supported by the Deakin Centre of Integrative Ecology.
Long‐term Outcome of Isobar TTL System for the Treatment of Lumbar Degenerative Disc Diseases
Objective The Isobar TTL dynamic fixation system has demonstrated favorable outcomes in the short‐term treatment of lumbar degenerative disc diseases (LDDs). However, there is a paucity of extensive research on the long‐term effects of this system on LDDs. This study aimed to evaluate the long‐term clinical and radiological outcomes of patients with LDDs who underwent treatment utilizing the Isobar TTL dynamic fixation system. Methods The study analyzed the outcomes of 40 patients with LDDs who underwent posterior lumbar decompression and received single‐segment Isobar TTL dynamic internal fixation at our hospital between June 2010 and December 2016. The evaluation of clinical therapeutic effect involved assessing postoperative pain levels using the visual analogue scale (VAS) and Oswestry disability index (ODI), both before surgery, 3 months after, and the final follow‐up. To determine the preservation of functional motion in dynamically stable segments, we measured the range of motion (ROM) and disc height of stabilized and adjacent segments preoperatively and during the final follow‐up. Additionally, we investigated the occurrence of adjacent segment degeneration (ASD). Results Forty patients were evaluated, with an average age of 44.65 years and an average follow‐up period of 79.37 months. Fourteen patients belonged to the spondylolisthesis group, while the remaining 26 were categorized under the stenosis or herniated disc group. The preoperative ROM of the stabilized segment exhibited a significant reduction from 8.15° ± 2.77° to 5.00° ± 1.82° at the final follow‐up (p < 0.001). In contrast, there was a slight elevation in the ROM of the adjacent segment during the final follow‐up, increasing from 7.68° ± 2.25° before surgery to 9.36° ± 1.98° (p < 0.001). The intervertebral space height (IH) in the stabilized segment exhibited a significant increase from 10.56 ± 1.99 mm before surgery to 11.39 ± 1.90 mm at the one‐week postoperative follow‐up (p < 0.001). Conversely, there was a notable decrease in the IH of the adjacent segment from 11.09 ± 1.82 mm preoperatively to 10.86 ± 1.79 mm at the one‐week follow‐up after surgery (p < 0.001). The incidence of ASD was 15% (6/40) after an average follow‐up period of 79.37 months, with a rate of 15.38% (4/26) in the stenosis or herniated disc group and 14.29% (2/14) in the spondylolisthesis group; however, no statistically significant difference was observed in the occurrence of ASD among these groups (p > 0.05). Conclusion The Isobar TTL dynamic fixation system is an effective treatment for LDDs, improving pain relief, quality of life (QoL) and maintaining stabilized segmental motion. It has demonstrated excellent long‐term clinical and radiographic results.
Introduction
Lumbar interbody fusion is widely acknowledged as the preferred approach for managing various lumbar degenerative diseases, such as disc degeneration and herniation, spinal stenosis, etc. 1,2 Fusion of the lumbar spine, however, may alter its inherent physiological structure, thereby diminishing physiological functionality. Consequently, it can significantly increase the load on neighboring intervertebral discs and facet joints, leading to degeneration of the adjacent segment. 3,4 Postoperative complications such as pseudoarthrosis, non-union and internal fixation failure are more common in some patients. 5,6 Even if the fusion procedure is successful, the patient's quality of life (QoL) may still be adversely affected by persistent discomfort in the lower back or leg, which can significantly impact overall health and well-being. The primary factor contributing to this issue is the excessive rigidity of static pedicle fixators. To address this concern, pedicle-based stabilization (PDS) devices have been developed and widely used. 7,8 They provide sufficient stability to restore regular segmental movements, prevent instability, and minimize degeneration in adjacent segments.
A long-term follow-up study (minimum follow-up of 72 months) of 38 patients by Zhang et al. 10 showed favorable long-term clinical and imaging results with Dynesys dynamic stabilized decompression. However, the Dynesys dynamic stabilization system has a limited range of applications and limits patient indications. Lee et al. 9 found that, in cases of late degeneration and severe instability, interbody fusion is still considered preferable to Dynesys dynamic stabilization. The Isobar TTL system, documented by the French researchers Lavaste and Perrin in 1993, is a stabilizing system featuring a semi-rigid structure. 23 The Isobar TTL system incorporates a built-in damper composed of a semi-rigid titanium-alloy rod, which effectively mitigates stiffness and restricts both axial and angular movements within the transition segments. 13-15 A finite element analysis by Chen et al. 16 found that the Isobar TTL system provided a maximum-allowable-displacement effect (beyond peak axial stiffness) that reduced stresses in the pedicle and at the facet joint compared with Dynesys. The available studies demonstrate positive short-term outcomes of Isobar TTL dynamic fixation in terms of pain alleviation, improvement in QoL, and preservation of lumbar range of motion (ROM) at stabilized levels, with clinical results considered satisfactory. 17 However, long-term studies are limited. Therefore, a retrospective study was conducted to: (i) evaluate the effectiveness of the Isobar TTL system for dynamic stabilization in individuals with lumbar degenerative diseases; and (ii) assess the long-term efficacy of the Isobar TTL system. In this report, we present the clinical results, which were favorable and demonstrated long-term efficacy.
Patient Selection
The present retrospective study included all patients who underwent surgical intervention for posterior lumbar decompression and Isobar TTL dynamic internal fixation between June 2010 and December 2016. The ethics committee of West China Hospital, Sichuan University, approved this study (No. 2022241).
The inclusion criteria for this study were as follows: (i) patients with a confirmed diagnosis of degenerative spondylolisthesis, lumbar disc herniation or lumbar stenosis based on imaging and physical examination; (ii) patients in whom conservative treatment, including nonsteroidal anti-inflammatory medications, physical therapy sessions and chiropractic manipulations, had failed for over 6 months before presentation; (iii) body mass index (BMI) within normal limits; and (iv) no history of spine surgery.
Exclusion criteria included: (i) patients with clear contraindications for surgery; (ii) patients diagnosed with metabolic bone disorders, tuberculosis, tumors or infections; (iii) patients with multi-level degenerative lumbar disease, late degeneration, severe instability or lumbar degenerative spondylolisthesis ≥ Meyerding grade II; (iv) patients with incomplete clinical follow-up information or a follow-up duration of less than 5 years after surgery; and (v) patients who exhibited adjacent-segment disc degeneration prior to surgery (Kellgren-Lawrence grade > 3 or Pfirrmann grade > 3). Ultimately, a total of 40 patients with comprehensive clinical data were enrolled.
Surgical Technique
The patient was placed prone, followed by standard disinfection and draping procedures. An incision was made through the skin, subcutaneous tissue, and dorsal fascia to expose the spinal components. The bilateral facet joints at both the upper and lower surgical levels were identified, exposed, and protected. Pedicle screws were then inserted and properly positioned. After a partial laminectomy and removal of the ligamentum flavum, gentle traction was applied to the dural sac and nerve root for nucleus pulposus extraction. Finally, the Isobar TTL rods were introduced and securely connected to the screws; pressure was applied and the screw nuts secured while the tail caps were removed. An intraoperative C-arm X-ray confirmed satisfactory positioning of the pedicle screws and rods. After verification of hemostasis, a drainage tube was inserted and the incision closed in standard fashion. The drainage tube was removed within 72 h post-surgery, and patients were subsequently instructed to wear a soft lumbar brace for 3 months.
Clinical Assessment
The visual analogue scale (VAS) and Oswestry disability index (ODI) were employed to assess low back pain, leg pain, and neurological status preoperatively, at 1 week and 3 months postoperatively, and at the final follow-up after the surgical procedure. Follow-up information for all patients was collected through outpatient visits or telephone follow-ups.
Radiographic Measurements
The radiographic parameters of segmental ROM and intervertebral space height (IH) in the stabilized segments and the upper adjacent segments were assessed using static and flexion/extension lateral X-rays. IH and ROM were measured preoperatively, at 1 week postoperatively, and at the final follow-up visit. IH is the average height of the anterior, middle, and posterior intervertebral spaces, while ROM is the difference in segmental angulation between flexion and extension X-rays.
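As a minimal sketch, the two measurements reduce to the following arithmetic; the function and variable names are illustrative, since the study measured these quantities manually on X-ray films rather than with a script.

def intervertebral_height(anterior_mm: float, middle_mm: float,
                          posterior_mm: float) -> float:
    """IH: mean of the anterior, middle and posterior intervertebral space heights."""
    return (anterior_mm + middle_mm + posterior_mm) / 3.0

def range_of_motion(flexion_angle_deg: float, extension_angle_deg: float) -> float:
    """ROM: difference in segmental angulation between flexion and extension films."""
    return abs(flexion_angle_deg - extension_angle_deg)

# Example: heights of 11.2, 10.8 and 10.0 mm give an IH of about 10.67 mm;
# a flexion angle of 12 degrees against an extension angle of 4 degrees gives an ROM of 8 degrees.
print(intervertebral_height(11.2, 10.8, 10.0))  # 10.666...
print(range_of_motion(12.0, 4.0))               # 8.0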
One junior and one senior spine surgeon measured the imaging data; if the results of the two measurers differed substantially, a third author evaluated the measurements.
Adjacent Segment Degeneration
A determination of adjacent segment degeneration (ASD) was made by detecting one or more suggestive indications on X-ray and MRI imaging of the neighboring spinal level 18: (i) a reduction in IH exceeding 3 mm on anteroposterior X-ray images; (ii) forward or backward slippage of the vertebral body exceeding 3 mm on lateral radiographs; (iii) sagittal translation exceeding 3 mm or a change in intervertebral angle exceeding 10° on lateral flexion/extension X-rays; and (iv) progression of disc degeneration by grade 1 or more according to the Kellgren-Lawrence classification. While the MRI-based grading system for disc degeneration is a dependable tool, 19 our evaluation of postoperative disc degeneration relied primarily on X-rays because MRI scans were seldom obtained during long-term postoperative follow-up. Because ASD occurs most frequently at the upper disc, this study focused solely on analyzing degeneration in that region. Patients were defined as having adjacent segment disease (ASDis) if, during postoperative follow-up, they had symptoms such as low back pain, leg pain, or intermittent claudication associated with ASD.
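A minimal sketch of this radiographic screen expressed as a rule is given below; the field names and threshold encoding are illustrative assumptions, not the authors' actual data schema.

from dataclasses import dataclass

@dataclass
class AdjacentSegmentFilms:
    ih_loss_mm: float               # reduction in IH on the anteroposterior film
    listhesis_mm: float             # forward/backward vertebral slip on the lateral film
    sagittal_translation_mm: float  # translation on flexion/extension films
    angle_change_deg: float         # change in intervertebral angle, flexion vs. extension
    kl_progression: int             # Kellgren-Lawrence grade increase vs. baseline

def has_asd(f: AdjacentSegmentFilms) -> bool:
    """Radiographic ASD: any one criterion is sufficient."""
    return (f.ih_loss_mm > 3.0
            or f.listhesis_mm > 3.0
            or f.sagittal_translation_mm > 3.0
            or f.angle_change_deg > 10.0
            or f.kl_progression >= 1)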
Statistical Analysis
Normally distributed data were tested using Student's t-test; where the data did not follow a normal distribution, the Wilcoxon rank-sum test was used instead. Categorical data were expressed as percentages, and comparisons between groups were analyzed using the chi-square or Fisher exact tests. The improvement from preoperatively to the final follow-up was evaluated with a paired t-test. The inter-observer reliability of radiographic measurements was assessed using the intra-class correlation coefficient (ICC), with an ICC ≥ 0.75 considered good reliability. Statistical analyses were performed using SPSS 26.0 (SPSS Inc., Chicago, IL, USA), with p values < 0.05 considered statistically significant.
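Although the authors used SPSS, the same comparisons can be reproduced in a few lines of Python with scipy; the score arrays below are illustrative values, not the study's data, while the 2x2 ASD counts follow the 4/26 and 2/14 rates reported in the abstract.

import numpy as np
from scipy import stats

pre  = np.array([5.5, 6.0, 5.8, 6.2])   # illustrative preoperative VAS scores
post = np.array([2.6, 2.9, 2.7, 3.1])   # illustrative 3-month VAS scores

t_stat, p_paired = stats.ttest_rel(pre, post)   # paired t-test (normal data)
w_stat, p_wilcox = stats.wilcoxon(pre, post)    # Wilcoxon alternative (non-normal data)

# ASD vs. no ASD in the stenosis/herniation and spondylolisthesis groups
table = [[4, 22], [2, 12]]
chi2, p_chi, dof, _ = stats.chi2_contingency(table)
odds, p_fisher = stats.fisher_exact(table)      # preferred for small counts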
Patient Baseline Characteristics
Forty patients, with an average age of 44.65 (SD 13.13) years, were included and followed up for an average duration of 79.37 (range 47-105) months. Among them, 24 (60%) were male and 16 (40%) were female. All imaging parameters were measured by two authors, and the average ICC for all parameter measurements was 0.917. Preoperative radiographic evaluation confirmed degenerative spondylolisthesis in 14 patients (35%). The two groups exhibited no significant differences in age, BMI, gender, smoking habits, diabetes, hypertension, or follow-up period. Table 1 presents the demographic and baseline features of the patients.
Radiological Outcomes
The ROM of the stabilized segment exhibited a significant decrease, from 8.15° ± 2.77° prior to surgery to 5.00° ± 1.82° at the final follow-up (p < 0.001; Table 2). Conversely, the ROM of the adjacent segment increased slightly, from 7.68° ± 2.25° before surgery to 9.36° ± 1.98° at the final follow-up (p < 0.001). The IH of the stabilized segment increased significantly from 10.56 ± 1.99 mm before surgery to 11.39 ± 1.90 mm at the one-week postoperative follow-up (p < 0.001), whereas the IH of the adjacent segment decreased from 11.09 ± 1.82 mm preoperatively to 10.86 ± 1.79 mm at the one-week postoperative follow-up (p < 0.001).
Clinical Efficacy
The VAS scores for back and leg pain improved significantly from 5.73 ± 1.13 and 5.08 ± 1.16 preoperatively to 2.75 ± 0.63 and 2.17 ± 0.64 at the 3-month postoperative follow-up (p < 0.001; Table 3). The ODI scores were also significantly improved at the 3-month follow-up compared with baseline (50.23 ± 11.08 vs 30.03 ± 6.13, p < 0.001) (Table 3). These improvements in VAS and ODI scores were maintained through the final follow-up.
The VAS scores for back and leg pain at the final follow-up were 1.35 ± 0.48 and 1.02 ± 0.42, respectively, and the ODI scores improved further, from 30.03 ± 6.13 at 3 months postoperatively to 12.18 ± 3.92 at the final follow-up. Notable improvements in VAS scores for back and leg pain and in ODI scores were observed in both cohorts; however, no significant differences were found between the two groups in the VAS for back and leg pain or the ODI scores at any time point (p > 0.05; Figure 3).
Discussion
This retrospective study evaluated the effectiveness of the Isobar TTL system for dynamic stabilization in patients with lumbar degenerative disease. The results showed significant improvements in ODI and VAS scores for low back pain and leg pain, in patients with and without spondylolisthesis, compared with baseline scores at an average follow-up of 79.37 months. Meanwhile, the Isobar TTL system preserved ROM of the stabilized segments, which changed from 8.15° ± 2.77° preoperatively to 5.00° ± 1.82°. The IH of the operated segments was well maintained, with an increase in the IH of the adjacent segments, and the prevalence of ASD and ASDis was low. Therefore, dynamic stabilization with the Isobar TTL system produced favorable and durable long-term clinical results in patients with lumbar degenerative diseases.
Application of the Isobar TTL in Lumbar Degenerative Disease
The prevalence of lumbar degenerative disease as a cause of chronic back and leg pain among older individuals is substantial. 20,21 Its incidence increases with age and significantly impacts the QoL of seniors. Presently, posterior lumbar interbody fusion (PLIF) and transforaminal lumbar interbody fusion (TLIF) are widely acknowledged as the preferred surgical interventions for various degenerative lumbar pathologies. 1,22 To address fusion-related drawbacks and effectively relieve patients' clinical symptoms, various spinal non-fusion techniques with dynamic stabilization and internal fixation have been developed. The Isobar TTL system, initially documented by Perrin in 1993, comprises universal pedicle screws and two dynamic rods. 23 The shock-absorption element exhibits an elastic range of motion that closely resembles the natural movement of the spine; it effectively absorbs shocks by allowing 2 mm of three-dimensional range of motion within ±2°. The Isobar TTL dynamic fixation system is designed to provide stability during the treatment of degenerative disc disease while still preserving some degree of mobility in the treated segment. Isobar TTL has been reported to be particularly suitable for: (i) patients with lumbar disc herniation who are obese, are heavy manual workers or have large herniated discs; (ii) patients with lumbar spinal stenosis who have central spinal stenosis, foraminal stenosis or bilateral lateral recess stenosis; and (iii) patients with degenerative lumbar spondylolisthesis and instability. Most patients have reported satisfactory clinical outcomes. Li et al. 27 conducted a retrospective analysis of 37 patients with lumbar degenerative diseases who were treated with the Isobar TTL system, a semi-rigid pedicle-screw stabilization system; their 24-month findings indicated that the Isobar TTL system improved pain relief, QoL, functional capacity, and overall patient satisfaction. Gao et al. 24 showed that the Isobar TTL system significantly improved preoperative JOA and ODI scores, with results no different from those of patients undergoing posterior lumbar fusion.
Long-term Efficacy of Isobar TTL
Similarly, the study by Qian et al. 17 demonstrated that the Isobar TTL dynamic fixation system provided significant pain relief, improved QoL, and maintained ROM in patients with LDDs at an average follow-up of 18 months, with clinical outcomes considered satisfactory. However, the follow-up periods in these studies were short, and the long-term efficacy remained to be investigated. The present study showed significant improvements in ODI and VAS scores for low back pain and leg pain compared with baseline in patients with and without spondylolisthesis at an average follow-up of 79.37 months. This finding is in agreement with previous data. 17,24,27 Hence, we conclude that the Isobar TTL system might represent a therapeutic alternative for lumbar degenerative diseases.
The Isobar TTL is a dynamic fixation system designed for the lumbar spine, in which semi-rigid rods are anchored to pedicle screws to support the treated segment. This approach permits controlled movement of the lumbar spine, distinguishing it from conventional fusion techniques that restrict normal vertebral motion. Compared with traditional lumbar fusion surgery, the Isobar TTL system can maintain a specific ROM in the stabilized segment and preserve the natural lumbar curvature, thereby ensuring ample spinal stability and preventing degeneration of adjacent segments. However, considerable controversy exists regarding the prevention of degeneration at adjacent segments, which may be related to the numerous factors that influence degeneration. ASD is usually not clinically significant, and surgical decompression is an option for the few patients with symptoms. Several factors affect ASD, and previous studies have found that the main risk factors include age, BMI, and the degree of preoperative adjacent segment degeneration. Consistent with previous studies, there was no significant difference in the incidence of ASD between lumbar spondylolisthesis and lumbar disc herniation or spinal stenosis. Many studies, however, have demonstrated the efficacy of the Isobar TTL system in maintaining ROM stability and mitigating radiological ASD. Our study showed that the Isobar TTL system preserved ROM at the stabilized segments, which changed from 8.15° ± 2.77° preoperatively to 5.00° ± 1.82° at final follow-up. Likewise, consistent with the findings of Guan et al.13 at a mean follow-up of 52.23 months, the IH of the operated segment was well maintained with both Isobar EVO and Isobar TTL, while that of the adjacent segment increased. Therefore, we believe that the Isobar TTL system disperses a portion of the stresses placed on the adjacent segments, while a certain degree of ROM maintenance contributes to disc rehydration, which is more effective in reducing the incidence of ASD.28 According to Zhang et al.,29 a comparison between lumbar fusion patients and the Isobar TTL system revealed that the Isobar TTL system effectively prevented adjacent segment degeneration as well as screw breakage.
Similarly, Korovessis et al.30 and Hrabálek et al.31 reached the same conclusion. This study found that the occurrence of ASD following implementation of the Isobar TTL system was 15%, and that of ASDis was 5%. This prevalence of ASD and ASDis is significantly lower than that reported for lumbar fusion in the literature. In contrast, Li et al.27 and Fu et al.32 concluded that the Isobar TTL semi-rigid fixation system was ineffective in preventing adjacent segment degeneration. In a prospective study with a 24-month follow-up, Fu et al. found that patients achieved significant clinical improvement. However, there appeared to be ongoing disc degeneration at both the stabilized and adjacent segments: the average Pfirrmann score increased slightly from 2.86 before surgery to 2.92 after 24 months at the stabilized segment, and from 1.92 preoperatively to 1.96 after 24 months at the adjacent segment. However, the preoperative and postoperative Pfirrmann scores did not differ to a statistically significant degree, which calls into question the conclusion of Fu et al. regarding the ineffectiveness of the Isobar TTL semi-rigid fixation system in preventing adjacent segment degeneration. Although none of the patients in this series required revision surgery, further research with a larger sample size and longer follow-up periods is necessary to validate the protective effect of the Isobar TTL system on ASD.
Limitations and Strengths
This is the first study to evaluate the long-term outcomes of patients treated with the Isobar TTL dynamic stabilization system for lumbar degenerative diseases. The study demonstrated that the Isobar TTL system was effective in maintaining ROM in the stabilized segments, maintaining IH in the operated segments, and significantly reducing the prevalence of ASD and ASDis. The clinical results of dynamic stabilization with the Isobar TTL system in patients with lumbar degenerative diseases are favorable and effective in the long term. However, it is essential to acknowledge the limitations of our study. First, this retrospective study was conducted in a single center with a relatively small sample size; further large-scale prospective randomized controlled trials would be beneficial to validate our findings. Second, no control group was included in our study design; adding a control group with conventional lumbar interbody fusion and rigid internal fixation could provide stronger evidence to support the long-term efficacy of Isobar TTL. Third, although there is some consensus on the limited use of dynamic fixation techniques such as Isobar TTL in patients with late degeneration and severe instability of the lumbar spine, and our study excluded patients with severe degeneration and instability, relevant long-term follow-up studies are still needed. Finally, as a long-term follow-up study, more follow-up time points, such as 6 months, 1 year and 2 years postoperatively, might have provided more information. Despite these constraints, our data provide additional evidence supporting the long-term effectiveness of the Isobar TTL system in treating lumbar degenerative diseases.
Conclusions
The Isobar TTL dynamic fixation system for LDDs showed satisfactory long-term clinical and radiographic results; it might represent a therapeutic alternative for lumbar degenerative diseases and shows great potential for avoiding adjacent segment degeneration.
FIGURE 1 Longitudinal data describing IH of the stabilized segment (A), IH of the adjacent segment (B), ROM of the adjacent segment (C) and ROM of the stabilized segment (D), obtained preoperatively and during routine postoperative follow-up. Smooth lines represent mean values; bars represent SD in both directions. ROM, range of motion; IH, intervertebral space height; SD, standard deviation. *Comparison with preoperative, p < 0.001; **comparison with postoperative 1 week, p < 0.001.
FIGURE 3
FIGURE 3 Longitudinal data describing the patient-reported outcome measures VAS back (A), VAS leg (B), and ODI (C), obtained preoperatively and during routine postoperative follow-up. Smooth lines represent mean patient-reported outcome scores; bars represent SD in both directions. ODI, Oswestry disability index; VAS, visual analogue scale; SD, standard deviation. *Comparison with preoperative, p < 0.001; **comparison with postoperative 3 months, p < 0.001.
TABLE 1
Patient demographic data and clinical characteristics between spondylolisthesis and disc disease.
Abbreviations: BMI, body mass index.

There was a slight increase in the ROM of the adjacent segment, from 7.68° ± 2.25° preoperatively to 9.36° ± 1.98° postoperatively (p < 0.001; Table 2). There was no statistically significant difference in the postoperative ROM of the stabilized and adjacent segments between the two groups. The IH of the stabilized segments increased significantly, from 10.56 ± 1.99 mm preoperatively to 11.39 ± 1.90 mm at the 1-week postoperative follow-up (p < 0.001; Table 2); subsequently, a slight decrease in the average IH was observed at the final follow-up. Furthermore, in the spondylolisthesis group, the IH of the adjacent segments showed a significant reduction, from 11.09 ± 1.82 mm preoperatively to 10.86 ± 1.79 mm at the 1-week postoperative follow-up (p < 0.001; Table 2). The prevalence of ASDis was 5% (2/40), with all cases occurring in the group with stenosis or herniated disc. No statistically significant difference was observed in the occurrence of ASD (p > 0.05). Throughout the follow-up period, no severe complications such as loosening or breakage of the internal fixation, wound infection, or treatment-related revision surgery were reported, and none of the patients experienced moderate to severe intractable radiating pain. Figure 2 displays radiographs and MRI images from a representative patient.
TABLE 2
Summary of radiographic measurements between spondylolisthesis and disc disease.
Abbreviations: ASD, adjacent segment degeneration; IH, intervertebral space height; ROM, range of motion. a At final follow-up.
TABLE 3
VAS and ODI scores between the two groups.
A gateway between GitLab CI and DIRAC
GitLab’s Continuous Integration has proven to be an efficient tool to manage the lifecycle of experimental software. This has sparked interest in uses that exceed simple unit tests, and therefore require more resources, such as production data configuration and physics data analysis. The default GitLab CI runner software is not appropriate for such tasks, and we show that it is possible to use the GitLab API and modern container orchestration technologies to build a custom CI runner that integrates with DIRAC, the middleware used by the LHCb experiment to run its job on the Worldwide LHC Computing Grid. This system allows for excellent utilisation of computing resources while also providing additional flexibility for defining jobs and providing authentication.
Introduction
The LHCb experiment uses GitLab [1] to manage its physics software lifecycle. The first use of this continuous integration system (GitLab CI) was to run unit tests. It quickly became obvious that it could also be extended to validate the configuration for data production jobs, or even to check and run user data analysis scripts. While standard GitLab CI runners are appropriate for unit tests or small test jobs, data analysis production job validation is CPU intensive and exceeds the capacity of standard shared runners, with run-times varying from a few minutes to tens of hours. We therefore decided to develop a GitLab CI gateway that would fulfil all use cases and allow for more flexible use of resources (e.g. by running the jobs on the Worldwide LHC Computing Grid [2] instead of dedicated hosts).
Motivations
As the use of GitLab CI was extended to physics data analysis and to validating configurations for data production, the standard GitLab runner software forced LHCb to dedicate resources to this use case. We observed the following drawbacks:
• Managing similar CI configurations across multiple projects is error prone.
• Dedicated runners are idle most of the time, leading to the under-use of resources.
• Injecting credentials for merge requests (e.g. user's Grid proxy) in the GitLab jobs in a secure way is difficult.
GitLab CI uses a simple REST interface to allow runners to register, request jobs and return results. While this interface is not publicly documented, it can easily be reverse engineered. We therefore decided to develop a custom GitLab runner [3] to solve the issues encountered with the standard software. As the LHCb middleware DIRAC [4], one of the main external packages to be integrated, is developed in Python [5], it was natural to develop this new tool as a Python package. A scalable way to manage the GitLab CI jobs was required, both at the software level and at the infrastructure level, capable of managing tasks with durations varying from a minute up to many days. We decided to use the Celery [6] distributed task management system, a Python tool that relies on external software to manage the queues of requests. Celery supports many messaging backends, and we chose to deploy the RabbitMQ [7] message queue system. The Red Hat OpenShift [8] container platform was chosen in order to have a scalable infrastructure to run the processes required by Celery.
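A minimal sketch of this design is shown below; the module, task names and polling period are hypothetical illustrations of the architecture described above, not the gateway's actual code:

```python
# sketch: a Celery application wired to RabbitMQ, with a Beat schedule that
# periodically polls GitLab and worker tasks that process the fetched jobs
from celery import Celery

app = Celery("gitlab_gateway", broker="amqp://rabbitmq//")

# Celery Beat: trigger the polling task at a fixed (hypothetical) period
app.conf.beat_schedule = {
    "poll-gitlab-for-jobs": {
        "task": "gateway.poll_pending_jobs",
        "schedule": 30.0,  # seconds
    },
}

def request_jobs_from_gitlab():
    """Stub: fetch pending CI jobs over the GitLab REST interface."""
    return []

@app.task(name="gateway.poll_pending_jobs")
def poll_pending_jobs():
    """Ask GitLab for queued CI jobs and hand each one to a worker task."""
    for job in request_jobs_from_gitlab():
        run_ci_job.delay(job)

@app.task(name="gateway.run_ci_job")
def run_ci_job(job):
    """Long-running worker task: e.g. submit the job to DIRAC and report back."""
    ...
```

Splitting scheduling (Beat) from execution (workers) is what lets tasks lasting from minutes to days coexist without blocking the polling loop.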
A web frontend is also needed for users to register their GitLab project with the custom GitLab runner. For this purpose, a Flask [9] application was developed and deployed. Figure 1 shows the overall architecture, with the Celery processes split in two parts: the Celery Beat is the scheduling engine that triggers periodic tasks, and the Workers perform the tasks of interacting with DIRAC and GitLab.
Runner registration
The first step to run GitLab CI jobs within the Gateway is to register the project with the system. This is a crucial step, as the owner of a GitLab repository needs to be able to delegate his/her credentials to the Gateway in order for the service to request jobs. Figure 2 shows the registration process. The user-provided secret is registered in the gateway using the web frontend and is used to obtain a dedicated runner token that is stored in a dedicated database. The frontend then triggers the polling of GitLab for jobs related to this project by the Celery Beat. The use of RabbitMQ to manage the queues makes the system resilient to being quickly restarted and updated, without the need to drain long-running tasks out of the system.
GitLab CI job processing
GitLab presents a REST [10] application programming interface [11] that can be used (with the appropriate credentials) to query the jobs to be run for a specific project and to publish back the results. Updates have to be provided on a regular basis, otherwise GitLab considers the runner dead and restarts the job. This API is the basis for the interaction between the Gateway and the GitLab system.
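The sketch below illustrates this interaction, assuming the runner endpoints used by the official runner software (POST /api/v4/jobs/request to fetch a job, PATCH /api/v4/jobs/&lt;id&gt;/trace to append log output, and PUT /api/v4/jobs/&lt;id&gt; to report the final state); since the interface is not publicly documented, the payloads are best-effort assumptions, and the instance URL and tokens are placeholders:

```python
# sketch: polling GitLab for a CI job and reporting progress over the
# (undocumented) runner REST endpoints
import requests

GITLAB = "https://gitlab.example.org"  # placeholder instance URL
RUNNER_TOKEN = "xxxx"                  # runner token obtained at registration

def request_job():
    """Ask GitLab for a pending job; returns None when the queue is empty."""
    r = requests.post(f"{GITLAB}/api/v4/jobs/request", json={"token": RUNNER_TOKEN})
    return r.json() if r.status_code == 201 else None  # 204 means no job queued

def append_trace(job, offset, text):
    """Send a chunk of log output so GitLab does not declare the runner dead."""
    requests.patch(
        f"{GITLAB}/api/v4/jobs/{job['id']}/trace",
        data=text.encode(),
        headers={"JOB-TOKEN": job["token"],
                 "Content-Range": f"{offset}-{offset + len(text) - 1}"},
    )

def complete_job(job, success):
    """Publish the final job state back to GitLab."""
    requests.put(
        f"{GITLAB}/api/v4/jobs/{job['id']}",
        json={"token": job["token"], "state": "success" if success else "failed"},
    )
```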
The Gateway installs and runs Python packages. It is therefore possible to install them using standard Python tools (e.g. pip [12]), and the use of the Python entry points mechanism [13] allows the Gateway to be decoupled from the code to be run.
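As an illustration (the "gateway.runners" group name is hypothetical, and the group keyword requires Python 3.10+), a plugin package could advertise its job-handling callable through an entry point, and the Gateway could discover all installed plugins at runtime:

```python
# sketch: discovering gateway plugins through Python entry points
# (a plugin package would declare in its packaging metadata, e.g.:
#     [options.entry_points]
#     gateway.runners =
#         analysis_productions = lb_analysis_productions.ci:pull_job
#  the "gateway.runners" group name is an assumption for this sketch)
from importlib.metadata import entry_points

def load_runners():
    """Return the job-handling callables of every installed plugin package."""
    return {ep.name: ep.load() for ep in entry_points(group="gateway.runners")}
```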
The use of OpenShift (and of the underlying Kubernetes [14] container orchestrator) allows the system to scale with the number of registered repositories: it is possible to increase the size of the group of processes running each task (so-called "pods" in Kubernetes) to adapt to the needs of the application.
Integration with DIRAC
The GitLab CI gateway is generic and can be integrated with many systems. The integration with the LHCb instance of DIRAC is, however, a crucial use case for the experiment, as it is the way to run jobs on the WLCG.
Running GitLab CI jobs on the Grid is possible because the LHCb software is deployed on the CernVM file system [15] (CVMFS), which is easy to access from GitLab jobs as well as from all WLCG nodes. One issue requires caution: user authentication and authorisation.
Trust is required between the GitLab system, the team running the GitLab gateway and the Grid team. Indeed, the Gateway has to trust the identity of the user submitting the job as fetched from GitLab itself. Special attention has to be paid to the security of the system, as running jobs from any GitLab merge request (if this is publicly possible) would potentially allow any user to run code from their branch. Two options are possible on the LHCbDirac side: either running the jobs as a specific user, or running the jobs on behalf of the user triggering the CI job. The latter implies a mapping between the GitLab accounts and the Grid accounts, as well as the right to impersonate Grid users to start jobs on their behalf. This has not been implemented yet, as it implies discussions with the involved parties in order to limit and audit the code.
GitLab CI gateway prototype
The current prototype allows test productions to be run on the Grid using LHCbDirac. It allows a dynamic number of jobs to be launched, one per dataset processed, which is not possible with the standard GitLab runner. This has been used to dynamically spawn many hundreds of tasks from a single CI job. The status summary is reported to GitLab CI, and additional logs and output for each production are accessible via the web frontend, as shown in figures 4, 5 and 6.
Automating ntuple production using LHCbDirac
In many cases, LHCb data analyses start with the extraction of the relevant quantities from the LHCb dataset. This stage is performed manually by the analysts, who have to monitor its progress and make sure it concludes successfully. Automation of this task is possible but requires strong quality checks to avoid wasting Grid resources. This is where GitLab CI can play a role, and a prototype [16] was developed that functions in the following manner (a sketch of the signed-URL step is given after this list):
• LbAnalysisProductions.ci.pull_job(runner) pulls jobs from a specific GitLab repository (lhcb-datapkg/AnalysisProductions), in which each folder corresponds to an analysis (e.g. Charm/d2hll_Run2). It then:
  - finds the folders which have changed; these are "productions";
  - iterates over those folders to check which testing "steps" they define;
  - generates signed URLs so jobs can directly upload their output to S3.
• LbAnalysisProductions.ci.check_status is used to check the status in LHCbDirac and update the log in GitLab CI every 30 seconds.
A separate website is available to display detailed information about previous tests. It provides a read-only view of the database and gives access to signed URLs to retrieve files from storage (using the Amazon S3 interface).
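Generating such signed upload URLs is typically done with the S3 pre-signing facility; a minimal sketch using boto3 follows, where the endpoint, bucket and key names are placeholders and the real prototype may generate the URLs differently:

```python
# sketch: creating a time-limited URL that lets a Grid job upload one file
# directly to S3-compatible storage without holding any credentials itself
import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.example.org")  # placeholder

def signed_upload_url(bucket, key, expires=3600):
    """Return a pre-signed PUT URL valid for `expires` seconds."""
    return s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=expires,
    )

url = signed_upload_url("analysis-productions", "Charm/d2hll_Run2/output.root")
# the job can then simply `requests.put(url, data=...)` its output file
```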
Combining the ease of use of GitLab and its capability to manage secure workflows for updating the analysis code with the GitLab CI to LHCbDirac gateway provides LHCb with a very powerful tool to manage ntuple extraction from the LHCb dataset in an organized and efficient manner.
Deployment to CVMFS
The deployment of newly built/released software to CVMFS is also a good candidate for the use of the GitLab Gateway: it is easy to write an LbCVMFSDeployment.ci.pull_job(runner) that has the credentials to pull jobs from repositories like LHCbDirac, AnalysisProductions, LbEnv (or a deployment repository) and trigger the installation on CVMFS. It should also be able to check for feedback and report to GitLab. As LHCb refactors its CVMFS deployment installation, the plan is to develop such a runner.
Analysis preservation workflows
Analysis preservation workflows can also profit from using the GitLab CI Gateway: such use cases rely on having the credentials to access LHCb Grid data, and on being able to access significant CPU resources to process their data.
Several physics groups within the LHCb experiment already use GitLab CI for some analyses, with runners dedicated to their projects. The LHCb tutorials recommend automation with workflow management systems such as Snakemake [17,18], which allow re-running only the part of the analysis for which the input data or code has changed. Such workflows, however, rely on a cache of intermediate artefacts that can be reused between executions, to avoid re-computing everything from scratch.
Such caching is not available at this stage but could be added to the system in a generic fashion, by saving the local files after each job (to scalable storage such as the CERN EOS or Ceph systems) and recovering them before the next one. We are investigating ways to integrate this into the system.
Conclusion
The GitLab CI Gateway demonstrated that it is possible to replace the standard GitLab runners with a custom one that integrates more smoothly with experiment resources. The current system is appropriate for some use cases (e.g. the Production Configuration validation), while more features are needed to handle others, such as Analysis preservation. The system has nonetheless proved its worth, and its development will continue as the basis for LHCb GitLab runners.
Predictors of Sub-Optimal CD4 Recovery during the First Six Months of Anti-Retroviral Treatment (ART) in HIV-Infected Children: A Retrospective Cross-Sectional Study from Tikur Anbessa Tertiary Hospital, Addis Ababa, Ethiopia
Background: Highly active anti-retroviral therapy (HAART) has brought significant change in reducing morbidity and mortality among children living with HIV/AIDS. Decisions concerning initiation and/or shifting of antiretroviral therapy (ART) are guided by monitoring the laboratory parameters of plasma HIV RNA (viral load) and CD4+ T cell count, in addition to the patient's overall clinical response. The demonstration of the prognostic value of the CD4 cell count was of major importance in the development of therapeutic strategies. Therefore, the objective of this study was to assess factors predicting suboptimal CD4 cell recovery during the first six months of ART. Methods: This is a retrospective cross-sectional study assessing factors predicting suboptimal CD4 cell recovery. Patients' medical records were retrieved, and the relevant variables were captured with a standard questionnaire tool. A t-test was used to assess changes in CD4 cell count after initiation of ART, and binary logistic and multiple regressions were used to assess factors predicting CD4 cell recovery. Results: Data from 360 children were analyzed. CD4 cell count at the start of HAART ranged from 3 to 2003 cells/mL, with an interquartile range of 231-317 cells/mL. After 6 months of HAART, the CD4 cell count had increased, ranging from 71 to 2300 cells/mL with an interquartile range of 458-612 cells/mL and a mean CD4 cell count difference of 230, 95% CI (199.414-260.613); p < 0.001. Advanced clinical stage of the disease, severe degree of immunosuppression, presence of anemia, presence of chronic diarrhea at baseline, and poor weight gain during the first six months of HAART adversely affected the trend of CD4 recovery. Conclusion: Our study demonstrated that advanced clinical stage of the disease, severe degree of immunosuppression, presence of anemia at baseline, presence of chronic diarrhea, and poor weight gain during the first six months of HAART were factors adversely affecting the trend of CD4 recovery.
Introduction
More than 2 million children are living with HIV/AIDS worldwide, and more than 90% of them live in sub-Saharan Africa [1]. More than 90% of children acquire the infection through mother-to-child HIV transmission (MTCT). Despite this, only 10% of HIV-infected pregnant women are offered any form of prevention of mother-to-child HIV transmission (PMTCT) in sub-Saharan countries [1,2]. Ethiopia has an estimated population of close to 100 million people, of whom 44% are under 15 years of age [2]. Currently 367,000 patients, including 23,400 children under the age of 15, are taking ART. Based on the 2014 estimate, the 2014 ART need is 542,121 for adults and 178,500 for children under 15 years of age. On the basis of the 2010-2014 strategic plan, ART coverage for adults has reached 76%, but coverage remains low (23.5%) for children. If HAART is not started promptly, a third of children infected perinatally will not survive to their first birthday, and more than half will die by their second birthday [3-5]. Highly active anti-retroviral therapy (HAART) has brought significant change in reducing morbidity and mortality among children living with HIV/AIDS. Decisions concerning initiation and/or shifting of antiretroviral therapy (ART) are guided by monitoring the laboratory parameters of plasma HIV RNA (viral load) and CD4+ T cell count, in addition to the patient's overall clinical status. Monitoring the clinical and diagnostic progression of patients on anti-retroviral treatment (ART) is important to examine responses to the treatment and for clinical decision-making. The demonstration of the prognostic value of the CD4 cell count was of major importance in the development of therapeutic strategies. Children's immune response to ART differs based on age at ART initiation and viral load levels. As shown in recent adult studies, there are likely other baseline factors contributing to differential immunological responses. Baseline clinical and demographic factors have been used to predict mortality in HIV-infected children on ART [2]. However, few baseline clinical characteristics other than age, CD4% and viral response have been examined as potential predictors of weight and immune response in HIV-infected children. Children in the lower percentiles of weight, CD4 cell count, and CD4 cell per cent gain at 6 months of HAART were likely to have an unsatisfactory immune response [6-13]. Studies are, however, scarce in assessing potential clinical parameters such as the presence of baseline underweight, chronic diarrhea, tuberculosis, opportunistic infections or anemia as predictors of early immunologic response. Clinicians could possibly use these factors as an early alarm to identify children at higher risk of poor immune response to ART and target them for more intense monitoring and closer follow-up. Patients with insufficient CD4 cell recovery at six months of ART should undergo extensive clinical and laboratory assessment, as it could indicate poor adherence, which is believed to be the key element in the acquisition of antiretroviral drug resistance, finally leading to treatment failure [10,11]. In resource-limited settings, in which the number of available drugs is limited, maximizing the duration of existing lines of treatment and identifying and addressing the reasons for poor response are worth emphasizing [11,12]. In this study, we mainly focused on factors predicting poor CD4 cell recovery during the first six months of ART.
Objective
To assess factors predicting suboptimal CD4 cell recovery during the first six months of ART in HIV-infected children.
Methods
A retrospective cross-sectional study was conducted by reviewing the medical records of children living with HIV who were on ART at Tikur Anbessa Specialized Tertiary Hospital, a specialized teaching and national tertiary hospital. The Department of Pediatrics and Child Health is one of its major departments, delivering patient care, teaching and research activities through different units. The pediatric infectious diseases clinic is one of the functioning units of the department and a highly overburdened subspecialty clinic, mainly delivering pediatric HIV/AIDS care and treatment.
Sample
There were a total of 1145 children living with HIV registered in the pediatric HAART clinic at the pediatric infectious diseases clinic at the time of data collection. About 503 patients had already been on HAART, and around 405 patients had been taking HAART for six months or longer. Of the factors affecting CD4 cell recovery, baseline underweight is one of the most important clinical variables, occurring in about 50% of cases as shown in most studies [5,6]. Therefore, using the single proportion formula, the sample size was found to be 384. Only 360 medical records with complete clinical and laboratory parameters were accessed for data extraction.
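For reference, the single proportion formula with a 95% confidence level ($Z = 1.96$), the cited proportion $p = 0.5$ (the roughly 50% prevalence of baseline underweight), and a margin of error $d = 0.05$ (the margin is our assumption, as it is not stated explicitly) reproduces the quoted sample size:

$$ n = \frac{Z^{2}\,p(1-p)}{d^{2}} = \frac{1.96^{2}\times 0.5\times 0.5}{0.05^{2}} \approx 384 $$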
Definition of suboptimal CD4 cell recovery
Different literature and guidelines define suboptimal CD4 recovery using different values. However, most guidelines, including the national guideline, define suboptimal CD4 recovery as a CD4 increment of less than 20% of baseline or less than 50 cells/mL [2,3].
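Reading the definition as two alternative thresholds (our interpretation of the wording, under which either condition suffices), recovery at six months is suboptimal when:

$$ \Delta\mathrm{CD4}_{6\,\mathrm{mo}} < 0.2 \times \mathrm{CD4}_{\mathrm{baseline}} \quad\text{or}\quad \Delta\mathrm{CD4}_{6\,\mathrm{mo}} < 50\ \text{cells/mL} $$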
Data Extraction and Statistical Analysis
Data were extracted from the patients' medical records onto a data retrieval form, entered into EPI Info software for clean-up and pre- and post-HAART anthropometric interpretation, and then transferred to SPSS version 17 for further analysis of the relevant variables and significance testing. A paired t-test was used to compare pre- and post-HAART CD4 cell count differences. Using logistic regression, we first determined univariate associations between demographic and clinical variables selected on the basis of clinical observations and prior studies. The independent variables included: baseline CD4%/count, baseline hemoglobin, weight at baseline and at three and six months, WHO clinical staging, presence of chronic diarrhea, and ART regimen. We disaggregated age into two categories (under 5 and 5-14 years old) to control for the age-related difference in CD4 count/proportion. Multivariate logistic regression was used to assess factors possibly contributing to poor CD4 recovery. A p value of less than 0.05 was considered statistically significant.
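As an illustration of this analysis step (a sketch only; the column names are placeholders and the authors used SPSS rather than Python), the multivariate logistic regression could be reproduced as follows:

```python
# sketch: multivariate logistic regression for suboptimal CD4 recovery
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("cd4_records.csv")  # hypothetical extracted dataset

# binary outcome: 1 = suboptimal recovery, 0 = adequate recovery
y = df["suboptimal_recovery"]
X = df[["baseline_cd4", "baseline_hgb", "weight_baseline",
        "weight_gain_6mo", "who_stage", "chronic_diarrhea", "age_under5"]]
X = sm.add_constant(X)  # add the intercept term

model = sm.Logit(y, X).fit()
print(model.summary())        # coefficients and p-values
print(np.exp(model.params))   # odds ratios
```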
Results
Medical records of the 360 patients eligible for the study were retrieved from the medical record office and the data extracted. Males constituted 51% (183/360) and females 49% (177/360). The minimum age at the start of ART was 4 months and the maximum 168 months (interquartile age range 87-120 months).
Those who started ART at the age of less than 12 months constituted the smallest group, 4.1% (15/360), and those who started treatment between 60 and 120 months were by far the largest, 54.6% (197/360). Baseline anthropometric interpretation showed that 42.6% of patients had wasting, with moderate and severe wasting in 20.4% and 1.5%, respectively. Anemia was detected in 16.3% (61/360) of patients at the start of HAART, with the majority of cases being of mild degree, accounting for 81% (49/61) of the cases of anemia (Table 1).
About 206 (57.1%) patients were started on AZT-based nucleoside reverse transcriptase inhibitors (NRTIs), while 132 (36.7%) patients were taking a d4T-based regimen; there was no difference in the degree of CD4 recovery between patients taking the two regimens (p = 0.134). Regarding the non-nucleoside reverse transcriptase inhibitor (NNRTI) backbone, about 72.4% (261/360) were taking an NVP-containing regimen while 21.4% (77/360) were on an EFV-containing regimen; again, there was no difference in the degree of CD4 recovery between patients taking the two regimens (p = 0.34). There were 57/360 (15.8%) cases with documented chronic illness, including new development of tuberculosis and chronic diarrhea, which accounted for the majority of cases; the others were seizure disorder and chronic otitis media. About 10.7% (39/360) of patients were treated for tuberculosis during the first six months of HAART. The majority of patients (53.1%) had moderate immune suppression (CD4 cell count of 200-500 or CD4 cell percentage 15-25%) at the start of HAART, while 39.3% of cases had severe immunosuppression (CD4 cell count < 200 or < 15%) and 7.6% had mild immune suppression; baseline WHO clinical stages II, III, and IV accounted for 31.6%, 51% and 18.8%, respectively (Table 2 and Figure 1). Baseline CD4 cell count ranged from 3 to 2003 cells/mL, with an interquartile range of 231-317 cells/mL and an average CD4 cell count of 261 cells/mL. After six months of HAART, about 85% (306/360) of patients had a CD4 cell increment of greater than 20% or 50 cells/mL of the baseline value. When the mean CD4 cell counts pre- and post-HAART were compared, the mean CD4 cell count difference was 230, 95% CI (199.414-260.613), with counts ranging from 71 to 2300 cells/mL and an interquartile range of 458-612 cells/mL at six months after initiation of HAART (p < 0.001). The mean weight gains at three and six months of HAART were 1.00 and 1.50 kg, with mean weight differences of 0.772 and 1.80 kg, 95% CI (0.588-0.957) and (1.60-2.0), p < 0.001, respectively. Advanced WHO clinical staging and severe immunosuppression at the start of HAART were adversely related to CD4 recovery with statistically significant values. Similarly, low baseline weight, presence of anemia, presence of chronic diarrhea and poor weight gain at three and six months of ART were found to have a significantly detrimental impact on CD4 cell recovery (Table 3).
Discussion
In our study, the mean CD4 cell count increment was 230 cells/mL, with a count range of 71-2300 cells/mL and an interquartile range of 458-612 cells/mL at six months of ART. The probability of having a CD4 cell per cent or count of >25% or >500 at six months of ART was 40%. This finding is higher than the results of previous studies, a possible explanation being that most previous studies were carried out on adult patients, in whom CD4 lymphocytes physiologically decline with increasing age [1-4]. For instance, the study conducted in South Africa, which involved about 4000 adult patients, showed that the probability of having a CD4 count of >200 cells/mL at twelve months of ART is about 51%. The association between older age groups and poor immune recovery demonstrated in our study is in congruence with other previous pediatric and adult reports [5,11]. This might be explained by the thymus and other lymphoid tissues being too depleted to repopulate T lymphocytes after initiation of ART in older age groups. On the other hand, in our study there was no relationship between the gender of the patient and the risk of adverse immune restoration, although other studies have reported discordant results: some reported that male gender is associated with poor immune recovery, while others showed no gender difference [14-16]. The presence of chronic diarrhea during ART treatment was associated with poor CD4 cell recovery independent of the nutritional status of the patient, which is in agreement with most previous studies. The reason has not been clearly identified so far but could be the associated malabsorption and poor adherence during the illness. Our study also showed that the presence of anemia at baseline (Hgb < 10 g/dL) was related to poor immune system regeneration, which is concordant with similar studies from other sites [13,15,17,18]. For example, a study from South Africa showed that children living with HIV who started ART with lower baseline hemoglobin had a significant risk of adverse CD4 cell recovery (OR = 0.87 for each 1 g/dL decrease in hemoglobin; 95% CI 0.75-0.99) [6,7]. This association is scientifically plausible, as there is a clear interaction between elemental iron and the immune system. Our finding that a lower baseline CD4 cell value predicts poor six-month CD4 cell recovery is supported by other cohort studies. A large cohort involving 861 adult patients living with HIV in Spain showed that patients with baseline CD4 counts below 200 and of 201 to 350 cells/mm3 had a significantly lower chance of achieving a CD4 count of 500 cells/mm3 compared with patients with a baseline CD4 of 350 cells/mm3 and above [14,19].
This study also revealed that weight gain of less than 10% at three and six months of HAART independently predicted poor CD4 recovery, which is also compatible with other research findings. Poor weight gain could indicate poor adherence, an advancing disease stage or an underlying opportunistic infection, implying that the patient should undergo extensive clinical and laboratory evaluation. A similar study in South Africa elaborated that lower percentiles of weight gain after 6 months of ART were associated with poor subsequent treatment outcomes and a higher risk of mortality independent of other baseline characteristics [17,20,21]. Likewise, a low baseline weight was an independent predictor of an adverse outcome in our study, which is in agreement with other observational studies in which patients with weight less than the 3rd centile had a two-fold increased risk of dying [22]. Our study did not show an association between ART regimen and risk of poor CD4 restoration, which could be explained by the fact that almost all (94%) of the patients were taking two NRTIs and one NNRTI, leaving little contrast between regimens. However, previous studies have reported discordant results: some reported that AZT- or d4T-based regimens were associated with adverse immune reconstitution, while others showed that NRTI regimens were associated with poorer recovery of CD4 count [15].
Conclusion
Our study demonstrated that advanced clinical stage of the disease, severe degree of immunosuppression, presence of anemia at baseline, presence of chronic diarrhea, and poor weight gain during the first six months of HAART were factors adversely affecting the trend of early CD4 recovery.
Table 1:
Demographic and baseline values of selected variables at initiation of HAART, Tikur Anbessa Tertiary Hospital, Addis Ababa, Ethiopia, 2014.
Table 2:
Patterns of weight gain and CD4 cell recovery at three and six months of HAART, Tikur Anbessa Tertiary Hospital, Addis Ababa, Ethiopia, 2014.

Figure 1: Distribution of CD4 cell category at baseline and after six months of HAART; Tikur Anbessa Tertiary Hospital, Addis Ababa, Ethiopia, 2014.
Mechanical and electrical properties of borophene and its band structure modulation via strain and electric fields: a first-principles study
The basic electronic and mechanical properties of 2-Pmmn borophene and their strain and electric field-dependence are studied by the first-principles calculations. The Young’s moduli are 236 and 89 GPa in the armchair and zigzag directions, respectively, indicating that the borophene has giant mechanical anisotropy. We also find that the borophene presents anisotropic electronic properties. The borophene is electroconductive in armchair direction but has a bandgap in the zigzag direction. To modulate the band structure, we applied strain and electric fields on borophene, and find that, the resistance of borophene decreases with the increase of applied strain, while the applied electric field has almost no effect on its band structure. The enhanced conductivity of borophene upon applied strain is ascribed to the expansion of the buckled structure through the analysis of the charge density of the strained borophene.
Introduction
As a new type of two-dimensional (2D) material, borophene is reported to have a series of marvelous electronic, mechanical, optical and thermodynamic properties and can be a powerful competitor to graphene [1-7]. In 2-Pmmn borophene, boron atoms may bond to each other in neither a purely covalent nor a purely metallic manner [8]. Due to its unique properties, 2-Pmmn borophene has received widespread attention over the last five years [9]. The atomic configuration of the boron monolayer has been researched and discussed for a long time [10-14]. The earliest study on this issue can be traced back to Ihsan Boustani's work in 1997 [15], in which a series of quasi-planar boron monolayers were proposed and analyzed by theoretical calculations. This method has been developed to evaluate the stability of materials [16]. Besides the binding energy per atom, the phonon dispersion spectrum [17] is another method that we adopt in this work. After the remarkable synthesis of borophene in 2015 [18], the potential of the 2D boron sheet drew researchers' attention again. The first question is whether phases of borophene other than 2-Pmmn could be synthesized, and how their stability should be evaluated. Many more boron allotropes have been proposed [19-23], and vacancies, such as in the honeycomb phase, were found to be an important factor affecting the stability of borophene structures [8,24,25]. A borophene structure at a certain vacancy concentration can reach its best stability and flatten the quasi-planar buckled triangular sheet [26]. Because of its giant anisotropy in structure and properties, which enables unique applications, we mainly focus on 2-Pmmn borophene in this work. In further studies, Zhong et al [27] systematically investigated the mechanical and electronic properties of few-layer 2-Pmmn borophene. For monolayer 2-Pmmn borophene, the Young's modulus in the armchair direction can be as high as that of graphene, while it is relatively lower in the zigzag direction, and a negative Poisson's ratio could be another unusual feature [18]. The high critical strain shows that borophene has good extensibility. Both the mechanical and electronic properties present a giant anisotropy. The
investigation of the band structure of borophene indicates that bands crossing the Fermi level exist only in the armchair direction, so borophene may show metallic behavior on the whole. These results are supported by other studies [25-27]. However, for practical applications the characteristics can usually be optimized and promoted by methods such as strain engineering [28-31] or an applied electric field [6]. Although the electronic properties of borophene have been worked out in such detail, a systematic study of their modulation is still lacking.
In this work, first-principles calculations are used to investigate the structural stability of 2-Pmmn borophene via phonon dispersions. Then, we use both the stiffness constants and the stress-strain response to study its mechanical properties. The electronic properties are described by the band structure and density of states (DOS). Finally, we modulate the band structure of 2-Pmmn borophene by mechanical strain and electric fields, look into the details of the charge density, and analyze the behavior of strain-controlled borophene.
Model and method
We performed the calculations using the Vienna Ab initio Simulation Package (VASP), which is based on the density functional theory (DFT) method. The generalized gradient approximation (GGA) with the Perdew-Burke-Ernzerhof (PBE) functional [32-34] was adopted to describe the exchange-correlation functional. GGA-PBE has been found to be more suitable for structures with density inhomogeneity and is widely applied to low-dimensional systems [35-37]. The projector augmented wave method was employed with a cutoff energy of 500 eV. The conjugate gradient method was used for geometry optimizations. A vacuum layer with a thickness of 10 Å was built to avoid image-image interaction in the Z direction. For the evaluation of mechanical properties, the Brillouin zone was sampled with a 30×15×1 k-point mesh using the Monkhorst-Pack method. A uniaxial tension condition was applied to evaluate the mechanical properties of the borophene. The lattice size along the loading direction was increased in multiple steps with small engineering strain increments of 0.004. For the calculation of the band structure of borophene, a k-point path along Γ-Y-M-X-Γ in the Brillouin zone was set, sampled with 50 points along each segment. The strain settings were the same as those used for the mechanical properties. An electric field in the Z direction, ranging from 0 to 0.5 V/Å in steps of 0.1 V/Å, was applied to investigate the relationship between the applied electric field and the electronic properties of borophene.
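For orientation, a setup of this kind could be scripted, for example, through the ASE interface to VASP; the sketch below mirrors the stated parameters (PBE, 500 eV cutoff, 30×15×1 k-mesh), but the structure file name is a placeholder and the original authors did not necessarily use ASE:

```python
# sketch: a borophene relaxation set up through ASE's VASP calculator
from ase.io import read
from ase.calculators.vasp import Vasp

atoms = read("borophene_2Pmmn.cif")  # hypothetical structure file

atoms.calc = Vasp(
    xc="pbe",          # GGA-PBE exchange-correlation functional
    encut=500,         # plane-wave cutoff in eV
    kpts=(30, 15, 1),  # Monkhorst-Pack k-point mesh for the 2D sheet
    ibrion=2,          # conjugate-gradient ionic relaxation
)
energy = atoms.get_potential_energy()  # triggers the VASP run
```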
As shown in figure 1, 2-Pmmn borophene has an anisotropic structure in the two orthogonal directions. The black arrows, marked as X and Y, indicate the relative Cartesian coordinates. Along the Y direction, a buckled line can be observed, and for this reason the Y direction is usually called the zigzag direction, while the non-corrugated X direction is called the armchair direction. The primitive cell of borophene is marked by the red arrows, and the lattice constants a and b are calculated to be around 1.6136 and 2.8654 Å, respectively, which agrees well with the experimental result, with a trivial discrepancy of about 1% [18]. As shown in figure 1(b), borophene presents a quasi-planar structure with corrugations, which differs from graphene. The buckling height is 0.911 Å. The benchmark calculations of borophene are consistent with the references [27]. For the tensile simulation, two types of in-plane strain are considered, i.e., biaxial and uniaxial loading. For uniaxial tension, a small incremental strain is applied along the stretch direction, while the internal atomic positions along the other two directions are fully relaxed. The basic electronic and mechanical properties of 2-Pmmn borophene under applied strain and electric fields will be demonstrated in the next section.
Results and discussion
3.1. Structural stability of 2D borophene

The structural stability of borophene is the first issue to be discussed before further investigations. Herein, we performed phonon dispersion calculations to assess the structural stability of 2-Pmmn borophene. As shown in figure 2, the phonon spectrum and the projected phonon density of states (PDOS) for the in-plane components B(X) and B(Y) as well as the out-of-plane component B(Z) were calculated by applying the PHONOPY code [38]. In the phonon spectrum of borophene, two small imaginary frequencies, one near the Γ point and another along Γ-X, can be found, which indicates that 2-Pmmn borophene may not be dynamically stable against out-of-plane vibrations. This result differs from the previous study of a monolayer boron sheet by Peng et al [2], but is similar to the investigation of bilayer borophene by Zhong et al [27]. Since 2-Pmmn borophene has been synthesized, a reasonable explanation for this paradox is that substrates can sustain the borophene structure under certain conditions, such as low temperature and a small enough imaginary frequency. As for the projected PDOS, B(Z) mainly contributes to the three low acoustic phonon branches, indicating that the whole borophene sheet has a tendency to translate along the Z direction because there is no constraint in this direction. B(X) mainly contributes to the high optical phonon branches, suggesting that the interaction between boron atoms in the X direction is stronger than in the other two directions. Finally, the vibrations in the Y direction are stronger than those in the Z direction but weaker than those in the X direction, which can be ascribed to the buckled structure of borophene. From the discussion of the phonon spectrum and projected PDOS, we conclude that the 2-Pmmn phase may not be the most energetically favorable structure among the allotropes of borophene. However, on the one hand, free-standing 2-Pmmn borophene has not been studied thoroughly in experiments; on the other hand, applications of borophene on substrates in microelectronics have an equally promising future. Moreover, the phonon dispersions present anisotropic behavior, which, in terms of lattice dynamics, provides the basis for the anisotropic mechanical and electronic properties.
Mechanical property of borophene
After discussing the structural stability, the mechanical properties of borophene are further studied. Mechanical properties describe how a material behaves when loaded by an external force, and this behavior can be characterized by the elastic stiffness constants. For orthogonal symmetry under plane stress, the four nonzero elastic stiffness constants enter the stress-strain relation as follows [39]:

$$
\begin{pmatrix} \sigma_{xx} \\ \sigma_{yy} \\ \sigma_{xy} \end{pmatrix}
=
\begin{pmatrix} C_{11} & C_{12} & 0 \\ C_{12} & C_{22} & 0 \\ 0 & 0 & C_{44} \end{pmatrix}
\begin{pmatrix} \varepsilon_{xx} \\ \varepsilon_{yy} \\ \varepsilon_{xy} \end{pmatrix}
$$

where $\sigma_{ii}$ and $\varepsilon_{ii}$ are the stress and strain components, respectively. In our calculations, the elastic stiffness constants $C_{11}$, $C_{12}$, $C_{22}$ and $C_{44}$ are 247.1, 4.08, 106.5 and 525.5 GPa, respectively. The main engineering constants, Young's modulus $E$ and Poisson's ratio $\nu$, can be derived as [40]:
$$
E_x = \frac{C_{11}C_{22}-C_{12}^{2}}{C_{22}}, \qquad
E_y = \frac{C_{11}C_{22}-C_{12}^{2}}{C_{11}}, \qquad
\nu_{xy} = \frac{C_{12}}{C_{22}}, \qquad
\nu_{yx} = \frac{C_{12}}{C_{11}}
$$

where $E_x$ = 247 GPa, $E_y$ = 106.4 GPa, $\nu_{xy}$ = -0.038 and $\nu_{yx}$ = -0.016 in this work. Besides, according to the estimation formulas for the Young's modulus and Poisson's ratio, we calculated these constants for all angles. For the model of borophene in this work, which possesses orthogonal symmetry, the Young's modulus and Poisson's ratio for all angles can be estimated by the following formulas [41,42]:
$$
E(\theta) = \frac{C_{11}C_{22}-C_{12}^{2}}{C_{11}\sin^{4}\theta + A\,\sin^{2}\theta\cos^{2}\theta + C_{22}\cos^{4}\theta},
\qquad
A = \frac{C_{11}C_{22}-C_{12}^{2}}{C_{66}} - 2C_{12}
$$

where $\theta$ is the angle measured from the armchair (X) direction and $C_{66}$ denotes the shear stiffness constant (written as $C_{44}$ above). In figure 3, the calculated angle-dependent Young's modulus and Poisson's ratio vary from 106.4 GPa to 305 GPa and from -0.816 to -0.017, respectively, indicating the highly anisotropic mechanical properties of borophene. The negative Poisson's ratio is mainly owing to the quasi-planar structure: when the structure is stretched in one direction, the corrugations are unfolded, leading to expansion in all the in-plane directions.
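As a quick numerical check of these relations (a sketch using the stiffness constants reported above and identifying $C_{66}$ with the shear constant $C_{44}$, which is an assumption of this reconstruction):

```python
# sketch: evaluating the angle-dependent Young's modulus of borophene
import numpy as np

C11, C12, C22, C66 = 247.1, 4.08, 106.5, 525.5  # GPa, values from this work

theta = np.linspace(0.0, np.pi / 2, 91)
A = (C11 * C22 - C12**2) / C66 - 2 * C12
E = (C11 * C22 - C12**2) / (
    C11 * np.sin(theta) ** 4
    + A * np.sin(theta) ** 2 * np.cos(theta) ** 2
    + C22 * np.cos(theta) ** 4
)

print(E[0], E[-1])  # ~246.9 GPa (armchair, theta=0) and ~106.4 GPa (zigzag)
```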
Furthermore, we investigated the stress-strain response under uniaxial and biaxial strain, respectively. The calculated biaxial and uniaxial stress-strain responses of borophene along the armchair and zigzag loading directions are illustrated in figure 4. In figure 4(a), the biaxial stress-strain curves for both the armchair and zigzag directions include an elastic stage, a yield stage and a break stage. The curves gradually rise to the ultimate tensile strength, at which the material reaches its maximum load-bearing ability. The critical strain is found to be about 12%. By linear fitting from 0 to 0.02 in the stress-strain curve, the Young's modulus along the armchair direction is estimated to be about 211 GPa, roughly 2.5 times that along the zigzag direction (84 GPa), indicating the anisotropic mechanical properties of borophene. Figures 4(c)-(d) show the uniaxial stress-strain curves. Under uniaxial strain, the Young's moduli are 236 and 89 GPa along the armchair and zigzag directions, respectively, slightly larger than those under biaxial strain in both directions.
To investigate the dynamical stability of strained borophene, we also computed the phonon spectrum of borophene under both tensile and compressive strains, as shown in figure 5. For tensile and compressive strains smaller than 4%, no imaginary frequency is clearly identified, indicating the structural stability of borophene at relatively low strain. However, as the tensile and compressive strains increase beyond 8%, obvious imaginary phonon frequencies appear along Γ-X and Y-Γ. Therefore, borophene retains its structural stability at applied strains below 4% and becomes unstable when the strain exceeds 8%.
Electronic property of borophene
The electronic band structure, density of states (DOS) and projected DOS (PDOS) of borophene are shown in figure 6. In figure 6(a), two bands cross the Fermi energy level along the Y-M and X-Γ segments, and together with the DOS (figure 6(b)), in which no forbidden band exists, this indicates metallic behavior along the armchair direction. On the contrary, band gaps between the valence and conduction bands, marked as BG1 and BG2, exist along Γ-Y and M-X. This anisotropy in the electronic properties indicates that borophene may behave as a semiconductor along the zigzag direction. The electronic properties of borophene are unusual and attractive.
Strain and electric field modulation of borophene
Borophene shows conductivity along the armchair direction, but semi-conductivity along the zigzag direction due to the existence of the band gap. The electronic band structure of borophene with applied uniaxial strain is shown in figure 7. With increasing tensile strain, both BG1 and BG2 decrease slightly, whereas the overall properties remain unchanged.
For future electronic applications of borophene, band structure regulation via applied strain and electric fields was further studied to obtain a better knowledge of its electronic properties. The trends of BG1 and BG2 under biaxial strain, armchair uniaxial strain and zigzag uniaxial strain, ranging from -0.1 to 0.1, were studied in detail and are shown in figure 8. For biaxial strain (figure 8(a)), both BG1 and BG2 decrease as the biaxial strain increases. Under uniaxial strain, the bandgap shows an orientation dependency: in figure 8(b), as the armchair uniaxial strain increases, BG2 decreases while BG1 remains nearly unchanged; in figure 8(c), zigzag uniaxial strain lowers BG1 more markedly than BG2. On the other hand, in figure 8(d), when an electric field ranging from 0 to 0.5 V/Å is applied in the Z direction, BG1 changes only from 4.03 to 4.06 eV and BG2 from 9.03 to 9.05 eV. Since this range of change is very small compared with strain engineering, we conclude that the applied electric field has almost no effect on the borophene band structure.
As discussed above, applied strain can be regarded as an effective method to regulate the electronic band structure of borophene. Here, the charge density in two specified cross-sections, [010] and [-110], presenting the bonds along the armchair and zigzag directions, respectively, was analyzed to examine strain engineering under biaxial strain. In figures 9(a)-(c), the bonds are gradually elongated and the charge density in the center of the bond decreases, indicating weaker B-B bonds. In figures 9(d)-(g), the charge density in the B-B bond decreases along the armchair direction. Thus, as borophene is stretched, the bonds are weakened. This phenomenon can be attributed to the corrugations in the zigzag direction: when the corrugations are unfolded, the localized bonds can change into delocalized bonds. On the whole, the bandgap decreases with increasing strain, hence the enhancement of borophene conductance by strain engineering.
Conclusion
In this work, we investigated the mechanical and electronic properties of 2-Pmmn borophene using first-principles calculations. The dynamic stability was studied through calculations of the phonon spectrum and projected PDOS, which show that 2-Pmmn borophene is not dynamically stable. The mechanical constants were then discussed, showing that borophene possesses high strength in the armchair direction and highly anisotropic mechanical properties, with a Young's modulus in the armchair direction more than two times larger than that in the zigzag direction. The band structure and DOS show that borophene exhibits highly anisotropic metallic behavior, and the bandgap has an orientation dependency. Inspired by the changes in charge density, we bring together the structural, mechanical and electronic behavior in the language of valence bond theory. The enhanced conductivity of borophene upon applied strain is ascribed to the expansion of the buckled structure, as revealed by the analysis of the charge density of the strained borophene.
Future perspective and clinical applicability of the combined use of plasma phosphorylated tau 181 and neurofilament light chain in Subjective Cognitive Decline and Mild Cognitive Impairment
We aimed to assess the diagnostic accuracy of plasma p-tau181 and NfL, separately and in combination, in discriminating Subjective Cognitive Decline (SCD) and Mild Cognitive Impairment (MCI) patients carrying Alzheimer's Disease (AD) pathology from non-carriers, and to propose a flowchart for the interpretation of the results of plasma p-tau181 and NfL. We included 43 SCD, 41 MCI and 21 AD-demented (AD-d) patients, who underwent plasma p-tau181 and NfL analysis. Twenty-eight SCD, 41 MCI and 21 AD-d patients underwent CSF biomarker analysis (Aβ1-42, Aβ1-42/1-40, p-tau, t-tau) and were classified as carriers of AD pathology (AP+) if they were A+/T+, or non-carriers (AP−) if they were A−, A+/T−/N−, or A+/T−/N+ according to the A/T(N) system. Plasma p-tau181 and NfL separately showed good accuracy (AUC = 0.88), while the combined model (NfL + p-tau181) showed excellent accuracy (AUC = 0.92) in discriminating AP+ from AP− patients. Plasma p-tau181 and NfL results were moderately concordant (Cohen's κ = 0.50, p < 0.001). Based on a logistic regression model, we estimated the risk of AD pathology given the two biomarkers: 10.91% if both p-tau181 and NfL were negative; 41.10% and 76.49% if only one biomarker was positive (p-tau181 and NfL, respectively); 94.88% if both p-tau181 and NfL were positive. Considering the moderate concordance and the risk of underlying AD pathology according to the positivity of plasma p-tau181 and NfL, we propose a flow chart to guide the combined use of plasma p-tau181 and NfL and the interpretation of biomarker results to detect AD pathology.
Participants
Between July 2018 and September 2023, we consecutively enrolled 105 white Italian patients (43 SCD, 41 MCI and 21 AD demented) referred to the centre for Alzheimer's Disease and Adult Cognitive Disorders of Careggi Hospital in Florence.
Patients met the following inclusion criteria:
• Receiving a clinical diagnosis of AD dementia according to the NIA-AA criteria, including the atypical variant 15 .
• Receiving a clinical diagnosis of MCI according to NIA-AA criteria 16 .
• Receiving a clinical diagnosis of SCD according to SCD-I criteria 17 .
Exclusion criteria were: history of head injury, current neurological and/or systemic disease, symptoms of psychosis, major depression, substance use disorder.
At baseline, patients underwent comprehensive family and clinical history, neurological examination and extensive neuropsychological investigation (described in detail elsewhere 18 ), blood collection for measurement of plasma NfL and p-tau181 concentration and genetic analysis.
We defined age at baseline as the age at the time of plasma collection, disease duration as the time from symptom onset to the baseline examination, and a positive family history of dementia as having one or more first-degree relatives with documented cognitive decline.
Renal function was categorized as either impaired or not impaired based on the estimated glomerular filtration rate (eGFR; considered impaired if < 60 mL/min/1.73 m2). eGFR was recorded only in patients with impaired renal function.
Plasma p-tau181 and NfL were dichotomized using the cut-offs previously identified for discriminating AP+ from AP− patients in SCD and MCI: plasma p-tau181 was positive if ≥ 2.69 pg/mL and negative if < 2.69 pg/mL; plasma NfL was negative if < 19.45 pg/mL in SCD and < 20.49 pg/mL in MCI, and positive if ≥ 19.45 pg/mL in SCD and ≥ 20.49 pg/mL in MCI 11,12 .
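As a minimal sketch (our illustration, not the authors' code), this dichotomization can be expressed as follows; the function and variable names are hypothetical, while the cut-offs are those reported above:

```python
# Illustrative sketch: dichotomize plasma biomarkers with the
# diagnosis-specific cut-offs reported in the text.

PTAU_CUT = 2.69            # pg/mL, same cut-off for SCD and MCI
NFL_CUT = {"SCD": 19.45,   # pg/mL
           "MCI": 20.49}

def dichotomize(diagnosis: str, ptau181: float, nfl: float) -> dict:
    """Return positive/negative calls for plasma p-tau181 and NfL."""
    return {
        "ptau181_positive": ptau181 >= PTAU_CUT,
        "nfl_positive": nfl >= NFL_CUT[diagnosis],
    }

# Example: an SCD patient with p-tau181 = 3.1 pg/mL and NfL = 15.2 pg/mL
print(dichotomize("SCD", 3.1, 15.2))
# {'ptau181_positive': True, 'nfl_positive': False}
```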
Study procedures and data analysis were performed in accordance with the Declaration of Helsinki and with the ethical standards of the Committee on Human Experimentation of our Institute. The study was approved by the local Institutional Review Board (Comitato Etico Regione Toscana-Area Vasta Centro) (reference 15691oss). All individuals involved in this research agreed to participate and to have details and results of the research about them published.
Classification of patients according to ATN system
Based on biomarker results, patients were classified according to the NIA-AA Research Framework (amyloid/tau/neurodegeneration, A/T/N system) 1 . Patients were rated as A+ if at least one of the amyloid biomarkers (CSF or amyloid PET) revealed the presence of Aβ pathology, and as A− if none of the biomarkers revealed the presence of Aβ pathology. In the case of discordant CSF and amyloid PET results, we considered only the pathological result. Patients were classified as T+ or T− if CSF p-tau concentrations were higher or lower than the cut-off value, respectively. Patients were classified as N+ if at least one neurodegeneration biomarker was positive (CSF t-tau higher than the cut-off value or positive FDG-PET). Patients were further classified as carriers of AD pathology (AP+) when A+ was associated with T+ (regardless of N classification), or as non-carriers (AP−) when they were classified as A− (regardless of T and N classification), A+/T−/N− or A+/T−/N+ 11 . Using previously described procedures, patients were further classified according to both diagnosis (SCD, MCI, AD-d) and ATN classification (AP− and AP+) as follows: SCD AP−, SCD AP+, MCI AP−, MCI AP+, AD-d (all the AD-d patients were AP+) 11 .
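The AP+/AP− rule above can be summarized in a few lines; the sketch below is our illustration under those definitions, not the authors' software:

```python
# Illustrative sketch of the A/T(N)-based classification described
# above: a, t, n are biomarker calls (True = positive); AP+ means
# A+/T+ regardless of N, everything else in this cohort is AP-.

def ap_status(a: bool, t: bool, n: bool) -> str:
    """Classify a patient as carrier (AP+) or non-carrier (AP-)."""
    if a and t:
        return "AP+"   # A+/T+, any N
    return "AP-"       # A-, or A+/T-/N-, or A+/T-/N+

assert ap_status(a=True, t=True, n=False) == "AP+"
assert ap_status(a=True, t=False, n=True) == "AP-"
```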
Plasma p-tau181 and NfL analysis
Blood was collected by venipuncture into standard polypropylene EDTA test tubes (Sarstedt, Nümbrecht, Germany) and centrifuged within 2 h at 1300 rcf at 4 °C for 10 min. Plasma was isolated and stored at −80 °C until testing. Plasma NfL analysis was performed with the Simoa NF-Light SR-X kit (cat. No. 103400) for human samples provided by Quanterix Corporation (Lexington, Massachusetts) on the automated Simoa SR-X platform (GBIO, Hangzhou, China), following the manufacturer's instructions. The lower limits of quantification and detection provided by the kit were 0.316 and 0.0552 pg/mL, respectively. The plasma NfL concentrations in all samples were measured in a single run. Quality controls with a low NfL concentration of 5.08 pg/mL and a high NfL concentration of 169 pg/mL were included in the array and assessed together with the samples. The NfL assay results were consistent with the expected values, exhibiting a coefficient of variation below 20%.
The Simoa Human p-tau181 Advantage V2 kit (item #103714, provided by Quanterix Corp., Billerica, MA, USA) was used for the quantitative determination of p-tau181 in plasma samples. The analytical lower limit of quantification (LLOQ) of the kit was 0.085 pg/mL, while the kit limit of detection (LOD) was 0.041 pg/mL (range 0.018-0.060 pg/mL). For the run setup, 7 calibrators and 2 controls, provided by Quanterix, were required for the analysis. The calibrators were used to build a calibration curve from serial measurements; the controls were at the lower and higher target concentrations. Plasma samples and controls were diluted 4×. Calibrators, controls and samples were run in duplicate and measured in a single run 11,12,22 .
Statistical analysis
All statistical analyses were performed using IBM SPSS Statistics software version 25 (SPSS Inc., Chicago, Illinois) and the computing environment R 4.2.3 (R Foundation for Statistical Computing, Vienna, 2013). All p values were two-tailed and the significance level for all analyses was set at p = 0.05. Distributions of all variables were assessed using the Shapiro-Wilk test. As both plasma p-tau181 and NfL were not normally distributed, we applied a log10 transformation. This transformation resulted in a more normally distributed dataset that met the assumptions of the statistical tests we planned to use. We conducted descriptive statistics using means and standard deviations for continuous variables and frequencies or percentages and 95% confidence intervals (CIs) for categorical variables. We used the t-test for comparisons between two groups, one-way analysis of variance (ANOVA) with Bonferroni post hoc test for comparisons among three or more groups, Pearson's correlation coefficient to evaluate correlations between groups' numeric measures, and chi-squared tests to compare categorical data. To adjust for possible confounding factors, we used multiple regression analysis. We performed a logistic regression analysis to define a combined model including plasma p-tau181 and NfL. We constructed receiver-operating characteristic (ROC) curves to evaluate the performance of plasma p-tau181, NfL and the combined model (NfL + p-tau181) in predicting ATN status. We used binomial logistic regression to ascertain the effect of plasma p-tau181 and NfL on the risk of presenting AD pathology. We calculated effect sizes using Cohen's d for normally distributed numeric measures, η2 for ANOVA and Cramér's V for categorical data. Cohen's k was used to explore concordance between plasma p-tau181 and NfL.
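As a minimal sketch of this analysis pipeline (our illustration in Python rather than the authors' SPSS/R workflow; the file and column names are hypothetical):

```python
# Illustrative sketch: fit the combined logistic model on
# log10-transformed biomarkers and compare single vs combined AUCs.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

df = pd.read_csv("cohort.csv")          # hypothetical data file
X = np.log10(df[["ptau181", "nfl"]])    # log10-transform, as in the text
y = df["ap_positive"]                   # 1 = AP+, 0 = AP-

model = LogisticRegression().fit(X, y)
combined_score = model.predict_proba(X)[:, 1]

print("AUC p-tau181 :", roc_auc_score(y, X["ptau181"]))
print("AUC NfL      :", roc_auc_score(y, X["nfl"]))
print("AUC combined :", roc_auc_score(y, combined_score))
```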
Ethics approval and consent to participate
Local ethics committees (Comitato Etico Regione Toscana-Area Vasta Centro) approved the study at each site, and all participants provided written informed consent. The study was conducted according to the Declaration of Helsinki.
Distribution of plasma p-tau181 and NfL across diagnostic groups
Demographic features and differences among diagnostic groups are summarized in Table 1. One MCI patient and two AD-d patients had impaired renal function (eGFR 47.7, 58.3 and 53.0 mL/min/1.73 m2), with no differences in the proportion of renal impairment among the SCD, MCI and AD-d groups.
Accuracy of plasma p-tau181 and NfL in predicting AP status
We performed logistic regression analyses considering, in each analysis, A, T, N and AP status as dependent variables and plasma p-tau181 and NfL levels as covariates to obtain a combined model (NfL + p-tau181) in SCD and MCI, considered both separately and together (Supplementary materials). We did not consider AD-d patients, since they were all AP+. All the regression models were statistically significant and are described in the Supplementary materials.
We performed ROC curve analyses to evaluate the diagnostic accuracy of plasma p-tau181, NfL and the combined model NfL + p-tau181. AUCs for p-tau181, NfL and the combined model NfL + p-tau181 are reported in Table 3.
Both plasma p-tau181 and NfL presented good accuracy in discriminating A+ from A−, T+ from T−, N+ from N−, and AP+ from AP− patients in SCD and MCI separately and in the whole SCD + MCI group. The combined model did not significantly improve on the accuracy of p-tau181 and NfL. However, despite not reaching statistical significance, the combined model showed excellent accuracy, with an AUC of 0.93 in SCD and MCI separately and of 0.92 in the whole SCD + MCI group (Fig. 2).
Concordance between plasma p-tau181 and NfL in SCD and MCI
Cohen's k was significant (p < 0.001) with a value of 0.50, indicating a moderate concordance between plasma p-tau181 and NfL.
Effect of plasma biomarkers on the risk of presenting AD pathology
To estimate the risk of being AP+ based on the positivity of p-tau181 and/or NfL in SCD and MCI patients, we performed a logistic regression analysis using p-tau181 and NfL as dichotomized (positive or negative) variables.
Using the regression coefficients associated with the two covariates in the logistic model (p-tau181 and NfL), we defined the regression equation estimating the risk of presenting an underlying AP status for each combination of risk factors. The model has the form

logit(p) = ln[p/(1 − p)] = β0 + β1·x1 + β2·x2

where p represents the probability that the event (i.e. "presenting an underlying AP status") might happen, β0 is the constant, and β1 and β2 are the regression coefficients associated with the risk factors x1 (plasma p-tau181) and x2 (plasma NfL). Entering the constant and the coefficients found in our logistic model, we obtained the equation that enabled us to estimate the probability that the event "AP status" might happen. For each risk factor, the value was "1" if the condition was satisfied (positive p-tau181 or positive NfL) and "0" if the condition was not satisfied (negative p-tau181 or negative NfL).
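As a numerical check (our sketch; the constant and coefficients are those reported with Table 6 below), the logistic model reproduces the four risks quoted in the abstract:

```python
# Worked check of the fitted model reported in Table 6:
# logit(p) = -2.10 + 1.74 * ptau_positive + 3.28 * nfl_positive
import math

def ap_risk(ptau_positive: int, nfl_positive: int) -> float:
    logit = -2.10 + 1.74 * ptau_positive + 3.28 * nfl_positive
    return 1.0 / (1.0 + math.exp(-logit))

for ptau, nfl in [(0, 0), (1, 0), (0, 1), (1, 1)]:
    print(ptau, nfl, f"{ap_risk(ptau, nfl):.2%}")
# 0 0 10.91% | 1 0 41.10% | 0 1 76.49% | 1 1 94.88%
```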
Discussion
There is an urgent need to move the use of plasma biomarkers from research settings to clinical practice, to define their correct application and to determine whether their combined use might increase the accuracy of early AD detection. Our study fits into this scenario by exploring the combined use of plasma p-tau181 and NfL in SCD and MCI patients. First, differences in both plasma p-tau181 and NfL levels among patients depended on the underlying pathology, not on diagnostic categories or the severity of cognitive decline. In more detail, plasma p-tau181 and NfL levels were similar in AD-d patients and in MCI and SCD patients with underlying AD pathology. These findings suggest that differences between plasma biomarker levels in SCD, MCI and AD-d patients were driven not by cognitive levels but rather by the underlying pathological substrate, as previously proposed by other studies [10][11][12][23][24][25][26] .
The accuracies of plasma p-tau181 and NfL in detecting AD pathology were substantially similar in SCD and MCI patients. This might suggest that the accuracy of blood biomarkers is similar in both prodromal and preclinical AD, and thus that they might be promising tools in the very early stages of cognitive decline. Both plasma p-tau181 and NfL showed good accuracy in detecting AP status in SCD and MCI, both separately and together. Our results are in line with previous works showing a good accuracy of plasma p-tau181 in discriminating Aβ+ from Aβ− MCI and also Aβ+ from Aβ− cognitively unimpaired subjects 23 , and Aβ− healthy controls from Aβ+ "objectively defined" SCD 27 . However, the accuracy that we found was higher than those previously reported, probably because we did not consider isolated Aβ positivity alone, but in combination with the T biomarkers of the A/T(N) system, to define the presence of AD in our SCD and MCI patients.
Other studies have reported poorer accuracy of plasma NfL than that found in our work. Several works have demonstrated that plasma NfL has only fair accuracy in discriminating AD dementia from other neurodegenerative conditions, such as FTD [28][29][30] , and a moderate performance in discriminating AD-d patients and MCI due to AD from cognitively unimpaired subjects 30 . However, it has been reported that NfL accuracy is higher in SCD and MCI than in patients with dementia 12 .
The combined model (NfL + p-tau181) yielded excellent accuracy, reaching 0.92, even if this value was not significantly higher than the accuracy of each single biomarker. This is not surprising, as plasma p-tau181 has already shown high accuracy in detecting AD pathology 24 . On the other hand, the combined model NfL + p-tau181 was the only one presenting an AUC exceeding 0.90. This might suggest that the combination of plasma biomarkers gives added value in predicting AD.
Although other works have proposed cut-offs for plasma biomarkers, to the best of our knowledge no previous studies have tried to explore the concordance and discordance of plasma biomarkers in predicting AD pathology. We investigated the concordance between plasma p-tau181 and NfL in discriminating SCD and MCI patients carrying AD pathology from non-carriers according to the cut-offs previously defined by our group 11,12 , dichotomizing plasma biomarker values into "positive" and "negative". Interestingly, plasma p-tau181 and NfL showed only a moderate concordance. Therefore, we descriptively evaluated our cohort of patients who had undergone CSF analysis in order to shed light on the concordant and discordant cases, and thus on how to interpret the results of plasma biomarkers. First, double positive concordance (both plasma p-tau181 and NfL positive) led to the detection of AD pathology in 94% of cases. Nevertheless, double negative concordance carried a false-negative risk of 10%, thus not allowing AD pathology to be completely excluded. The most intriguing cases were those with discordant plasma biomarkers. Indeed, the only case with negative p-tau181 and positive NfL had a CSF AD profile. We can therefore hypothesize that, despite the negativity of plasma p-tau181, positive plasma NfL raises the suspicion of an underlying neurodegenerative disease, warranting further investigation. On the other hand, discordant plasma biomarkers with positive p-tau181 but negative NfL identified AD pathology in 43% of SCD and MCI patients, while the remaining 57% were non-carriers of AD pathology. Consequently, isolated positivity of plasma p-tau181 was able to detect an underlying AD pathology in SCD and MCI patients, despite carrying a high risk of false positivity.
Considering the moderate concordance and the risk of false positives and negatives, these results support the idea that the combined use of plasma biomarkers may provide a better and more accurate detection of AD.
Finally, to further estimate the risk of presenting AD pathology based on the positivity of p-tau181 and/or NfL, we performed a logistic regression analysis using p-tau181 and NfL as dichotomized variables and estimated the risk of presenting an underlying AD pathology for each risk factor combination (Table 6). In line with the interpretation of concordant and discordant cases, we found that the highest risk of presenting AD pathology occurs with concurrent positivity of plasma p-tau181 and NfL (94%). The risk decreases to 76% in case of isolated positivity of plasma NfL and to 43% in case of isolated p-tau181. These "discordant cases" are a challenging question and need to be further investigated via other biomarkers. First, the risk is significantly lower in case of isolated positive p-tau181 than in case of isolated NfL. Although it has been widely demonstrated that plasma p-tau181 is a specific biomarker of typical AD tauopathy, our results showed that its isolated positivity indicates a moderate risk of presenting an underlying AD pathology in MCI and SCD patients. These findings might be explained by the fact that our study is based on a relatively small cohort, and the cut-offs proposed by our group need to be validated in further studies and in other populations; moreover, current research is comparing several isoforms of p-tau (p-tau217, p-tau181 and p-tau231) and several types of measures (mass spectrometry assays vs Simoa immunoassays). It has recently been demonstrated that mass spectrometry-based measures of p-tau217 show the best performance and accuracy in discriminating Aβ+ from Aβ− MCI and progressors to dementia from non-progressors. Consequently, we might speculate that the risk of presenting an underlying AD pathology might be higher if another isoform of plasma p-tau (with higher specificity) were used 31 . The higher risk of AD pathology in case of isolated positivity of NfL compared with isolated positivity of p-tau181 might be due to two factors: first, the high sensitivity of NfL in detecting a neurodegenerative disease 28 , in particular in raising the suspicion of AD pathology in SCD and MCI patients 12 ; second, the lower specificity of plasma p-tau181 compared with other promising isoforms, such as p-tau217 32 .

Table 6. Risk of presenting AP status for each risk factor combination. Overall risk was derived from the following regression model equation: logit p (presenting AD pathology) = −2.10 + 1.74 × (plasma p-tau181) + 3.28 × (plasma NfL). For each risk factor, the value was "1" if the condition was satisfied (+) and "0" if the condition was not satisfied (−).
Finally, the risk does not fall to zero but remains at 10% even if both plasma biomarkers are negative. This suggests that patients who are 'double positive' may reliably exhibit an underlying AD pathology, although we cannot exclude the presence of such pathology in those who are 'double negative'. The inability to exclude AD pathology in case of double plasma biomarker negativity is an intriguing and interesting finding. It may be linked to the risk of false negatives associated with the cut-off values proposed by our laboratory. Perhaps cut-off harmonization could reduce this risk. On the other hand, these data could be interpreted in the context of the selected patient population, since the patients included in this study are not healthy controls but patients with a cognitive disorder, either objective or subjective.
Based on our results, we propose here a flowchart to guide the possible use of combined plasma biomarkers, in particular p-tau181 and NfL, in the clinical setting, considering patients in the early stages of cognitive decline (i.e., SCD and MCI) (Fig. 3).
• If both plasma p-tau181 and NfL are negative (concordant negative), an underlying AD pathology cannot be excluded. Therefore, close clinical and neuropsychological follow-up is recommended to assess any potential progression of the disturbance.
• If both plasma p-tau181 and NfL are positive (concordant positive), there is a high suspicion that the cognitive impairment reported by the patient is due to AD. At present, such patients would undergo more accurate investigations to confirm the diagnosis, in particular CSF Aβ1-42/Aβ1-40, t-tau and p-tau. If cut-offs were validated, and a high accuracy, sensitivity and specificity of plasma biomarkers were established, we could hypothesize that CSF analysis might no longer be necessary.
• In cases of biomarker discordance, with positive NfL and negative p-tau181, a neurodegenerative disease is highly suspected, warranting further invasive investigations. In particular, we suggest performing FDG-PET in order to identify potential hypometabolic patterns indicative of neurodegenerative disease 33 :
• If FDG-PET shows hypometabolism suggestive (or at least partially indicative) of AD, CSF biomarker analysis (Aβ1-42/Aβ1-40, t-tau and p-tau) may be recommended.
• If FDG-PET indicates hypometabolism consistent with another neurodegenerative disease, other biomarkers, if validated, could be used in the future (e.g. α-synuclein from olfactory mucosa swabbing 34 ); at present, we advise proceeding with clinical follow-up.
• In cases of discordance with negative NfL and positive p-tau181, the data are inconclusive; therefore, it is advisable to continue the diagnostic process with additional investigations, particularly CSF biomarker analysis (Aβ1-42/Aβ1-40, t-tau and p-tau), to confirm the suspicion of an underlying AD pathology suggested by plasma p-tau181 positivity or to rule out a false positive. A compact sketch of this decision logic follows the list.
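The decision logic of the flowchart can be summarized as follows (our sketch, mirroring the bullet points above; it is illustrative, not clinical software):

```python
# Illustrative sketch of the Fig. 3 decision logic for combined
# plasma p-tau181 and NfL results in SCD/MCI patients.

def next_step(ptau_positive: bool, nfl_positive: bool) -> str:
    if ptau_positive and nfl_positive:
        return "High suspicion of AD: confirm with CSF biomarkers"
    if not ptau_positive and not nfl_positive:
        return "AD not excludable: clinical/neuropsychological follow-up"
    if nfl_positive:   # NfL+ / p-tau181-
        return "Suspect neurodegeneration: FDG-PET, then CSF if AD-like"
    return "Inconclusive (p-tau181+ / NfL-): CSF biomarkers"

print(next_step(True, False))
```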
Considering these premises, as plasma biomarkers continue to advance toward approval for clinical use, it is crucial to recognize the ethical implications that may arise. In a scenario where effective treatments are not yet available, the potential for diagnosing or strongly suspecting AD solely from a blood analysis raises significant ethical considerations 35 . Such a diagnosis could have diverse consequences across different stages, raising questions about psychological well-being, personal autonomy and societal implications. For instance, individuals with high plasma biomarker levels and only subtle cognitive disturbances may face uncertainties regarding activities such as driving or maintaining employment. Therefore, as plasma biomarkers transition into clinical practice, it becomes imperative to carefully consider the ethical dimensions and the potential impact on patients' lives.
Our work presents some limitations. First, the relatively small number of patients might reduce the power and generalizability of our study. Second, being a single-center study, there may be biases related to assessment and diagnosis procedures. Third, we did not include a sample of healthy control individuals. Fourth, the design of this study is cross-sectional: a longitudinal study should be performed in order to evaluate how plasma biomarker levels change over time.
On the other hand, our study has some remarkable strengths. First of all, to the best of our knowledge, this is one of the first studies to explore the concordance and discordance of plasma biomarkers in detecting AD pathology in MCI and SCD. Secondly, patients were classified as carriers or non-carriers of Alzheimer's pathology considering not only A status but also the positivity of T and/or N biomarkers, while previous studies have considered the positivity of amyloid biomarkers alone. Our approach increases the probability that patients with mild objective or subjective cognitive decline are real carriers of Alzheimer's pathology. Indeed, although A+/T−/N− patients are considered part of the Alzheimer's continuum, they are properly classified as carriers of "Alzheimer's pathological changes" and not as Alzheimer's Disease patients. Moreover, the presence of amyloid pathology alone in the early stages of cognitive decline might not be specifically prognostic of conversion to dementia 36 . Third, this study clearly focused not only on MCI but also on SCD, since the latter is an even earlier stage of cognitive decline, thus representing an intriguing target population in which to identify those people at greatest risk of developing AD dementia. Finally, we suggested a flowchart for the potential use of plasma biomarkers in patients presenting with early, mild cognitive symptoms, to illustrate a possible scenario for the clinical applicability and interpretation of plasma biomarkers.
In conclusion, our work fits well into the current research landscape, adopting a new paradigm that integrates peripheral biomarkers for a prompt diagnosis of AD focusing on pre-dementia stages, such as SCD 37 . Indeed, our work provides clinical insights into the use of plasma biomarkers for the early detection of AD, with potential implications for the clinical management of patients with SCD and MCI using these non-invasive tools. Moreover, we suggest that the combined use of plasma p-tau181 and NfL may give added value, providing more information for the correct interpretation and detection of AD pathology. The combined use of plasma biomarkers may be potentially applicable in clinical practice, particularly in SCD patients, leading to the identification of those individuals at greatest risk of developing AD dementia, who seem to be the ideal group in which to intervene with a specific treatment in order to stop neurodegeneration.
Figure 2. ROC curves for the accuracy of plasma p-tau181, NfL and the combined model (NfL + p-tau181) in distinguishing A+ from A−, T+ from T−, N+ from N−, and AP+ from AP− patients in SCD and MCI, considered both separately and together.
Figure 3. Flowchart for the potential use and interpretation of biomarkers in the clinical setting for the early detection of Alzheimer's Disease.
Table 1. Demographic features of the Subjective Cognitive Decline (SCD), Mild Cognitive Impairment (MCI) and Alzheimer's Disease dementia (AD-d) groups. Values are reported as means and standard deviations for continuous variables and as frequencies or percentages for categorical variables. Statistically significantly different values between groups are reported in bold. M, males; F, females; MMSE, Mini Mental State Examination.
Table 2. Demographic features of diagnostic and biomarker groups. Values are reported as means and standard deviations for continuous variables and as frequencies or percentages for categorical variables. Statistically significantly different values between groups are reported in bold. M, males; F, females; MMSE, Mini Mental State Examination.
Table 3. Diagnostic accuracy of p-tau181, NfL and the combined model NfL + p-tau181 in predicting A, T, N and AP status. Values quoted are accuracies (in percentages) with CIs in brackets.
Table 4. Concordance between plasma p-tau181 and NfL in SCD and MCI. Values quoted are frequencies of negative and positive patients for each biomarker.
Theoretical modeling of airways pressure waveform for dual-controlled ventilation with physiological pattern and linear respiratory mechanics
This paper describes the theoretical treatment performed for the geometrical optimization of advanced, improved-shape waveforms used as airways pressure excitation for controlled breathings in dual-controlled ventilation of anaesthetized or severely brain-injured patients, whose respiratory mechanics can be assumed linear. Advanced means insensitive to patient breathing activity as well as to ventilator settings, while improved-shape means, in comparison with the conventional square waveform, a progressive approach towards the physiological transpulmonary pressure and respiratory airflow waveforms. Such functional features, along with the best ventilation control for the specific therapeutic requirements of each patient, can be achieved through the implementation of both diagnostic and compensation procedures effectively carried out by the Advanced Lung Ventilation System (ALVS), already successfully tested with a square waveform as airways pressure excitation. Triangular and trapezoidal waveforms have been considered as airways pressure excitation. The results show that the latter fully meets the requirements for a physiological pattern of endoalveolar pressure and respiratory airflow waveforms, while the former exhibits less physiological behaviour but is nevertheless recommended periodically for adequately performing the powerful diagnostic procedure.
INTRODUCTION
The clinical applications of assisted/controlled ventilation are mainly devoted to patients treated with anaesthesia, admitted to Intensive Care Units or affected by respiratory insufficiency syndrome [1][2][3].
When spontaneous breathing of such patients is absent or forbidden for the entire time of treatment, controlled ventilation is required.The respiratory pattern during controlled ventilation shows only controlled breathings, i.e. breathings for which the control of lung ventilation is completely carried out by an external ventilator, in series with time [4,5].
Otherwise, when spontaneous breathing is present, even if only intermittently or below the standard physiological level, assisted/controlled ventilation is recommended. Assisted/controlled ventilation is so called because, while allowing the patient the possibility of spontaneous breathing at will or according to capability, it includes all those modalities or techniques in which the ventilator supplies the patient with a controlled breathing only after a long-lasting interval of apnea (assisted ventilation) or upon detection of a very weak effort of spontaneous breathing (triggered ventilation). The respiratory pattern during assisted/controlled ventilation shows both controlled and spontaneous breathings in random series with time [6][7].
The controlled breathings supplied to the patient during controlled or assisted/controlled ventilation can be properly classified considering the primary physical parameters controlled by the ventilator during inspiration, irrespective of load variations or fluctuations (the respiratory characteristics of the patient) as well as of ventilator settings [8,9].
Volume-controlled ventilation (VCV) and pressure-controlled ventilation (PCV) refer to different modalities in which, during inspiration, the ventilator supplies the load (lungs) with a pre-established volume (tidal or minute volume) through the selected respiratory airflow waveform, or applies to the load a pre-established airways pressure waveform, respectively [8,9].
The historical background of both VCV and PCV, as well as their advantages and disadvantages in different clinical applications of assisted/controlled ventilation, have been extensively described elsewhere [9,10]. In summary, considering its more physiological character along with the lower level of intrinsic pathological risks and functional failures involved, PCV is nowadays certainly the most widely adopted in clinical practice [9][10][11][12].
The functional disadvantage of PCV, namely that it does not provide control of lung volume (tidal or minute), has been overcome with the implementation of dual-controlled ventilation (DCV), i.e. PCV with ensured tidal or minute volume [13,14]. In detail, DCV is an advanced form of PCV in which the magnitude of the selected airways pressure waveform is automatically regulated by feedback control so as to deliver during the inspiration time either the required tidal volume or, considering the current breathing frequency, the pre-established minute volume [9,15]. This is the so-called DCV "breath to breath" mode, representing the most widespread form of DCV in clinical practice [15,16]. Differently, the so-called DCV "within a breath" mode is a DCV mode in which the ventilator switches from pressure to volume control in the middle of the breath [15,16].
In most cases during assisted/controlled ventilation, the respiratory system of otherwise healthy anaesthetized or severely brain-injured patients exhibits a steady and reproducible response to controlled breathings, if evaluated as a whole. Moreover, the breathing dynamics involved is considerably reduced on account of the small tidal volumes required. Therefore, the respiratory mechanics of such patients can properly be assumed steady and linear [9]. According to the PCV excitation hypothesis along with the steady and linear respiratory mechanics assumption, only the DCV "breath to breath" mode will be considered in the present work. Moreover, the DCV "breath to breath" mode is perfectly compatible with the feedback control adopted for the ventilation process [9], which regulates the operative parameters only between different breathings evaluated as a whole, i.e. in steady conditions, and not within the transient time of each breath [9,15,16].
Until now, PCV and DCV have been mainly implemented with a square waveform as airways pressure excitation, i.e. two different constant levels of airways pressure applied to the patient during inspiration and expiration [12,14,16]. Such a strong limitation in the waveform modeling of the airways pressure controlled by the ventilator, resulting from simplified hardware and software design, drastically reduces the functional versatility of the ventilator performance.
Among the different systems proposed for removing this limitation [8,9,[17][18][19][20][21][22][23][24][25][26][27][28] , and thus for evaluating the effect of varying inspiratory airflow waveforms on clinical parameters of mechanically ventilated patients [29][30][31] , the Advanced Lung Ventilation System (ALVS) has been conceived and designed for the waveform optimization of the airways pressure excitation when controlled breathings have to be applied during assisted/controlled ventilation to anaesthetized or severely brain-injured patients, whose respiratory mechanics can be assumed steady and linear [9,[32][33][34][35][36] . The functional flexibility and versatility of ALVS are both extremely useful for research activity with an optimal and advanced ventilator, as well as for its laboratory and clinical development and testing [9].
The present work describes both the theory and the ALVS settings developed for modeling a more realistic approximation of the airways pressure excitation to the physiological transpulmonary pressure waveform. The optimization of such excitation, i.e. of the airways pressure waveform, has been carried out in order to obtain a more physiological reaction of the patient, i.e. more physiological respiratory airflow and endoalveolar pressure waveforms.
METHODS
The optimization of controlled breathing during assisted/controlled ventilation obtained through the functional features of ALVS has been extensively reported and discussed elsewhere [9]. Concerning the controlled breathings applied to the patient during assisted/controlled ventilation, in order to improve over conventional PCV with ensured tidal or minute volume, i.e. dual-controlled ventilation (DCV), ALVS has been designed to perform two successive functional steps.
The first step consists in the optimization of the ventilation control with the conventional square waveform as airways pressure excitation applied to the patient. This result has already been achieved by means of two effective functional procedures: the diagnostic and the compensation procedures.
The theoretical approach on which the optimization of the ventilation control, as well as both the diagnostic and the compensation procedures, is founded has been extensively reported in a previous paper [9], in which the respiratory mechanics of the patients considered, i.e. anaesthetized or severely brain-injured patients, was properly assumed linear. Moreover, the ventilation control works by feedback regulation acting after the acquisition of each controlled breathing accounted as a whole, i.e. in steady conditions.
The diagnostic procedure establishes the optimal times of both inspiration and expiration, taking into account the current respiratory characteristics (airways resistance and lung compliance) of the patient and his diagnostic evaluations. Practically, the procedure sets the times of both inspiration and expiration to about five times the current inspiratory and expiratory time constants, the determination of which, along with other useful diagnostic parameters, is obtained in real time by the ALVS monitoring system [37][38][39] . The determination of both the airways resistance and the lung compliance of the patient is currently performed by the diagnostic procedure with high accuracy and without any unfavourable deformation of the respiratory pattern otherwise introduced by the required artificial interruption of the respiratory airflow [40,41]. The high accuracy results from the application of the compensation procedure, described below, since it allows the correct implementation of the results available from the theory developed assuming a real square waveform as airways pressure excitation [9,[42][43][44] .
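As an illustration of the five-time-constant rule used by the diagnostic procedure (our sketch with synthetic data, not ALVS code), the time constant can be estimated from the time at which a measured exponential airflow decay reaches its ~1% residual (e^(−5) ≈ 0.0067), i.e. the end of the transient:

```python
# Sketch: estimate a respiratory time constant from the measured time
# at which an exponential airflow decay reaches its ~1% residual,
# then apply the 5-tau rule used by the diagnostic procedure.
import numpy as np

tau_true = 0.5                          # s, hypothetical R*C value
t = np.arange(0.0, 5.0, 1e-3)           # s
flow = np.exp(-t / tau_true)            # normalized expiratory airflow

t_end = t[np.argmax(flow <= np.exp(-5))]   # first time below e^-5 of peak
tau_est = t_end / 5.0                      # 5-tau rule: t_TE = 5 * tau
print(f"transient end = {t_end:.3f} s, estimated tau = {tau_est:.3f} s")
```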
The compensation procedure stabilizes the airflow across the external resistance, which controls the airways pressure applied to the patient, during the whole respiratory time. The procedure is performed through the variation, during both inspiration and expiration, of the internal resistance of the ALVS generator around the steady equilibrium value assumed during apnea, according to the respiratory airflow waveform resulting from the patient's breathing activity and characteristics. The determination of the respiratory airflow waveform is obtained in real time by the ALVS monitoring system. In such a way, ALVS behaves like an ideal airways pressure generator, making possible a real square waveform as airways pressure excitation through a proper square waveform of the external resistance of ALVS controlling the airways pressure applied to the patient, and eliminating the airways pressure distortion induced by the dependence on the current value of the load (airways resistance and lung compliance) and on its variations.
The experimental results obtained by ALVS connected to a well-suited and versatile lung simulator, implementing both the diagnostic and the compensation procedures for an advanced square waveform as airways pressure excitation, are completely in agreement with the theoretical ones, showing clearly that the optimization of the ventilation control has been reached. In particular, concerning the lung volume control, the results point out that the tidal and minute volumes are independent of the airways resistance and of the lung compliance, respectively [9].
The last results are very interesting from both clinical and engineering point of view since an increase of airways resistance (obstructive process) or a reduction of lung compliance (restrictive process) does not affect the control of tidal or minute volume, respectively, avoiding a critical regulation of the airways pressure levels applied to patient.
The second step consists in the optimization of the ventilation control with waveforms of improved shape as airways pressure excitation applied to patients, considering their current clinical conditions and specific therapeutic requirements. Improved shape means a more realistic approximation of the airways pressure waveform to the physiological transpulmonary pressure waveform, inducing a more physiological reaction of the patient, i.e. more physiological respiratory airflow and endoalveolar pressure waveforms.
The implementation of the diagnostic procedure in these cases also ensures that the optimal times of both inspiration and expiration are retained, taking anyhow into account the current respiratory characteristics of the patient (airways resistance and lung compliance) and his diagnostic evaluations. Moreover, the implementation of the compensation procedure in these cases also, by making the selected airways pressure waveform insensitive to the patient's respiratory characteristics, allows any airways pressure waveform of clinical interest during both inspiration and expiration through an identical shape of the external resistance waveform which controls the airways pressure applied to the patient.
In the present work, from a theoretical point of view, two waveforms of progressively improved geometrical shape with respect to the conventional square waveform have been considered as airways pressure excitation applied to the patient: triangular and trapezoidal. According to the physiopathological and clinical conditions of the patients considered, as well as to the physical characteristics of controlled breathings in assisted/controlled ventilation modalities, the theoretical treatment in both cases has been carried out evaluating each controlled breathing as a whole, i.e. in steady conditions, and assuming linear respiratory mechanics [9,[32][33][34][35][36] .
Advanced Square Waveform as Airways Pressure Excitation (AD_SQUARE)
Figure 1 shows the airways (p_AW(t)) and endoalveolar (p_EA(t)) pressures, as well as the respiratory airflow (φ_RES(t)), as functions of time (t), resulting from the application to the patient of the advanced square waveform as airways pressure excitation (AD_SQUARE). Times, variables and parameters relating to inspiration and expiration are denoted by the subscripts (i) and (e), respectively. As depicted in Figure 1(b), on account of their opposite directions, the inspiratory (φ_INS(t_i)) and expiratory (φ_EXP(t_e)) airflows are conventionally considered positive and negative quantities, respectively. The most relevant results obtained in the previous work [9] are summed up as follows. If the upper (PI) and lower, or external positive end-expiratory pressure (PEEP_EXT), constant levels of the square waveform of p_AW are kept for inspiration (TI) and expiration (TE) times equal to t_TI and t_TE, respectively, the following expressions occur:

TI = t_TI = 5·τ_INS = 5·R_INS·C_P    (1)
TE = t_TE = 5·τ_EXP = 5·R_EXP·C_P    (2)

C_P = v_P(t)/p_EA(t)    (3)

p_AWi(t_i) = PI,  0 ≤ t_i ≤ TI    (4)
p_AWe(t_e) = PEEP_EXT,  0 ≤ t_e ≤ TE    (5)
p_EAi(t_i) = PI − (PI − PEEP_EXT)·e^(−t_i/τ_INS)    (6)
p_EAe(t_e) = PEEP_EXT + (PAP − PEEP_EXT)·e^(−t_e/τ_EXP)    (7)
p_EAi(0) = p_EAe(TE)    (8)
where v_P(t) is the lung volume as a function of time, and τ_INS and τ_EXP are the inspiratory and expiratory time constants.
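As a worked numerical example (with illustrative values of ours, not taken from the paper): for R_INS = R_EXP = 10 cmH2O·s/L and C_P = 0.05 L/cmH2O, the time constants are τ_INS = τ_EXP = 0.5 s, so (1) and (2) give TI = TE = 2.5 s and TR = TI + TE = 5 s, corresponding to a breathing frequency of 12 acts/min — consistent with the very low breathing frequencies (10-12 acts/min) considered below.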
According to the assumption of linear respiratory mechanics for controlled breathings accounted as a whole, i.e. in steady conditions [9], the static lung compliance (C_P), defined by (3), can be considered constant during the whole respiration time, while the different values assumed by the respiratory airways resistance (R_RES) during inspiration (R_INS) and expiration (R_EXP) can both be considered constant.
According to (6) and (7), the maximum or peak (PAP) and minimum or total positive end-expiratory pressure (PEEP_TOT) values of p_EA, assumed at the end of inspiration (t_i = TI) and expiration (t_e = TE), respectively, can easily be detected, since they equal the constant PI and PEEP_EXT values assumed by p_AW during inspiration (0 ≤ t_i ≤ TI) and expiration (0 ≤ t_e ≤ TE), respectively. Equation (8) establishes that the value assumed by p_EA at the beginning of inspiration (p_EAi(0)) must equal that assumed at the end of the last expiration (p_EAe(TE)).
Concerning v_P, considering (3), (6) and (8) at the beginning (t_i = 0) and at the end (t_i = TI) of inspiration, the following expressions result:

v_Pi(0) = C_P·PEEP_EXT = FRC    (9)
v_Pi(TI) = C_P·PI = FRC + V_TID    (10)
V_TID = C_P·(PI − PEEP_EXT)    (11)

FRC and V_TID denote the functional residual capacity and the tidal volume delivered to the patient at every inspiration.
If TI and TE are expressed in seconds, considering that the breathing period (TR) equals the sum TI + TE, from both (1) and (2) the breathing frequency (FR), expressed in acts per minute, is defined as follows:

FR = 60/TR = 60/(TI + TE) = 12/[C_P·(R_INS + R_EXP)]    (12)

Considering both (11) and (12), the so-called minute volume (V_MIN), i.e. the volume delivered to the patient every minute, is given by the following expression:

V_MIN = FR·V_TID = 12·(PI − PEEP_EXT)/(R_INS + R_EXP)    (13)

As pointed out in § 2, (11) and (13) establish that V_TID and V_MIN are independent of R_RES (both R_INS and R_EXP) and of C_P, respectively. That is extremely relevant from both the clinical and the engineering points of view, since an increase of R_RES (obstructive process) or a reduction of C_P (restrictive process) does not affect the control of V_TID or V_MIN, respectively, avoiding a critical regulation of the p_AW constant levels applied to the patient.
As is well known, the mean value (M) of a periodic function of time f_T(t) with period T is defined as follows:

M = (1/T)·∫_0^T f_T(t) dt    (14)

If p_AW(t) and p_EA(t) are considered as periodic functions of time with period TR, the mean p_AW(t) (MAP) and the mean p_EA(t) (MEP) assume the following expressions:

MAP = (1/TR)·∫_0^TR p_AW(t) dt    (15)
MEP = (1/TR)·∫_0^TR p_EA(t) dt    (16)

From Figure 1(a), (1), (2), (4), (5), (12), (15) and (16), it is easy to demonstrate that the MAP and MEP values of AD_SQUARE (MAP_squ and MEP_squ) result as follows (to within the 1% residual of the exponential transients):

MAP_squ = (PI·TI + PEEP_EXT·TE)/TR    (17)
MEP_squ = MAP_squ − V_TID·(R_INS − R_EXP)/TR    (18)
Advanced Improved-Shape Waveforms as Airways Pressure Excitation
The favourable results obtained with the implementation of AD_SQUARE in terms of ventilation control optimization (§ 3.1) motivate the theoretical effort of considering advanced, improved-shape waveforms as p_AW excitation applied to the patient. Advanced means insensitive to patient breathing activity as well as to ventilator settings. Improved-shape means, in comparison with the conventional square waveform, a progressive approach to the physiological transpulmonary pressure waveform, producing a more suitable reaction of the patient, i.e. a more realistic approximation of the φ_RES and p_EA waveforms to the physiological ones. For this reason, moving from AD_SQUARE, two waveforms of different geometrical shape which progressively approach the best solution are considered: triangular and trapezoidal. The problem to be solved consists in the proper smoothing of the vertical discontinuities of φ_RES occurring at the beginning of both inspiration and expiration, when φ_RES is reversed, as a response to the upward and downward vertical transitions of p_AW characteristic of the square waveform. Therefore, the elimination of such vertical transitions of p_AW is the most relevant change to be applied to AD_SQUARE.
Advanced Triangular Waveform as Airways Pressure Excitation (AD_TRIANG)
Figure 2 shows the advanced triangular waveform as p_AW excitation (AD_TRIANG). Unlike AD_SQUARE, where p_AW is kept constant during the whole time of inspiration (0 ≤ t_i ≤ TI), in AD_TRIANG p_AW increases linearly from the minimum or PEEP_EXT value to the maximum or peak (PIP) value, assumed at the beginning (t_i = 0) and at the end (t_i = TI) of the inspiration time (t_i), respectively. The linear increase of p_AW has been selected for smoothing the φ_RES discontinuity occurring at the beginning of every inspiration as a response to the upward vertical transition of p_AW in AD_SQUARE. As in AD_SQUARE, during the whole time of expiration (0 ≤ t_e ≤ TE) p_AW is kept constant at the PEEP_EXT value. AD_TRIANG can be carried out by connecting the patient's airways to an ideal generator creating a triangular waveform of p_AW. The electrical-equivalent circuit of the AD_TRIANG generator connected to the patient's airways is shown in Figure 3. The respiratory mechanics of the patient (lung simulator) has been treated with a steady and linear physical model consisting of the respiratory airways resistance (R_RES) connected in series with the lung compliance (C_P) [9,45,46]. This physical model does not include any inductance, on account of the negligible inertia of the airflow as well as of the airways, lung and chest tissues at the very low breathing frequencies involved (10-12 acts/min).
Inspiration Time
The application of Kirchhoff's second law to the circuit of Figure 3 provides the following equation:

p_AWi(t_i) = R_INS·φ_INS(t_i) + p_EAi(t_i)    (19)

AD_TRIANG (Figure 2) requires the following expression of p_AWi(t_i):

p_AWi(t_i) = PEEP_EXT + k_1i·t_i    (20)

where k_1i is the slope of the linear increase of p_AWi(t_i) with time (t_i).
As is well known, φ_INS(t_i) is defined as the time derivative of v_Pi(t_i):

φ_INS(t_i) = dv_Pi(t_i)/dt_i    (21)

Considering (3), and inserting both (20) and (21) into (19), the following equation results:

R_INS·C_P·dp_EAi(t_i)/dt_i + p_EAi(t_i) = PEEP_EXT + k_1i·t_i    (22)

To solve Eq. (22), i.e. to find the transient and steady expressions of v_Pi(t_i), it is useful to transform it from the time (t_i) to the Laplace (s) domain. On account of both (1) and (9), the solution, decomposed into partial fractions whose constants are determined by identification, leads by inverse Laplace transform to:

v_Pi(t_i) = C_P·[PEEP_EXT + k_1i·t_i − k_1i·τ_INS·(1 − e^(−t_i/τ_INS))]    (30)

The functions φ_INS(t_i) and p_EAi(t_i) can be determined considering (21) and (3), respectively:

φ_INS(t_i) = k_1i·C_P·(1 − e^(−t_i/τ_INS))    (31)
p_EAi(t_i) = PEEP_EXT + k_1i·t_i − k_1i·τ_INS·(1 − e^(−t_i/τ_INS))    (32)

The functions v_Pi(t_i), φ_INS(t_i) and p_EAi(t_i) are reported in Figures 4, 5 and 6, respectively. The difference between p_AWi(t_i) and p_EAi(t_i), p_i(t_i), can be determined from both (20) and (32) as follows (Figure 6):

p_i(t_i) = k_1i·R_INS·C_P·(1 − e^(−t_i/τ_INS))    (33)

The same result (33) could also be obtained by inserting (31) into (19). Considering (30), (31) and (32) at the beginning of inspiration (t_i = 0), the following expressions result:

v_Pi(0) = C_P·PEEP_EXT = FRC    (34)
φ_INS(0) = 0    (35)
p_EAi(0) = PEEP_EXT    (36)

(34) and (36) agree with (9) and (8), respectively (Figures 4 and 6), according to the same steady conditions occurring in AD_SQUARE at the beginning of inspiration (t_i = 0) and thus at the end of the last expiration (t_e = TE).
If TI is set equal to the time required for reaching the steady condition, i.e. the end of the transient inspiration time (t_TI = 5·τ_INS), then (20), (30), (31), (32) and (33) provide the following expressions (with e^(−5) ≈ 0.01):

PIP = p_AWi(TI) = PEEP_EXT + 5·k_1i·τ_INS    (37)
v_Pi(TI) = C_P·PEEP_EXT + 4.01·k_1i·C_P·τ_INS    (38)
φ_INS(TI) = 0.99·k_1i·C_P    (39)
PAP = p_EAi(TI) = PEEP_EXT + 4.01·k_1i·τ_INS    (40)
p_i(TI) = 0.99·k_1i·R_INS·C_P    (41)

(39) shows that φ_INS is still appreciable at the end of inspiration; this problem will shortly be removed with AD_TRAPEZ (§ 3.4). Both (33) and (41) establish that p_i increases from zero to a saturation value (Figure 6) given by the product k_1i·R_INS·C_P = k_1i·τ_INS, while (32) establishes that during the transient time (0 ≤ t_i ≤ TI) the second time derivative of p_EAi(t_i) is positive, the time rate of p_EAi(t_i) increasing from 0 to k_1i, i.e. the slope selected for p_AWi(t_i). These results are equally remarkable if compared with those obtained with AD_SQUARE. Unlike AD_SQUARE, indeed, where the p_EAi(t_i) waveform can be controlled only by selecting the maximum (PI) constant level of p_AW (control of first order), in AD_TRIANG the p_EAi(t_i) waveform can be controlled much more safely by selecting the k_1i value, below which the increasing rate of p_EAi(t_i) with time is certainly kept (control of second order).
Considering both (34) and (38), V_TID for an inspiration time equal to TI, V_TID(TI), results as follows:

V_TID(TI) = v_Pi(TI) − FRC = 4.01·k_1i·C_P·τ_INS    (42)

From both (12) and (42), V_MIN for an inspiration time equal to TI, V_MIN(TI), results as follows:

V_MIN(TI) = FR·V_TID(TI) = 48.1·k_1i·R_INS·C_P/(R_INS + R_EXP)    (43)

(43) takes into account that TE is set equal to 5·τ_EXP (§ 3.3.2). Expressions (37)-(43) provide the rationale for the optimization of the ventilation control in AD_TRIANG during the inspiration time. The k_1i value required for delivering in the same time (TI = 5·τ_INS) the same V_TID as in AD_SQUARE, k_squ1i, can be determined by setting (42) equal to (11), giving k_squ1i = (PI − PEEP_EXT)/(4.01·τ_INS). The diagnostic procedure has to be implemented as follows. According to both (1) and (31), the measurement of the time t_TI required for reaching the end of the transient inspiration time (t_TI = 5·τ_INS), i.e. for observing the inspiratory airflow approaching its asymptotic value to within one per cent, is useful for the determination of τ_INS, while, for a given k_1i, the monitoring of the asymptotic value φ_INS(TI) leads to the determination of C_P:

τ_INS = t_TI/5    (49)
C_P = φ_INS(TI)/(0.99·k_1i)    (50)

From (1), (49) and (50), R_INS can be determined as follows:

R_INS = τ_INS/C_P    (51)

Once τ_INS, C_P and R_INS have been determined, TI should be set equal to the measured t_TI (TI = t_TI) and the k_1i regulation should be performed in order to fit the clinical requirements on PIP, PAP, V_TID or V_MIN through (37), (40), (42) or (43), respectively. In particular, concerning the dual-control mode, the k_1i values ensuring the pre-set V_TID (k_1iTID) or V_MIN (k_1iMIN) can be determined from (42) or (43), as follows:

k_1iTID = V_TID/(4.01·C_P·τ_INS)    (52)
k_1iMIN = V_MIN·(R_INS + R_EXP)/(48.1·R_INS·C_P)    (53)

The PIP values resulting from the V_TID (PIP_TID) or V_MIN (PIP_MIN) dual-control mode can be obtained by inserting (52) or (53) into (37), as follows:

PIP_TID = PEEP_EXT + 1.25·V_TID/C_P    (54)
PIP_MIN = PEEP_EXT + V_MIN·(R_INS + R_EXP)/9.62    (55)

(54) and (55) show that PIP_TID and PIP_MIN are independent of R_RES and C_P, respectively. That is extremely relevant from both the physiopathological and the clinical points of view, since an increase of R_RES (obstructive process) or a reduction of C_P (restrictive process) does not affect the maximum p_AW value reached in the dual-control mode with pre-set V_TID or V_MIN, respectively. The compensation procedure is required in order to make ALVS behave as an ideal p_AW generator providing a real AD_TRIANG, i.e. a triangular waveform of p_AW excitation insensitive to the patient's respiratory characteristics and to the ventilator settings, through a proper triangular waveform of the external resistance (R_EXT) of ALVS which controls p_AW [9]. The electrical-equivalent network of ALVS is shown in Figure 7. According to the ALVS configuration and performance [9], the compensation procedure requires that, during both inspiration and expiration, the airflow crossing R_EXT, i.e. the external airflow (φ_EXT), be kept constant and equal to the equilibrium value assumed in the initial steady conditions during apnea (φ_EXT0), for which the following expressions result:

φ_VEN0 = φ_EXT0 = P_G/(R_G0 + R_EXT0)    (57)
PEEP_EXT = R_EXT0·φ_EXT0    (58)

φ_VEN0 is the steady equilibrium value assumed by the ventilation airflow (φ_VEN), i.e.
the airflow delivered by the generator. R_EXT0 is the lowest R_EXT value, set for PEEP_EXT regulation by means of (58). P_G and R_G0 are the output pressure and the equilibrium value of the internal resistance (R_G) of the ALVS generator, respectively, both set for φ_VEN0 regulation by means of (57), according to the following initial steady condition:

P_G = R_G0·φ_VEN0 + PEEP_EXT    (59)

Considering (58), (59) implies the following condition:

R_G0 = (P_G − PEEP_EXT)/φ_EXT0    (60)

So, the R_EXT(t_i) to be implemented for the dual-control mode during inspiration (0 ≤ t_i ≤ TI) assumes the following expression:

R_EXTi(t_i) = p_AWi(t_i)/φ_EXT0 = R_EXT0 + k_1i·t_i/φ_EXT0    (61)

The stabilization of φ_EXT during inspiration (φ_EXTi(t_i)) at φ_EXT0 can be carried out by properly modelling the φ_VEN waveform during inspiration (φ_VENi(t_i)), according to Kirchhoff's first law applied at the airways node of the ALVS network (Figure 7):

φ_VENi(t_i) = φ_EXT0 + φ_INS(t_i)    (66)

By inserting both (57) and (31) into (66), the following expression results:

φ_VENi(t_i) = φ_EXT0 + k_1i·C_P·(1 − e^(−t_i/τ_INS))    (67)

Kirchhoff's second law applied to the circuit of the ALVS generator (Figure 7) assumes the following expression:

P_G = R_Gi(t_i)·φ_VENi(t_i) + p_AWi(t_i)    (68)

where R_Gi(t_i) is the R_G waveform to be implemented for performing an effective compensation procedure during inspiration. By inserting both (67) and (20) into (68), R_Gi(t_i) can be determined as follows:

R_Gi(t_i) = [P_G − PEEP_EXT − k_1i·t_i]/[φ_EXT0 + k_1i·C_P·(1 − e^(−t_i/τ_INS))]    (69)

From (69), on account of (60), the maximum (R_Gi*) and minimum (R̄_Gi) values assumed by R_Gi(t_i) at the beginning (t_i = 0) and at the end (t_i = TI) of inspiration result, respectively, as

R_Gi* = (P_G − PEEP_EXT)/φ_EXT0 = R_G0    (70)
R̄_Gi = (P_G − PIP)/(φ_EXT0 + 0.99·k_1i·C_P)    (71)

Since in coincidence with the end of inspiration (t_i = TI), i.e. the end of the transient inspiration time, R_Gi and R_EXTi assume their minimum (R̄_Gi) and maximum (R_EXT*) values, respectively, the R_Gi modelling must take into account the final steady condition (72) requiring R̄_Gi > 0, which, according to both (70) and (71), i.e. to the condition R̄_Gi < R_G0, and considering (61), implies the condition P_G > PIP (73). Obviously, (72) and (73) replace (59) and (60), respectively. On account of both (73) and (58), (69) and (71) reduce to the following expressions, respectively:

R_Gi(t_i) = [P_G − R_EXT0·φ_EXT0 − k_1i·t_i]/[φ_EXT0 + k_1i·C_P·(1 − e^(−t_i/τ_INS))]    (74)
R̄_Gi = (P_G − R_EXT0·φ_EXT0 − 5·k_1i·τ_INS)/(φ_EXT0 + 0.99·k_1i·C_P)    (75)

Moreover, in order to avoid R̄_Gi reaching impractically small values, a lower bound on R̄_Gi should properly be taken into account (76), which, considering (75), leads to a corresponding upper functional limitation on the k_1i value (77). The function R_Gi(t_i) is reported in Figure 8.
Expiration Time
The application of Kirchhoff's second law to the circuit of Figure 3 during expiration provides the following equation:

PEEP_EXT = R_EXP·φ_EXP(t_e) + p_EAe(t_e)    (78)

From both (2) and (38), on account of the continuity condition on v_P when the switching between inspiration and expiration takes place (v_Pe(0) ≡ v_Pi(TI)), the solution of the corresponding equation in the Laplace domain, decomposed into partial fractions whose constants are determined by identification, leads by inverse Laplace transform to:

v_Pe(t_e) = C_P·PEEP_EXT + [v_Pi(TI) − C_P·PEEP_EXT]·e^(−t_e/τ_EXP)    (88)

The functions φ_EXP(t_e), p_EAe(t_e) and p_e(t_e) = p_AWe(t_e) − p_EAe(t_e) then follow:

φ_EXP(t_e) = −[(PAP − PEEP_EXT)/R_EXP]·e^(−t_e/τ_EXP)    (89)
p_EAe(t_e) = PEEP_EXT + (PAP − PEEP_EXT)·e^(−t_e/τ_EXP)    (90)
p_e(t_e) = −(PAP − PEEP_EXT)·e^(−t_e/τ_EXP)    (91)

The same result (91) could also be obtained by inserting (89) into (78). Considering (88), (89) and (90) at the beginning of expiration (t_e = 0), the following expressions result:

v_Pe(0) = v_Pi(TI)    (92)
φ_EXP(0) = −(PAP − PEEP_EXT)/R_EXP    (93)
p_EAe(0) = PAP    (94)

(92) and (94) agree with (38) and (40), respectively, according to the continuity condition required between the end of inspiration and the beginning of expiration (Figures 4 and 6). Contrary to our purpose, (39) together with (93) establishes a considerable discontinuity of φ_RES at the end of every inspiration, when the switching between inspiration and expiration takes place (Figure 5). This problem will shortly be removed with AD_TRAPEZ (§ 3.4).
If TE is set equal to the time required for reaching the steady condition, i.e. the end of the transient expiration time (t_TE = 5·τ_EXP), then (88), (89), (90) and (91) provide the following expressions (with e^(−5) ≈ 0.01):

v_Pe(TE) ≈ C_P·PEEP_EXT = FRC    (95)
φ_EXP(TE) ≈ 0    (96)
p_EAe(TE) ≈ PEEP_EXT    (97)
p_e(TE) ≈ 0    (98)

(95) and (97) agree with (34) and (36), respectively, according to the continuity condition required at the transition between the end of every expiration and the beginning of the following inspiration (Figures 4 and 6). Moreover, according to our purpose, (96) together with (35) establishes the real elimination of the φ_RES discontinuity occurring in coincidence with such a transition (Figure 5).
The diagnostic procedure has been implemented as follows. According to both (2) and (89), the measurement of the time t_TE required for reaching the end of the transient expiration time (t_TE = 5·τ_EXP), i.e. for observing a ninety-nine per cent (99%) reduction of φ_EXP with respect to its initial value φ_EXP(0), is useful for the determination of τ_EXP:

τ_EXP = t_TE/5    (99)

According to (93), for given k_1i and τ_INS values, the monitoring of φ_EXP(0) leads to the determination of R_EXP:

R_EXP = (PAP − PEEP_EXT)/|φ_EXP(0)| = 4.01·k_1i·τ_INS/|φ_EXP(0)|    (100)

From (2), (99) and (100), C_P can be determined as follows:

C_P = τ_EXP/R_EXP    (101)

During expiration the compensation procedure keeps R_EXT constant at R_EXT0, so that p_AWe(t_e) = PEEP_EXT, while the ventilation airflow is φ_VENe(t_e) = φ_EXT0 + φ_EXP(t_e). Kirchhoff's second law applied to the circuit of the ALVS generator (Figure 7) then assumes the following expression:

P_G = R_Ge(t_e)·φ_VENe(t_e) + PEEP_EXT    (106)

From (106), on account of (60), the maximum (R_Ge*) and minimum (R̄_Ge) values assumed by R_Ge(t_e) at the beginning (t_e = 0) and at the end (t_e = TE) of expiration result, respectively, as

R_Ge* = (P_G − PEEP_EXT)/(φ_EXT0 + φ_EXP(0))    (107)
R̄_Ge = (P_G − PEEP_EXT)/φ_EXT0 = R_G0    (108)

On account of both (58) and (60), (106) and (107) reduce to the explicit waveform

R_Ge(t_e) = (P_G − R_EXT0·φ_EXT0)/[φ_EXT0 − 4.01·k_1i·C_P·(R_INS/R_EXP)·e^(−t_e/τ_EXP)]    (109)

The function R_Ge(t_e) is reported in Figure 8. So, in conclusion, the implementation of (74) and (109) during the inspiration (0 ≤ t_i ≤ TI) and expiration (0 ≤ t_e ≤ TE) times, respectively, ensures that the compensation procedure is carried out, providing an effective AD_TRIANG.
Advanced Trapezoidal Waveform as Airways Pressure Excitation (AD_TRAPEZ)
Figure 9 shows the advanced trapezoidal waveform as p_AW excitation (AD_TRAPEZ). In AD_TRAPEZ, the time of inspiration (0 ≤ t_i ≤ TI) is divided into two subsequent intervals, lasting t1 and t2. During the first (0 ≤ t_i ≤ t1) and second (t1 ≤ t_i ≤ t1 + t2 = TI) intervals, p_AW increases linearly with a higher slope (k_2i) from the minimum (PEEP_EXT) to the maximum or peak (PIP) value, and keeps constant on the PIP value, respectively. Therefore, t1, t2 and k_2i fit the corresponding conditions, (113) among them. As in AD_TRIANG, the linear increase of p_AW has been selected for smoothing the Φ_RES discontinuity occurring at the beginning of every inspiration, but its duration has been reduced (t1 < TI) so as to have a following interval (t2) during which p_AW is kept constant and Φ_INS can fall to zero before the end of inspiration. In such a way, according to our purpose, the discontinuity on Φ_RES occurring in AD_TRIANG at the end of every inspiration, when the switching between inspiration and expiration takes place, can be completely removed. AD_TRAPEZ can be carried out by connecting the patient's airways with an ideal generator creating a trapezoidal waveform of p_AW. The electrical-equivalent circuit of the AD_TRAPEZ generator connected to the patient's airways is shown in Figure 10. A sketch of the target waveform follows.
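As announced above, the following Python function (a hypothetical helper, with the expiratory fall taken as linear per the trapezoid of Figure 9) generates the target AD_TRAPEZ p_AW profile over one breathing cycle:

```python
import numpy as np

def p_aw_trapez(t, peep, pip, t1, t2, t3, t4):
    """Target AD_TRAPEZ airway-pressure profile over one breathing cycle.

    Inspiration: linear rise with slope k_2i = (pip - peep) / t1 during t1,
    then constant PIP during t2.  Expiration: return to PEEP during t3
    (taken as linear, an interpretation of the trapezoid of Figure 9),
    then constant PEEP during t4.
    """
    ti, te = t1 + t2, t3 + t4          # TI and TE
    t = np.asarray(t, dtype=float) % (ti + te)
    p = np.full_like(t, peep)
    rise = t < t1
    hold = (t >= t1) & (t < ti)
    fall = (t >= ti) & (t < ti + t3)
    p[rise] = peep + (pip - peep) * t[rise] / t1
    p[hold] = pip
    p[fall] = pip - (pip - peep) * (t[fall] - ti) / t3
    return p

# Timing from the text: t1 = 2*tau_INS, t2 = 3*tau_INS, t3 = tau_EXP,
# t4 = TE - t3 with TE = 5*tau_EXP (numbers purely illustrative).
tau_ins, tau_exp = 0.5, 1.0
wave = p_aw_trapez(np.linspace(0.0, 7.5, 751), peep=5.0, pip=20.0,
                   t1=2 * tau_ins, t2=3 * tau_ins,
                   t3=tau_exp, t4=4 * tau_exp)
```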
Inspiration Time
Concerning the first interval of inspiration (0 ≤ t_i ≤ t1), expressions (116)-(120) can be deduced; in particular, for t1 = 2τ_INS the inspiratory flow at t_i = t1 reaches the fraction 1 − e⁻² ≈ 0.86 of its saturation value C_P·k_2i. According to the AD_SQUARE analysis (§3.1), during the second interval of inspiration (t1 ≤ t_i ≤ t1 + t2 = TI), on account of the initial conditions (t_i = t1) established by (116)-(120), the corresponding expressions can be deduced. At the end of the inspiration time (t_i = TI), with an AD_TRAPEZ for which both (126) and (127) hold, expressions (128)-(133) can be deduced. From both (12) and (133), V_MIN for an inspiration time equal to TI (V_MIN(TI)) results as (134); (134) takes into account that TE is set equal to 5τ_EXP (§3.4.2). According to our purpose, (131) and (132) establish the reduction to zero of both p_i(TI) and Φ_INS(TI).
Under the last condition (k_2i = k_squ2i), the corresponding expressions result. From (1), (49) and (141), R_INS can be determined as (142). Once τ_INS, C_P and R_INS have been determined, TI should be set equal to the measured t_TI (TI = t_TI), and k_2i regulation should be performed in order to fit the clinical requirements on PIP, PAP, V_TID or V_MIN through (128), (129), (133) or (134), respectively. In particular, concerning dual-control mode, the k_2i values ensuring the pre-set V_TID (k_2iTID) or V_MIN (k_2iMIN) can be determined from (133) or (134) as (143) and (144), respectively; a numerical alternative is sketched below. The PIP values resulting from V_TID (PIP_TID) or V_MIN (PIP_MIN) dual-control mode can be obtained by inserting (143) or (144) into (128), giving (145) and (146). In the same way as (54) and (55), (145) and (146) show that PIP_TID and PIP_MIN are independent of R_RES and C_P, respectively. That is, again, extremely relevant from both the physiopathological and the clinical point of view, since an increase of R_RES (obstructive process) or a reduction of C_P (restrictive process) does not affect the maximum p_AW value reached in dual-control mode with pre-set V_TID or V_MIN, respectively. Moreover, by comparing (145) and (146) with (54) and (55), respectively, the components of PIP_TID and PIP_MIN above PEEP_EXT in AD_TRAPEZ both show a 20% reduction with respect to those of AD_TRIANG.
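Since the closed forms (143)-(144) are not reproduced above, the announced numerical alternative is sketched here: simulate the one-compartment breath for a candidate k_2i and bisect until the delivered volume matches the pre-set V_TID. All names and parameter values are illustrative assumptions:

```python
import numpy as np

def tidal_volume(k2i, r_ins, c_p, t1, t2, peep, dt=1e-3):
    """Volume delivered by one AD_TRAPEZ inspiration (ramp t1, hold t2).

    Euler integration of the one-compartment model; a numerical stand-in
    for the closed-form relation (133), which is not reproduced above.
    """
    v = 0.0
    for tk in np.arange(0.0, t1 + t2, dt):
        p_aw = peep + k2i * min(tk, t1)            # ramp, then hold at PIP
        v += (p_aw - (peep + v / c_p)) / r_ins * dt
    return v

def k2i_for_tidal(v_target, r_ins, c_p, t1, t2, peep):
    """Bisect on k_2i so the simulated breath delivers the pre-set V_TID."""
    lo, hi = 0.0, 200.0                            # bracketing slopes [cmH2O/s]
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if tidal_volume(mid, r_ins, c_p, t1, t2, peep) < v_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

tau = 0.5                                          # tau_INS = R_INS * C_P
k2i = k2i_for_tidal(v_target=0.5, r_ins=10.0, c_p=0.05,
                    t1=2 * tau, t2=3 * tau, peep=5.0)
print(f"k_2i = {k2i:.2f} cmH2O/s, PIP = {5.0 + k2i * 2 * tau:.2f} cmH2O")
```

Because the delivered volume increases monotonically with k_2i, the bisection always converges; with the illustrative values above it returns k_2i ≈ 10 cmH2O/s, i.e. PIP ≈ 15 cmH2O, consistent with V_TID ≈ C_P·(PIP − PEEP_EXT) once Φ_INS has decayed during the plateau.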
The compensation procedure is required in order to assimilate ALVS to an ideal p_AW generator providing a real AD_TRAPEZ, i.e. a trapezoidal waveform as p_AW excitation insensitive to the patient's respiratory characteristics and to the ventilator settings, through a proper trapezoidal waveform of the external resistance (R_EXT) of ALVS, which controls p_AW [9]. The electrical-equivalent network of ALVS is shown in Figure 7. With the same approach as in AD_TRIANG, R_EXT is modelled during inspiration (R_EXTi(t_i)) as a linear increase from R_EXT0 to R*_EXT in the first interval (0 ≤ t_i ≤ t1) and as constant keeping of R_EXT on R*_EXT in the second interval (t1 ≤ t_i ≤ t1 + t2), respectively, while during expiration (R_EXTe(t_e)) it is modelled as a linear decrease of R_EXT to R_EXT0 in the first interval (0 ≤ t_e ≤ t3) and as constant keeping of R_EXT on R_EXT0 in the second interval (t3 ≤ t_e ≤ t3 + t4), respectively. In particular, concerning dual-control mode, the R*_EXT values ensuring the pre-set V_TID or V_MIN can be obtained by inserting (145) or (146) into (61), respectively, and considering (9). So, the R_EXT(t_i) to be implemented for dual-control mode during the first (0 ≤ t_i ≤ t1) and second (t1 ≤ t_i ≤ t1 + t2 = TI) intervals assumes the corresponding expressions. According to (66), the stabilization of Φ_EXT during inspiration (Φ_EXTi(t_i)) on Φ_EXT0 can be carried out by properly modelling the Φ_VEN waveform during inspiration (Φ_VENi(t_i)).
Thus, the implementation of (155) and (158) during the first (0 ≤ t_i ≤ t1) and second (t1 ≤ t_i ≤ t1 + t2 = TI) intervals, respectively, ensures that the compensation procedure is carried out during the whole inspiration time (0 ≤ t_i ≤ TI).
Expiration Time
Concerning the first interval of expiration (0 ≤ t_e ≤ t3), according to (128) and to Figure 9, AD_TRAPEZ requires the expression (159) of p_AWe(t_e); the corresponding circuit equation is (160). In order to solve Eq. (160), i.e. to find the transient and steady expressions of v_Pe(t_e), it is useful to transform it from the time (t_e) to the Laplace (s) variable domain, obtaining (161). From (2) and (130), on account of the continuity condition on v_P when the switching between inspiration and expiration takes place (v_Pe(0) ≡ v_Pi(TI)), the solution of (161) is expression (162) (see * at the end). Eq. (162) can be properly decomposed as (163). The unknown constants A, B and C can be determined by setting Eq. (162) equal to Eq. (163), resulting in (164), (165) and (166). Finally, by inserting (164), (165) and (166) into (163), expression (167) results. According to the inverse Laplace transform of (167), the function v_Pe(t_e) assumes the form (168). The functions Φ_EXP(t_e) and p_EAe(t_e) can be determined considering (80) and (3), respectively, as (169) and (170). The difference between p_AWe(t_e) and p_EAe(t_e) (p_e(t_e)) can be determined from both (159) and (170) as (171) (Figure 13). The same result as (171) could also be obtained by inserting (169) into (78). Considering (168), (169) and (170) at the beginning of the expiration time (t_e = 0), expressions (172), (173) and (174) result. (172) and (174) fit well (130) and (129), respectively, according to the continuity condition required at the transition between the end of inspiration and the beginning of expiration (Figure 11 and Figure 13). Moreover, according to our purpose, (173) together with (132) establishes the real elimination of the Φ_RES discontinuity occurring in coincidence with such a transition and the reduction to zero of Φ_EXP(0) (Figure 12). If t3 is set equal to τ_EXP (t3 = τ_EXP), the following holds: (189) and (188) fit well (34) and (36), respectively, according to the continuity condition required at the transition between the end of every expiration and the beginning of the following inspiration (Figure 11 and Figure 13). Moreover, according to our purpose, (191) together with (35) and (190) establishes the real elimination of the Φ_RES discontinuity occurring in coincidence with such a transition and the reduction to zero of both p_e(TE) and Φ_EXP(TE) (Figure 12 and Figure 13). Finally, from (177) together with (93), the peak of Φ_EXP (Φ_EXP(t_e = t3)) shows a 37% reduction compared to that of AD_TRIANG and AD_SQUARE (Φ_EXP(t_e = 0)). Therefore, the waveforms reported in Figure 11, Figure 12 and Figure 13, compared to those reported in Figure 4, Figure 5 and Figure 6, respectively, show clearly that AD_TRAPEZ induces a more physiological reaction than AD_TRIANG. This is essentially due to the elimination in AD_TRAPEZ of the discontinuity on Φ_RES occurring in AD_TRIANG when the switching between inspiration and expiration takes place.
The diagnostic procedure has to be implemented as follows. The time required for reaching the end of the transient expiration time (t_TE = 5τ_EXP) cannot be precisely measured with AD_TRAPEZ, due to the influence of the unknown τ_EXP on t3 and t4. The regular application of AD_TRIANG (§3.3.2) every few minutes represents the most suitable way to solve this problem. Once t_TE has been evaluated in this way, τ_EXP can be determined by means of (99). According to (177), for given k_2i and τ_INS values, the monitoring of Φ_EXP(t3 = τ_EXP) leads to the determination of R_EXP. According to (103), the compensation procedure, i.e. the stabilization of Φ_EXT during expiration (Φ_EXTe(t_e)) on Φ_EXT0, can be carried out by properly modelling the Φ_VEN waveform during expiration (Φ_VENe(t_e)).
Thus, the implementation of (199) and (202) during the first (0 ≤ t_e ≤ t3) and second (t3 ≤ t_e ≤ t3 + t4 = TE) intervals, respectively, ensures that the compensation procedure is carried out during the whole expiration time (0 ≤ t_e ≤ TE).
DISCUSSION AND CONCLUSIONS
The promising experimental results, in agreement with theoretical ones, obtained in a previous work with the Advanced Lung Ventilation System (ALVS), configured to apply a real square waveform as airways pressure excitation to a lung simulator reproducing the steady and linear respiratory mechanics of anaesthetized or severely brain-injured patients, suggested and motivated the present work.
It consists of a theoretical study in the field of assisted/controlled ventilation with advanced, improved-shape waveforms as airways pressure excitation, aimed at optimizing controlled breaths delivered to patients whose respiratory mechanics can be assumed steady and linear. "Advanced" means insensitive to the patient's (load) breathing activity as well as to ventilator settings. "Improved-shape" means, in comparison with the conventional square waveform, a progressive approach to the physiological transpulmonary pressure waveform, producing a more suitable patient reaction, i.e. a more realistic approximation of the respiratory airflow and endoalveolar pressure waveforms to physiological ones.
The problem to be solved has been the proper smoothing of the vertical discontinuities of the respiratory airflow occurring at the beginning of both inspiration and expiration, when the respiratory airflow is reversed in response to the upward and downward vertical airways pressure transitions characteristic of the square waveform. Thus, the elimination of these vertical airways pressure transitions is the most relevant change to be applied to the square waveform as airways pressure excitation. For this purpose, two waveforms of different geometrical shape (triangular and trapezoidal), which progressively approach the best solution, have been considered as airways pressure excitation.
The results show that the application of both the diagnostic and compensation procedures, together with setting the inspiration and expiration times equal to five times the inspiratory and expiratory time constants, respectively, ensures the optimization of ventilation control in all cases, with the following different functional implications.
Advanced triangular (AD_TRIANG) and trapezoidal (AD_TRAPEZ) waveforms have been considered in comparison with the conventional advanced square waveform (AD_SQUARE) as airways pressure excitation. The geometrical parameters of AD_TRAPEZ have been optimized in such a way that the resulting respiratory airflow waveform does not show any vertical discontinuity, approximating as closely as possible the smooth shape of the physiological waveform while keeping nearly the same values of mean airways (MAP) and mean endoalveolar (MEP) pressures as AD_SQUARE.
AD_SQUARE shows a low physiological profile due to the presence of two considerable discontinuities in the respiratory airflow waveform, occurring when the switching between inspiration and expiration, and vice versa, takes place. Concerning dual-control mode, tidal (V_TID) and minute (V_MIN) volumes are independent of respiratory resistance and lung compliance, respectively. That is extremely relevant from the physiological, clinical and engineering points of view, because an increase of respiratory resistance (obstructive process) or a reduction of lung compliance (restrictive process) does not affect the control of V_TID or V_MIN, respectively.
AD_TRIANG eliminates the discontinuity in the respiratory airflow waveform between expiration and the following inspiration, but increases the amount of the discontinuity in the respiratory airflow between inspiration and expiration. Moreover, the diagnostic procedure is available for accurate results. Keeping the same values of the inspiration (TI) and expiration (TE) times, and thus of the breathing frequency (FR), as well as of the external positive end-expiratory pressure (PEEP_EXT) and of V_TID, and thus of the peak endoalveolar pressure (PAP), with regard to AD_SQUARE, the peak inspiratory airways pressure (PIP) increases by less than 25%, while the components of MAP and MEP above PEEP_EXT show a 37.5% and a 31.3% reduction, respectively.
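The 37.5% figure can be checked by elementary averaging (our arithmetic, assuming MAP is the time average of p_AW and taking the PIP increase at its stated 25% bound; the TI/(TI + TE) duty-cycle factor cancels because TI and TE are kept the same): the triangular ramp averages half of its peak component above PEEP_EXT, so

```latex
\frac{\mathrm{MAP}_{tri}-PEEP_{EXT}}{\mathrm{MAP}_{squ}-PEEP_{EXT}}
  \;=\; \frac{\tfrac12\,\bigl(PIP_{tri}-PEEP_{EXT}\bigr)}{PI-PEEP_{EXT}}
  \;=\; \tfrac12 \times 1.25 \;=\; 0.625
  \;\Longrightarrow\; 37.5\%\ \text{reduction.}
```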
AD_TRAPEZ eliminates both discontinuities in the respiratory airflow waveform, providing the desired physiological shape of both the respiratory airflow and the endoalveolar pressure waveforms. Unfortunately, the diagnostic procedure is not available for accurate results. Keeping the same values of TI, TE and thus of FR, of PEEP_EXT and of V_TID, and thus of PAP, with regard to AD_SQUARE, PIP as well as both MAP and MEP remain nearly the same.
In both AD_TRIANG and AD_TRAPEZ, the PIP resulting from dual-control mode with pre-set V_TID or V_MIN is independent of respiratory resistance and lung compliance, respectively. That is extremely relevant from both the physiopathological and the clinical point of view, since an increase of respiratory resistance (obstructive process) or a reduction of lung compliance (restrictive process) does not affect the maximum airways pressure value reached in dual-control mode with pre-set V_TID or V_MIN, respectively.
So, in conclusion, AD_TRAPEZ fits well the requirements for a physiological respiratory pattern concerning the endoalveolar pressure and airflow waveforms, while AD_TRIANG exhibits a lower physiological behaviour but is nevertheless recommended periodically, for adequately performing the powerful diagnostic procedure.
The promising results of the present work establish the rationale for laboratory and clinical tests in the field of dual-controlled ventilation with AD_TRAPEZ along with AD_TRIANG.
Figure 1. (a) Airways (p_AW(t)) and endoalveolar (p_EA(t)) pressures (p(t)) along with (b) respiratory airflow (Φ_RES(t)) as a function of time (t), resulting from the application of AD_SQUARE. In (b), inspiratory (Φ_INS(t_i)) and expiratory (Φ_EXP(t_e)) airflows are depicted as positive and negative quantities, respectively, on account of their opposite directions. The monitoring of both Φ_INS(t_i) and Φ_EXP(t_e) provides the determination of the following parameters: t_TI; t_TE; Φ_INS(0); Φ_EXP(0). Φ_INS(0) and Φ_EXP(0) are the initial maximum values assumed by Φ_INS and Φ_EXP, respectively, while t_TI and t_TE are the times required for reaching the end of the transient inspiration and expiration times, i.e. for observing a ninety-nine per cent (99%) reduction of Φ_INS and Φ_EXP with regard to Φ_INS(0) and Φ_EXP(0), respectively.
Figure 3. Electrical-equivalent circuit of the AD_TRIANG generator connected to the patient's airways.
Figure 4. Lung volume (v_P(t)) as a function of time (t) resulting from the application of AD_TRIANG.
Figure 5. Respiratory airflow (Φ_RES(t)) as a function of time (t) resulting from the application of AD_TRIANG.
(39) establishes that at the end of the transient time (t_i = TI = 5τ_INS), the increase of Φ_INS reaches a saturation value given by the product of C_P and the linear slope (k_1i) selected for p_AW (Figure 5). This result is remarkable since Φ_INS(5τ_INS), being independent of R_INS, provides a more physiological Φ_INS waveform, adapting itself to the lung elastic characteristic (C_P). Moreover, Φ_INS(5τ_INS) can be adequately adjusted by k_1i regulation for a proper compensation of the actual C_P value. According to our purpose, (35) establishes the real smoothing of the Φ_RES discontinuity occurring at the beginning of every inspiration time (Figure 5). Unfortunately, unlike AD_SQUARE, according to (39) the final value of Φ_INS (Φ_INS(5τ_INS)) is different from zero (Figure 5).
Accordingly, the measurement of the time required for reaching the end of the transient inspiration time (t_TI = 5τ_INS), i.e. for observing a differential increase of Φ_INS with time lower than one per cent (1%), i.e. a saturated Φ_INS (Φ_INS(t_TI = 5τ_INS)), is useful for the determination of τ_INS by means of (49). According to (39), for a given k_1i value, the monitoring of Φ_INS(t_TI = 5τ_INS) leads to the determination of C_P by means of (50).
Figure 7. Electrical-equivalent network of ALVS. The components crossed with folded arrows are devices whose characteristic parameter output can be varied according to the input setting control.
The functions Φ_EXP(t_e) and p_EAe(t_e) can be determined considering (80) and (3), respectively. The functions v_Pe(t_e), Φ_EXP(t_e) and p_EAe(t_e) are reported in Figure 4, Figure 5 and Figure 6, respectively. The difference between p_AWe(t_e) and p_EAe(t_e) (p_e(t_e)) can be determined from both (79) and (90) (Figure 6).
Figure 10. Electrical-equivalent circuit of the AD_TRAPEZ generator connected to the patient's airways.
According to the results of §3.3.1, if t1 is set equal to 2τ_INS (t1 = 2τ_INS), expressions (116)-(120) result.
The resulting waveforms are reported in Figure 11, Figure 12 and Figure 13, respectively. If t2 is set equal to 3τ_INS (t2 = 3τ_INS), then from (113) the following conditions result: TI = 5τ_INS (126) and t2 = 1.5·t1 (127). Condition (126) allows the best functional comparison between AD_SQUARE, AD_TRIANG and AD_TRAPEZ, while condition (127) represents the best trade-off for minimizing the final value of Φ_INS while anyhow retaining a sufficient degree of smoothing of the Φ_INS rise at the beginning of inspiration (Figure 12).
Figure 11. Lung volume (v_P(t)) as a function of time (t) resulting from the application of AD_TRAPEZ.
Figure 12. Respiratory airflow (Φ_RES(t)) as a function of time (t) resulting from the application of AD_TRAPEZ.
Figure 13. Endoalveolar pressure (p_EA(t)) as a function of time (t) resulting from the application of AD_TRAPEZ.
Expressions (116)-(134) provide the rationale for the optimization of ventilation control in AD_TRAPEZ during the time of inspiration. The k_2i value required for delivering in the same time (TI = 5τ_INS) the same V_TID as in AD_TRIANG (k_1i) can be determined by setting (133) equal to (42), resulting in k_2i = 2·k_1i (135). So, the peak of Φ_INS and the PIP required in AD_TRAPEZ for delivering in the same time (TI = 5τ_INS) the same V_TID, and thus for reaching the same PAP, as in AD_SQUARE, shows a 57% reduction and keeps equal to the upper (PI) constant p_AW level, respectively. The diagnostic procedure has to be implemented as follows. The time required for reaching the end of the transient inspiration time (t_TI = 5τ_INS) cannot be precisely measured with AD_TRAPEZ, due to the influence of the unknown τ_INS on t1 and t2. The regular application of AD_TRIANG (§3.3.1) every few minutes represents the most suitable way to solve this problem. Once t_TI has been evaluated in this way, τ_INS can be determined by means of (49). According to (118), for a given k_2i value, the monitoring of Φ_INS(t1 = 2τ_INS) leads to the determination of C_P.
Figure 14. Internal resistance (R_G(t)) of the ALVS generator as a function of time (t) to be implemented for performing the compensation procedure in AD_TRAPEZ.
With Φ_EXT stabilized on the constant steady Φ_EXT0 value, AD_TRIANG can be obtained from (57) and (58) by modelling the R_EXT waveform (R_EXT(t)) as a linear increase of R_EXT from R_EXT0 to its maximum value (R*_EXT) during inspiration (0 ≤ t_i ≤ TI), and as an instantaneous fall of R_EXT from R*_EXT to R_EXT0 followed by constant keeping of R_EXT on R_EXT0 during expiration (0 ≤ t_e ≤ TE). Φ_EXP(t_e) is defined as the time derivative of v_Pe(t_e) (80). Considering (3), and by inserting both (79) and (80) into (78), equation (81) results. In order to solve Eq. (81), i.e. to find the transient and steady expressions of v_Pe(t_e), it is useful to transform it from the time (t_e) to the Laplace (s) variable domain.
Figure 8. Internal resistance (R_G(t)) of the ALVS generator as a function of time (t) to be implemented for performing the compensation procedure in AD_TRIANG.
(101) can be employed for confirming the result obtained with (50). Once τ_EXP, R_EXP and C_P have been determined, TE should be set equal to the measured t_TE (TE = t_TE). So, considering (58) and (79), the R_EXT(t_e) to be implemented for dual-control mode during expiration (0 ≤ t_e ≤ TE) with pre-set V_TID (R_EXT(t_e)_TID) or V_MIN (R_EXT(t_e)_MIN) assumes the corresponding expression. The compensation procedure, i.e. the stabilization of Φ_EXT during expiration (Φ_EXTe(t_e)) on Φ_EXT0, can be carried out by properly modelling the Φ_VEN waveform during expiration (Φ_VENe(t_e)) according to the first Kirchhoff law applied at the airways node of the ALVS network (Figure 7). From (79) and (90), it is easy to demonstrate that the MAP and MEP values of AD_TRIANG (MAP_tri and MEP_tri) result as (111) and (112). In comparison with (17), (111) shows a 37.5% reduction of the component of MAP_tri above PEEP_EXT with regard to the same component of MAP_squ. Moreover, in comparison with (18), (112) shows a 31.3% reduction of the component of MEP_tri above PEEP_EXT with regard to the same component of MEP_squ, if the ratio between τ_EXP and τ_INS is estimated as two.
During the first interval of expiration, AD_TRAPEZ requires the expression (159) of p_AWe(t_e). At the end of the expiration time (t_e = TE), with an AD_TRAPEZ for which both conditions (185) and (186) hold, the corresponding expressions can be deduced. The functions v_Pe(t_e), Φ_EXP(t_e) and p_EAe(t_e) are reported in Figure 11, Figure 12 and Figure 13, respectively. Condition (185) allows the best functional comparison between AD_SQUARE, AD_TRIANG and AD_TRAPEZ, while condition (186) represents the best trade-off for minimizing the final value of Φ_EXP, along with the increase of MAP (see (203)) and MEP (see (204)), while anyhow retaining a sufficient degree of smoothing of the Φ_EXP rise at the beginning of expiration (Figure 12).
Once τ_EXP, R_EXP and C_P have been determined, TE should be set equal to the measured t_TE (TE = t_TE). The R_EXT(t_e) to be implemented for dual-control mode during the first (0 ≤ t_e ≤ t3) and second (t3 ≤ t_e ≤ TE) intervals of expiration (0 ≤ t_e ≤ TE) with pre-set V_TID (R_EXT(t_e)_TID) or V_MIN (R_EXT(t_e)_MIN) assumes the corresponding expressions. From (180) and (181), it is easy to demonstrate that the MAP and MEP values of AD_TRAPEZ (MAP_tra and MEP_tra) result as (203) and (204). If the ratio between τ_EXP and τ_INS is estimated as two, then in comparison with (17) and (18), (203) and (204) show that MAP_tra and MEP_tra assume nearly the same values as MAP_squ and MEP_squ, respectively.
|
v3-fos-license
|
2018-04-03T02:33:55.362Z
|
2016-02-29T00:00:00.000
|
15455539
|
{
"extfieldsofstudy": [
"Psychology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11031-016-9548-8.pdf",
"pdf_hash": "7dff4c95b81bd0792acc626d2d02b39f7befa421",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41982",
"s2fieldsofstudy": [
"Psychology"
],
"sha1": "68a90a82a0614029ce54ae4fcc79d46e136b3b84",
"year": 2016
}
|
pes2o/s2orc
|
Power boosts reliance on preferred processing styles
A significant amount of research has proposed that power leads to heuristic and category-based information processing; however, the evidence is often contradictory. We propose the novel idea that power magnifies chronically accessible information processing styles, which can contribute to either systematic or heuristic processing. We examine heuristic (vs. systematic) processing in association with the need for closure. The results of three studies and a meta-analysis supported these claims. Power increased heuristic information processing, manifested in the recognition of schema-consistent information and in the use of stereotypical information to form impressions, and decreased the complexity of categorical representations, but only for those participants who, by default, processed information according to simplified heuristics, i.e., those high in need for closure. For those who prefer this processing style less, i.e., those low in need for closure, power led to the opposite effects. These findings suggest that power licenses individuals to rely on their dominant information processing strategies, and that power increases interpersonal variability.
Introduction
Research has shown that power affects diverse psychological phenomena associated with increased reliance on heuristics (i.e., rules of thumb that can inform judgment; see Keltner et al. 2003) and category-based thinking (Fiske 1993). Power holders are often socially inattentive: they rely on stereotypes (Fiske 1993; Guinote and Phillips 2010), sexualize others (Bargh et al. 1995), and fail to adopt another's perspective (Chen et al. 2009; Galinsky et al. 2006). It has also been proposed that power licenses individuals to act at will and gives them the freedom of self-expression (Kraus et al. 2011), such as the tendency to express enduring attitudes (Anderson and Berdahl 2002) and other chronically accessible constructs (Chen et al. 2001; Guinote et al. 2012). In the present article, we expand these findings to the domain of information processing strategies. We propose that power magnifies chronically accessible information processing styles, and that this in turn affects the extent to which people rely on heuristics. Thus, instead of arguing that power leads to one committed way of processing information (e.g., heuristic or systematic), we argue that power licenses individuals to use their default strategies.
In this paper we focus on the preference for heuristic (vs. systematic) processing, as these cognitive styles underlie many social cognitive phenomena (for an overview, see Chaiken and Ledgerwood 2012). We examine heuristic (vs. systematic) processing in association with the need for closure, defined as the need to avoid ambiguity by having an answer on a given topic (Webster and Kruglanski 1994). High need for closure is manifested in a category-based, non-systematic and heuristic information processing style, a preference for predictability, and quick decision-making (Driscoll et al. 1991; Kruglanski and Webster 1996). By contrast, low need for closure is associated with less heuristic processing and is usually manifested in vigilant behavior based on a more systematic and effortful search for relevant information, its evaluation, and its unbiased assimilation (for an overview see Roets et al. 2015). Thus we propose that power increases heuristic information processing, but only for those who, by default, process information according to simplified heuristics, i.e., those high in need for closure. For those who prefer to process information less heuristically, i.e., those low in need for closure, power should lead to the opposite effects. We also expect that the lack of power may lead people to apply their typical information-processing styles less spontaneously.
Power and processing styles
It has been extensively argued that there is a link between power and increased heuristic processing (i.e., the use of simplified rules of thumb to form judgments; see Fiske 1993; Keltner et al. 2003). This proposition derives from the assumption that power holders are cognitive misers, unmotivated to deploy attention, especially in the social domain. Consistent with this notion, power holders have been found to use simplified, category-based information, such as stereotypes, to make judgments (Fiske 1993; Guinote and Phillips 2010). For example, upon reading information about social targets who belong to different ethnicities, individuals with power paid relatively more attention to information that was consistent with the national stereotypes of the targets compared to stereotype-inconsistent information. This was not the case for participants in a control or powerless position (Fiske 1993; Guinote and Phillips 2010). Similarly, compared to powerless individuals, when making social judgments, power holders relied more on their own vantage point (Chen et al. 2009; Galinsky et al. 2003), and on information that easily came to mind (e.g., ease-of-retrieval; Weick and Guinote 2008).
In spite of this evidence, a number of studies have shown that power holders do not always use schematic, effortless processes to guide their attention, judgments and actions. For example, Ebenbach and Keltner (1998) demonstrated that while participants with power tended to use heuristic, effort-saving strategies when making judgments about the attitudes of an ideological opponent, this was not the case when they experienced negative emotions associated with the ideological conflict. Negative emotions trigger systematic processing (see Schwarz and Clore 2003), and enhanced the accuracy of the judgments. Similarly, Overbeck and Park (2001) demonstrated that in interactions marked with a sense of responsibility, power enhanced attention and memory for the personal attributes of the interaction partners. Guinote et al. (2012) proposed a single mechanism to account for the contradictory response tendencies found in power holders. They argue that power increases reliance on accessible constructs and scripts (i.e., those that have a low threshold of activation) regardless of whether they are chronically accessible or temporarily activated by the states and goals of the power holder or by the environment.
Past research on the links between power and dispositions focused on trait-like chronically accessible constructs and scripts stored in memory. In the present article, we argue that not only trait related aspects of the self but also information processing styles are capable of being affected by power. Specifically, we argue that power licenses individuals to rely on their preferred ways of processing information (heuristic or systematic). Because people in powerful positions feel free to act at will and in authentic ways (Kraus et al. 2011), they do not have the need to constrain the use of their processing styles. In contrast, the lack of power may lead people to less spontaneously apply their typical information-processing styles. This notion is consistent with the finding that lack of power decreases self-expression. For example, individuals who lacked power felt obliged to smile, and smiled in less authentic ways compared to power holders (Hecht and LaFrance 1998). Similarly, studies focusing on eating behavior found that the eating behavior of power holders was guided by their feelings of hunger and how appetizing the food was, while for powerless individuals there was no relationship between eating and their feelings of hunger or the attractiveness of the food (Guinote 2010).
Overview of the study
We expect that, along with the freedom from constraints, the ability to act at will, and increased confidence (Petty et al. 2007), power holders may more freely rely on heuristic information processing if they typically prefer this information processing style (i.e., if they are high in the need for closure). However, they should engage less in this processing style if it is not their default mode (i.e., if they are low in the need for closure). Moreover, as lack of power usually leads people to rely less spontaneously on their dispositions, they may also be less prone to apply their typical information-processing style. Thus, in this condition we do not expect any relationship between need for closure and processing style.
To test these hypotheses we conducted three studies focusing on memory for schema-consistent information, stereotypical impression formation, and the construction of simple categories as core examples of heuristic processing (e.g., Fiske and Neuberg 1990; Fiske and Taylor 1984; Kruglanski and Mayseless 1988; Schroder et al. 1967; Van Hiel and Mervielde 2003). We expect that powerful people who typically process information according to simple heuristics (high need for closure) will recognize more schema-related information, use more stereotypical information to form impressions about a target group, and create less complex social categories, compared to those who prefer to process information in a less heuristic and more systematic way (low need for closure). If our hypothesis that power magnifies default processing is true, we should also observe less heuristic, and thus more systematic, processing under power among low need for closure participants. Crucially, the influence of default processes on social judgments should be more pronounced for power holders than for individuals who do not have power.
Study 1
In Study 1 we tested the hypothesis that power amplifies the links between chronic processing strategies and preferences for schema-consistent information. A preference for heuristic processing manifests in increased attention and memory for schema-consistent compared to schema-inconsistent information (see Fiske and Neuberg 1990). Importantly, we expected this effect to be especially pronounced among high (vs. low) power participants.
Positive mood boosts default information processing strategies (Hunsinger et al. 2012), and power has been associated with positive mood (Keltner et al. 2003). Thus, to check the possibility that the effects of power derive from differences in mood, mood was assessed in this study.
Participants
A total of 50 students (36 females and 14 males; M age = 16.6, SD 0.84) participated in the study on a voluntary basis. 1 Two participants failed to complete the measures, thus their results were excluded from the analyses. Participants were randomly assigned to the powerful or the powerless conditions.
Materials and procedure
Participants took part in the experiment in small groups. At the beginning of the session, participants completed the Need for Closure Scale (Webster and Kruglanski 1994) to assess their preferred information processing styles (heuristic vs. systematic). One of the subscales, Decisiveness, has been considered an unreliable measure of motivation and was replaced with six items developed by Roets and Van Hiel (2007). Answers were given on six-point scales, from (1) completely disagree to (6) completely agree. From these measures, a single scale was formed (Cronbach's a = .81, M = 3.56, SD 0.41). Higher mean values indicate a higher preference for heuristic processing.
Subsequently, participants were informed that they would work on two independent studies. The first study allegedly investigated the perception of past events. The second focused on the ways people form impressions of the personalities of other people. First, power was manipulated by asking participants to report either a past event in which they had power over someone, or a past event in which someone had power over them (Galinsky et al. 2003). The written report was followed by a manipulation check that read ''Now we would like to know how much in charge you were in this situation.'' Answers were given on a 6-point scale ranging from 1 (not at all) to 6 (very much). Participants also reported their mood on a single 6-point scale, from (1) very bad, to (6) very good.
The experimenter, who was unaware of hypotheses or conditions, then introduced the second, ostensibly unrelated, study. To measure preferences for schema-consistent information we used a classic task (Neuberg and Fiske 1987; Sentis and Burnstein 1979) that asks participants to form impressions of target people. Participants were given a written instruction informing them that they would be presented with information about two different persons whose friendliness had been assessed in a previous study. To help participants form a hypothesis about the two targets, one was described as ''very friendly'' by more than 80% of the previous participants, and the other was described as ''very unfriendly'' by more than 80% of the previous participants. Participants were then informed that they would be presented with a few statements describing each target. Each item was presented on a separate display (e.g., ''Tom (friendly): Volunteered to care for lonely old people.''). Participants were tasked with assessing the extent to which each piece of information confirmed the trait friendly of a given person on a scale from 1 (''Does not confirm'') to 6 (''Fully confirms''). Participants were presented with 30 sentences (15 per target). Both sets comprised five items consistent with the trait, five items inconsistent with the trait, and five items irrelevant to the trait. An example of a friendly-consistent item was: ''Volunteered to care for lonely old people.'' An example of an inconsistent item was: ''Refused to talk with fellow passengers on an organized trip.'' An irrelevant item is illustrated by the sentence: ''Works as an accountant.'' Sentences were presented in random order. For the ''unfriendly person,'' the information presented was analogous. Afterwards, participants were presented with a surprise recognition task for the information they had read about the target people. The task presented participants with 45 statements in random order, of which 15 were ''friendly'' and 15 ''unfriendly''; the fifteen other statements were new, and among them 5 items were consistent, 5 inconsistent and 5 irrelevant. Participants were asked to assess the extent to which each sentence described the target person on a 6-point scale, from (1) ''the sentence certainly did not describe the target person'' to (6) ''the sentence certainly described the target person''. The number of points assigned to correctly recognized schema-consistent sentences, relative to schema-inconsistent and irrelevant sentences, was calculated and used as an indicator of heuristic information processing. Participants were then thanked, debriefed and dismissed.
Results and discussion
Participants indicated how much they thought they were in charge of the situation they reported. An independent t test revealed that participants in the powerful condition felt more in charge of the situation they recalled (M = 4.75; SD 0.79) than participants in the powerless condition (M = 2.88; SD 1.27), t(48) = 6.15, p < .001, 95% CI [-2.47, -1.25]. The need for closure was equally distributed among conditions (t(48) = 0.99, p = .26).
No gender or age differences were found; therefore, these variables were not considered in further analyses. We found a significant main effect of power (b = -2.15; t(48) = 2.12; p = .04). The main effect of need for closure was non-significant (b = .06; t(48) = 0.95; p = .35). To examine the effects of power and processing style on schema-consistent memory, we ran a regression analysis with power as the predictor and need for closure as the moderator, using the PROCESS program (Hayes 2013, model 1); a minimal re-implementation of this analysis is sketched below. The experimental conditions were coded as -1 (powerless) and 1 (power). We calculated the effect of power on schema-consistent memory for low and high values (-1 SD, +1 SD) of the moderator. The interaction between the preference for heuristic processing, i.e., need for closure, and power on schema-consistent recognition was significant (R² = .15; b = .12; p = .017, 95% CI [0.02, 0.22]). The interaction pattern is depicted in Fig. 1.
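PROCESS model 1 is, at its core, a moderated OLS regression. The sketch below (a minimal re-implementation with statsmodels; the file and column names are hypothetical) reproduces the two outputs reported here: the power × need-for-closure interaction term and the simple slopes of need for closure in each power condition.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Minimal re-implementation of the PROCESS model-1 moderation analysis
# (Hayes 2013) with statsmodels.  The file and column names are hypothetical:
# one row per participant, 'power' coded -1/1, 'nfc' = need-for-closure
# score, 'dv' = the dependent measure (here schema-consistent recognition).
df = pd.read_csv("study1.csv")
df["nfc_c"] = df["nfc"] - df["nfc"].mean()      # center the moderator

model = smf.ols("dv ~ power * nfc_c", data=df).fit()
print(model.summary())                          # 'power:nfc_c' = interaction

# Simple slopes of need for closure within each power condition, obtained
# by combining coefficients: b(nfc_c) + power * b(power:nfc_c).
b = model.params
for pw in (-1, 1):
    print(f"slope of NFC when power = {pw:+d}: "
          f"{b['nfc_c'] + pw * b['power:nfc_c']:.3f}")
```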
The analyses indicated that the preference for heuristic processing, operationalized as high need for closure, was positively related to memory for schema-consistent information for powerful participants (b = .15, p = .03, 95% CI [0.01, 0.28]) and non-significantly related to it among powerless participants (b = -.09, p = .20, 95% CI [-0.24, 0.05]). Moreover, participants low in the need for closure recognized significantly more schema-consistent items in the powerless condition than in the powerful condition (t(48) = 2.88; p < .01), while high need for closure participants recognized more of these items in the powerful than in the powerless condition (t(48) = 2.11; p = .04). The mood ratings did not differ between high-power and low-power participants, t(48) = 1.02, p = .39, 95% CI [-0.94, 0.30].
The results of Study 1 supported our predictions. Compared to lacking power, having power enhanced the use of default information processing styles. Powerful participants who preferred heuristic processing (those high in the need for closure) recalled more schema-consistent information, but those who preferred this processing style less (those low in the need for closure) recalled less of this type of information. In the low power condition, chronic processing strategies did not influence memory. We did not find an effect of power on mood; therefore, mood could not explain the effects of power. Numerous researchers have argued that power biases attention toward positive information (i.e., rewards) and powerlessness toward negative information (i.e., potential punishments; see Anderson and Berdahl 2002; Galinsky et al. 2014; Gruenfeld et al. 1998; Keltner et al. 2003; Kunstman and Maner 2011). However, we did not find any differences in overall heuristically consistent recall for the target when labeled as friendly versus unfriendly. This finding provides evidence that the effects are not driven by heightened attention to positive or negative information.
Study 2
Although the results of Study 1 provided support for the hypothesis that power magnifies the use of default processing styles, the study did not include a control condition, and it was not clear that power was driving the effects. To verify that the effects obtained in Study 1 derive from having power, Study 2 included a control condition. In this study, we used stereotyping as a manifestation of a heuristic information processing style (e.g., Chen and Chaiken 1999). The study tested the hypothesis that preferred processing styles, operationalized as need for closure, would guide the degree to which a target group is perceived in a stereotypic way for participants in the powerful, compared to the control, condition. Understanding the relationship between power and stereotyping is important given the contradictory findings in past research (e.g., Fiske 1993; Overbeck and Park 2001; Weick and Guinote 2008). We expected that, in the powerful (vs. control) condition, those participants who prefer heuristic processing, as indexed by a high need for closure, would rely more on stereotypes of the target group. Participants who do not prefer heuristic processing (low in need for closure) would rely on stereotypes less. The possible role of mood in this process was also examined.
Participants
A total of 52 students (35 females and 17 males; M age = 19.86, SD 1.41) participated in the study on a voluntary basis. Participants were randomly assigned to the powerful or control conditions. Ten participants did not complete the dependent measure, as they received questionnaires with missing pages; their data were thus excluded from the analyses.
Materials and procedure
To identify participants' default heuristic information processing styles they completed five subscales of the Need for Closure Scale (Webster and Kruglanski 1994). The decisiveness subscale was not considered because it has been recognised to measure the ability to impose closure rather than the motivation for closure (Roets and Van Hiel 2007). A higher mean score (Cronbach's a = .77, M = 3.82, SD 0.64) indicates a higher preference for heuristic processing. Similarly to Study 1, power was manipulated by asking participants to report a past event in which they had power over someone. Participants in the control condition were asked to report what they did the day before. Subsequently, participants completed the same manipulation check as in Study 1, and they reported their mood using 6-point scales, from (1) very bad to (6) very good.
The experimenter, who was unaware of hypotheses or conditions, then introduced an ostensibly unrelated study on person perception. Participants were given a list of 13 attributes related to the stereotype of Gypsies, as tested in a previous study by Kofta and Narkiewicz-Jodko (2003). The attributes were: unreliable, educated, lazy, friendly, competent, moral, dishonest, family man, orderly, neat, intrusive, insolent, filthy. Participants were asked to assess on a 7-point scale (1 = completely disagree, 7 = completely agree) to what extent they agreed that typical Gypsies had these characteristics. Positive attributes were reverse coded. The averaged assessments of the attributes served as an index of negative stereotypes (Cronbach's a = .71; M = 3.46; SD 0.60). Participants were subsequently thanked, debriefed and dismissed.
Results and discussion
Participants indicated how much they thought they were in charge of the situation they reported. To investigate whether the manipulation of power was successful, an independent t test (power vs. control) was conducted on this measure. As expected, participants in the powerful condition felt more in charge of the situation they recalled (M = 4.74; SD 1.25) than participants in the control condition (M = 3.78; SD 1.62), t(41) = 2.21; p = .03, 95 % CI [-1.82, -0.08]. The need for closure was equally distributed among conditions (t(41) = 0.04, p = .69).
No gender or age differences were found; therefore, these variables were not considered in further analyses. The main effect of power was non-significant (b = -.04; t(41) = 0.17; p = .68). The main effect of the need for closure was significant (b = .34; t(41) = 2.14; p = .039). To examine the joint effects of power and default processing styles, we used the PROCESS program (Hayes 2013, model 1). As in Study 1, we ran a regression analysis with power as the predictor and need for closure as the moderator. The experimental conditions were coded as -1 (control) and 1 (power). We calculated the effect of power on the DV for low and high values (-1 SD, +1 SD) of the moderator. Crucially, there was a significant interaction between heuristic processing styles and power (R² = .28; b = .44, p = .01, 95% CI [0.10, 0.77]). The interaction can be seen in Fig. 2.
Simple slope analyses indicated that the preference for heuristic processing (high need for closure) was positively related to the stereotype index for powerful participants (b = .92, p < .001, 95% CI [0.41, 1.43]) and non-significantly related to it among participants in the control condition (b = .04, p = .86, 95% CI [-0.39, 0.47]). Moreover, participants low in the need for closure did not differ in stereotyping across the two conditions (t(41) = 1.85; p = .07), while participants high in the need for closure stereotyped more in the powerful than in the control condition (t(41) = 2.27; p < .01). Power did not affect mood, t(41) = 0.42, p = .67. There was also no significant correlation between mood and stereotyping (r = .06; p = .71).
The results of Study 2 demonstrated once more that power increases the use of preferred processing styles. Power may lead to more or less stereotyping depending on individuals' cognitive preferences, i.e., need for closure. Powerful participants who preferred more heuristic strategies (i.e., those high in the need for closure) relied more on stereotypes compared to those who preferred heuristic processing less (i.e., those low in the need for closure). Again, we did not find an effect of power on mood; therefore, mood could not explain these effects.
Study 3
Study 3 further tested the links between power and processing styles in the domain of cognitive complexity. In the present context, cognitive complexity refers to the capacity to construe social behavior in multidimensional ways, a capacity that requires less heuristic and more systematic processing (Schroder et al. 1967). We hypothesized that, for participants in the powerful condition, the higher their need for closure, the less complex would be the categories they construe to describe social targets. This should not be the case for participants in the powerless condition.
Participants
A total of 77 students (34 females and 43 males; M age = 22.12, SD 2.12) participated in the study on a voluntary basis. Participants were randomly assigned to the powerful and powerless conditions.
Materials and procedure
As in the previous studies, four subscales of the Need for Closure Scale (Webster and Kruglanski 1994) were used to identify participants' default processing strategies. Due to its low reliability (Cronbach's a = .25), the Closed-mindedness subscale 2 was excluded from the analyses, and the overall index was calculated using only three subscales (Cronbach's a = .76, M = 4.43, SD 0.73). A higher mean score indicated a higher preference for heuristic processing. Power was manipulated as in Study 1. Upon completion of the power manipulation, participants filled in the manipulation check and reported their mood.
The experimenter, who was unaware of hypotheses or conditions, then introduced a second, ostensibly unrelated, study. Cognitive complexity was measured using an object sorting task (Scott 1962), in which participants have to place objects into meaningful categories. Participants were asked to arrange a list of 28 nations into categories which they thought belonged together, and to indicate what they thought the nations had in common. For example, from a list of nations, Japan and England might be grouped together as island nations. This procedure was continued until each subject's number of categories was exhausted. Cognitive complexity is measured by the number of distinctions made in the category system: the greater the number of different attributes ascribed to the objects, the higher the complexity score. The cognitive complexity score was calculated with a formula suggested by Scott (1962) and based on information theory. 3 Participants were subsequently thanked, debriefed and dismissed.
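The exact formula is given only by citation (footnote 3), but a common statement of Scott's (1962) information-theoretic measure is H = log2(n) − (Σ n_i·log2 n_i)/n, where n is the number of objects sorted and n_i is the number of objects falling in each distinct combination of categories; treat this exact form as an assumption. A minimal sketch:

```python
import numpy as np
from collections import Counter

def scott_h(assignments):
    """Scott's (1962) information-theoretic cognitive complexity score.

    'assignments' maps each object (nation) to the frozenset of categories
    a participant placed it in; objects sharing exactly the same category
    combination fall into the same cell.  The form used here,
    H = log2(n) - sum(n_i * log2(n_i)) / n, is an assumption.
    """
    cells = Counter(assignments.values())
    n = sum(cells.values())
    return np.log2(n) - sum(ni * np.log2(ni) for ni in cells.values()) / n

# Toy sorting of four nations into overlapping groups (hypothetical data):
sorting = {
    "Japan":   frozenset({"island nations"}),
    "England": frozenset({"island nations", "monarchies"}),
    "France":  frozenset({"EU members"}),
    "Spain":   frozenset({"EU members", "monarchies"}),
}
print(scott_h(sorting))   # higher H = more distinctions = greater complexity
```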
Results and discussion
An independent t test (power vs. powerless) indicated that participants in the powerful condition felt more in charge of the situation they recalled (M = 4.20; SD 1.30) than participants in the powerless condition (M = 2.43; SD 1.28), t(74) = 5.89; p < .001, 95% CI [1.18, 2.36]. Thus, the manipulation effectively induced power differences. The need for closure was equally distributed among conditions (t(74) < 0.25, p = .80).
No gender or age differences were found; therefore, these variables were not considered in further analyses. The main effects of power (b = -.10; t(74) = 1.08; p = .28) and need for closure (b = -.11; t(74) = .85; p = .40) were non-significant. To examine the effects of power and processing styles on the dependent variable, as in the previous studies we ran a regression analysis with power as the predictor and need for closure as the moderator, using the PROCESS program (Hayes 2013, model 1). The experimental conditions were coded as -1 (powerless) and 1 (power). We calculated the effect of power on the DV for low and high values (-1 SD, +1 SD) of the moderator. The results of the analysis revealed a significant interaction between preferred processing styles (i.e., high vs. low need for closure) and power on the cognitive complexity index (R² = .08, b = .26, p = .03, 95% CI [0.02, 0.50]). The interaction is illustrated in Fig. 3.
Because we were interested in the relationship between preferred processing styles and cognitive complexity in the powerful and powerless conditions separately, we performed simple slope analyses. The analyses indicated that the preference for heuristic processing, i.e., need for closure, was negatively related to cognitive complexity for people in the powerful condition (b = -.36, p = .02, 95% CI [-0.56, -0.06]) and unrelated to it for participants in the powerless condition (b = .16, p = .38, 95% CI [-0.20, 0.54]). Moreover, low need for closure did not differentially affect the complexity of the categories construed by participants in the powerful and powerless conditions (t(74) = 1.45; p = .15). In contrast, participants high in the need for closure used less complex categories in the powerful condition than in the powerless condition (t(74) = 2.43; p = .02). Power did not affect mood, t(74) < 0.30; p = .76.
The results of Study 3 supported the hypothesis. In the powerful condition, participants with a preference for more heuristic processing expressed less complex social structures, compared to participants with a preference for less heuristic processing. In the powerless condition, the pattern of results was non-significant. Thus, we conclude that power magnifies reliance on idiosyncratic processing styles. Conversely, the lack of power may lead individuals to refrain from using default processes. Again, we did not find an effect of power on mood; mood cannot therefore explain the above effects.
Studies 1-3: Meta-analysis
Given that the studies differed only in terms of the materials used, and that no other manipulations were included, we report the integrated results using a meta-analysis of the three experiments (Cumming 2014). The meta-analysis was conducted using Comprehensive Meta-Analysis software, on standardized regression coefficients and their standard errors. The analysis was performed on the values of the regression coefficients for the predictor (need for closure) obtained from the simple slope analysis of the interaction terms. So, in each study there were two separate predictor coefficients (one for each experimental condition). In each study we used a different manifestation of heuristic processing as the dependent measure (total N = 165). All three studies included a high power condition, two included a low power condition, and one included a control condition. As we were mainly interested in the relationship between the preference for heuristic processing (measured via the need for closure) and its manifestation in the powerful and powerless/control conditions separately, we integrated the results for the high power conditions from the three studies and for the low power conditions from the two studies. We did not include the results from the control condition, to make the results clearer (Fig. 3 shows the regression lines for cognitive complexity as a function of processing styles and power). Thus, we analyzed data from three studies, in two within-study subgroups (for the high power condition we included effects from three studies; for the low power condition we included effects from two studies). We used the random-effects model, as it is appropriate and more realistic in this case (Schmidt et al. 2009). It assumes that the population means estimated by the different studies are randomly chosen from a superpopulation with standard deviation τ (Cumming 2014).
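The paper reports results from Comprehensive Meta-Analysis software; for transparency, the same random-effects pooling can be sketched with the standard DerSimonian-Laird estimator. The inputs below are illustrative, not the paper's exact coefficient/SE pairs, so the output will not reproduce the published values:

```python
import numpy as np

def dersimonian_laird(betas, ses):
    """Random-effects pooling of per-study coefficients (DerSimonian-Laird).

    A minimal stand-in for the Comprehensive Meta-Analysis computation:
    betas/ses are the simple-slope coefficients and standard errors of the
    need-for-closure predictor within one power condition.
    """
    b = np.asarray(betas, dtype=float)
    se = np.asarray(ses, dtype=float)
    w = 1.0 / se**2                          # fixed-effect weights
    mu_fe = np.sum(w * b) / np.sum(w)
    q = np.sum(w * (b - mu_fe) ** 2)         # heterogeneity statistic Q
    tau2 = max(0.0, (q - (len(b) - 1)) /
               (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (se**2 + tau2)              # random-effects weights
    mu = np.sum(w_re * b) / np.sum(w_re)
    return mu, np.sqrt(1.0 / np.sum(w_re)), q, tau2

# Illustrative inputs only (not the paper's exact coefficient/SE pairs):
mu, se_mu, q, tau2 = dersimonian_laird([0.15, 0.92, 0.36], [0.07, 0.26, 0.13])
print(f"pooled b = {mu:.2f}, 95% CI +/- {1.96 * se_mu:.2f}, "
      f"Q = {q:.2f}, tau^2 = {tau2:.3f}")
```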
The calculated effect sizes and confidence intervals for the manifestation of heuristic processing are reported in Fig. 4. The heterogeneity of the effect sizes was not statistically significant (high power: Q(2) = 3.72, p = .16, I² = 46.17%; low power: Q(1) = 0.28, p = .60, I² = 0.00%). As predicted, the analysis indicated that the preference for heuristic processing was positively and significantly related to its expression for participants in the powerful condition (b = .53, SE .13, p < .001, 95% CI [0.22, 0.83]) and negatively but not statistically significantly related to it among participants in the powerless condition (b = -.21, SE .10, 95% CI [-0.455, 0.028]). However, the difference between these two conditions was highly significant, as indicated by the high between-group variance component, Q(1) = 13.76, p < .001.
General discussion
In three studies we found that, across a variety of domains, such as memory for schema-consistent information, stereotyping, and cognitive complexity, situationally induced power consistently increased reliance on default information processing styles. Power increased the recognition of schema-consistent information and the use of stereotypical information to form impressions, and decreased the complexity of categorical representations, but only for those participants who preferred to process information in a heuristic way prior to attaining power. For those who preferred to process information less heuristically and more systematically, power led to the opposite effects. These effects were not present in the control and powerless conditions. Together, these findings indicate that power accentuates the ways individuals typically process information.
A great deal of past research has focused on the effects of power on information processing, and in particular on whether power increases reliance on stereotypes (for reviews see Fiske and Berdahl 2007; Guinote 2013). Even though evidence suggests that this is often the case, the notion that power holders are cognitive misers, unmotivated to be socially aware, should not be generalized. For example, it has been shown that power holders effectively pursue goals (Guinote 2007) and can pay close attention to subordinates when individuating information is relevant to the attainment of their goals (Overbeck and Park 2001). Guinote et al. (2012) explained the variability of power-related findings, arguing that power leads to flexibility and situated responses, in line with accessible constructs, including those that are temporarily or chronically accessible (associated with dispositions). Expanding this notion to the present context, the findings reported here show that, similar to accessible declarative memory, accessible procedural memory regarding processing styles is also amplified by power. That is, instead of leading to a particular way of processing information, power seems to magnify the default, idiosyncratic processing strategies that individuals typically prefer. Therefore, consistent with past research (Guinote et al. 2002), power increased interpersonal variability. Part of the inconsistencies found in past research could derive from differences in the preferred processing styles of power holders, triggered by chronic response tendencies. The present work focused on heuristic processing, assessed through the need for closure (Kruglanski et al. 2009). One limitation of the present research is that it did not include other information processing dimensions; for example, systematic processing would be better measured via need for cognition than via a low level of need for closure. We would expect power to magnify reliance on other default processing preferences, such as local or global, abstract or concrete, fast or slow (Kozhevnikov et al. 2014). Power holders' sense of confidence and reliance on accessibility should facilitate the use of any default procedural strategies. These hypotheses await future research.
Future research also needs to consider how power and dispositions interact with environmental inputs, such as organizational goals, and with temporary states of the perceiver, such as emotions. Given the greater cognitive flexibility of power holders (Guinote 2007), we would expect them to be able to adapt processing strategies to salient goals or inner states. Research that simulated organizational contexts supports these claims, showing that power holders can be socially attentive or inattentive depending on whether the organization was person-centered or product-centered (Overbeck and Park 2006). Similarly, emotions shape the attentional strategies of power holders (Ebenbach and Keltner 1998). Dispositions and context need to be considered in order to more fully understand the implications of these findings, namely that power enhances preferred information processing strategies.