The Role of BDNF, YBX1, CENPF, ZSCAN4, TEAD4, GLIS1 and USF1 in the Activation of the Embryonic Genome in Bovine Embryos
Early embryonic development relies on maternal RNAs and on proteins newly synthesized during oogenesis. Zygotic transcription begins at a specific time after fertilization; if it fails to occur, the embryo cannot meet its developmental needs and dies. During the early stages of embryonic development, correct transcription, translation, and gene expression play a crucial role in blastocyst formation and cell lineage differentiation in mammals, and any deviation may lead to developmental defects, arrest, or even death. Abnormal expression of some genes, such as BDNF and YBX1, may cause failure before activation of the embryonic (zygotic) genome, while decreased expression of the CENPF, ZSCAN4, TEAD4, GLIS1, and USF1 genes can lead to failure of embryonic development. This article reviews studies on the timing and mechanisms of expression of these genes in bovine zygotes and embryos.
Introduction
Pregnancy requires oocyte maturation, successful fertilization, and the acquisition of a high-quality embryo. Most infertility in both humans and animals is caused by alterations in developmental regulatory factors, such as mutations in multiple developmental regulators, which may impair oocyte maturation, prevent fertilization, or arrest embryo development at an early stage. Spermatogonia pass through proliferation, growth, and maturation stages to become mature spermatozoa, but to achieve fertilization they must also undergo the acrosome reaction and capacitation, while development of the primordial oocyte into a "competent" egg cell is even more demanding. Oocyte maturation is not continuous: after mitotic proliferation the oocyte enters meiosis, and after the first meiotic division the follicle ruptures and the oocyte is ovulated. At this point the oocyte is not yet capable of fertilization and must complete the second meiotic division to become a fertilizable oocyte. After fertilization in mammals, the zygote undergoes cleavage divisions to develop into a blastocyst. During this period, three important developmental events occur: zygotic genome activation (ZGA), polarization/compaction, and the first specification of the cell lineages [1,2]. Two distinct cell types emerge, the inner cell mass (ICM) and the trophectoderm (TE), which subsequently form the blastocyst and then continue to differentiate. In most animals, including humans, development before ZGA is governed by maternal gene products, and embryonic development shifts from maternal-factor dependence to embryonic-factor dependence after ZGA. All processes involved in cell differentiation, and in a range of embryonic developmental events, are closely linked to changes in gene expression and histones. In mammals, the correct transcription, translation, and expression of genes play a pivotal role in blastocyst formation and cell lineage differentiation. During bovine embryo development, different genes are expressed at different stages, and their expression is regulated by many factors, including transcription factors and epigenetic modifications. The study of these genes and their regulatory mechanisms will lead to a better understanding of the genetic program of bovine embryonic development.
Many maternal transcripts are synthesized and accumulated in the oocyte and play a key role in early embryonic development. Among them, maternal effect genes (MEGs) are genes that are important for maintaining the survival and development of mammalian embryos during the cleavage stage after fertilization. For example, knockdown of NOBOX in bovine embryos significantly reduced not only the blastocyst rate but also the expression of pluripotency genes (POU5F1/OCT4 and NANOG) and the number of inner cell mass cells in the blastocyst [3]. ZNFO, which has intrinsic transcriptional repressor activity, is another maternally derived oocyte-specific nuclear factor essential for early bovine embryonic development; knockdown of ZNFO by siRNA significantly reduced embryo development to the 8-16 cell stage and blastocyst stage [4]. Maternal gene products support early embryonic development until the zygotic genome is activated and then begin to degrade. Previous studies have found that nearly 90% of maternal transcripts are degraded by the time embryonic genome activation occurs in cattle at the 8-16 cell stage [5]. Because the embryonic genome is not transcribed until the ZGA stage, development up to that point is most likely supported by maternal factors (proteins and mRNAs) accumulated during oocyte development. In addition, the extensive epigenetic reorganization required before embryo implantation is associated with pluripotency and activation of the embryonic genome. Transcription of the zygotic genome is required for the normal development and differentiation of early animal embryos, and ZGA-related factors are essential for its transcriptional regulation. For instance, OCT-4 is a transcription factor that plays a pivotal role in maintaining the pluripotency of embryonic stem cells (ESCs), and targeted knockdown of OCT-4 by siRNA injection reduces the rate of blastocyst development in bovine embryos [6,7]. Knockdown of SOX2, an additional transcription factor associated with embryo quality and essential for maintaining embryonic pluripotency, has been confirmed to reduce the number of blastocysts and blastomeres in cattle [8][9][10]. Knockdown of TSPY by microinjection into bovine zygotes had no effect on female embryos but arrested male embryo development before the blastocyst stage [11].
In recent years, an increasing number of genes associated with oocyte maturation and early embryonic development have been identified. To develop successfully into healthy offspring, growing and maturing mammalian oocytes and embryos must undergo matching gene expression and epigenetic modifications. Knowledge of the key genes and epigenetic modifications required for normal pre-implantation development is therefore essential to ensure normal gametogenesis, normal fertilization, maintenance of high fecundity, and normal early embryogenesis in domestic animals. Compared with natural mating and artificial insemination, the success rate of in vitro production (IVP) of bovine embryos in establishing pregnancy is low, and oocyte quality is a key factor limiting the success of animal pregnancy. Therefore, to better understand the genetic regulation of early embryo development and to improve the efficiency of in vitro embryo production, we summarize the genetic factors identified in recent years that affect oocyte maturation and early embryonic development.
BDNF (Brain-Derived Neurotrophic Factor) Is Associated with Oocyte Maturation
BDNF has been detected in numerous mammalian tissues, including the ovary and oocytes, and is significant in mammalian follicle development, ovulation, granulosa cell proliferation, fertilization, and subsequent embryonic development [12][13][14][15][16]. The ovaries of a variety of animals (such as rodents, sheep, and cattle) secrete neurotrophins (NTs) and express the p75NTR receptor, which plays an important role in signal transduction. High-affinity receptors for neurotrophin-4/5 (NT-4/5) and BDNF are important in early follicular growth and oocyte survival, and BDNF is secreted by cumulus and granulosa cells, so these factors are thought to affect oocyte maturation and early embryonic development in many species [17][18][19].
Zhao et al. first used immunofluorescence staining to examine the expression pattern of BDNF and its receptors during early buffalo embryonic development and found that BDNF expression first increased, peaked at the 4-cell stage, and then gradually decreased (Figure 1) [20]. The mRNA expression profile of BDNF was similar to that of its receptor NTRK2, while expression of the alternative receptor p75 was much lower than that of NTRK2. The high synchrony of BDNF and NTRK2 suggests that both are involved in follicular development. When buffalo oocytes were matured in vitro in the presence of K-252a and a p75 inhibitor (pep5), the maturation rate was significantly reduced when BDNF and the receptor inhibitor K-252a were added together, demonstrating that K-252a can eliminate the effect of BDNF on oocyte maturation. Thus, the receptor NTRK2 acts together with BDNF during buffalo oocyte maturation, whereas the receptor p75 does not appear to play a role. It is therefore concluded that p75, as a low-affinity receptor, may not function primarily in buffalo granulosa cells.
The high synchrony of BDNF and its receptor NTRK2 suggests that both are involved in follicle development, so the enhanced developmental capacity of buffalo embryos may be related to BDNF. Supplementation with 10 ng/mL BDNF greatly enhanced the rates of oocyte maturation and blastocyst formation in buffalo in vitro fertilization; however, when the concentration was increased to 100 ng/mL, the blastocyst rate declined, indicating a biphasic effect of BDNF [20]. To further study the mechanism of BDNF action on cumulus cells during IVM, changes in the mRNA expression of apoptosis-related genes, BDNF receptor genes, and selected cumulus cell development genes were analyzed by RT-qPCR. BDNF down-regulated the apoptosis-related genes CASP9 and FAS and up-regulated the receptor gene NTRK2 and the cumulus cell development-related genes CCNB1, PCNA, GJA4, GJA1, HAS2, PTX3, and TNFAIP6, which enhanced cumulus cell proliferation, promoted cumulus expansion, and thereby improved the maturation rate of buffalo oocytes.
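The expression comparisons above were made by RT-qPCR. As an illustration of how such relative expression values are commonly derived, the sketch below implements the standard 2^-ΔΔCt (Livak) relative quantification method; the Ct values and the choice of reference gene are hypothetical and are not taken from the study discussed here.

```python
# Illustrative sketch of the standard 2^-ΔΔCt (Livak) method for RT-qPCR
# relative quantification. All Ct values below are hypothetical.

def fold_change(ct_target_treated: float, ct_ref_treated: float,
                ct_target_control: float, ct_ref_control: float) -> float:
    """Relative expression of a target gene (treated vs. control),
    normalized to a reference (housekeeping) gene."""
    delta_ct_treated = ct_target_treated - ct_ref_treated
    delta_ct_control = ct_target_control - ct_ref_control
    delta_delta_ct = delta_ct_treated - delta_ct_control
    return 2 ** (-delta_delta_ct)

# Example: a cumulus-expansion gene in BDNF-treated vs. control cumulus
# cells, normalized to a hypothetical housekeeping gene.
fc = fold_change(ct_target_treated=24.1, ct_ref_treated=18.0,
                 ct_target_control=26.0, ct_ref_control=18.2)
print(f"fold change: {fc:.2f}")  # > 1 indicates up-regulation
```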
Knockdown of Y-Box Binding Protein 1 (YBX1) Impairs Attenuation of Maternal mRNAs
YBX1 plays a significant role in RNA stabilization and transcriptional regulation. The gene encodes a highly conserved cold shock domain protein with broad nucleic acid binding properties. The encoded protein is a multifunctional nucleic acid-binding protein involved in numerous cellular processes, including regulation of transcription and translation, pre-mRNA splicing, DNA repair, and mRNA packaging. It is also a component of messenger ribonucleoprotein (mRNP) complexes and may be involved in microRNA processing [21]. Y-box binding proteins are reported to be enriched in oocytes [22,23] and are recognized as primary constituents of cytoplasmic mRNPs with a diverse array of RNA-binding capacities [24]. Deng et al. reanalyzed a public RNA-seq dataset to assess the transcription of YBX1 in bovine embryos: bovine YBX1 was progressively up-regulated from oocyte to blastocyst, and during early bovine embryo development its expression increased significantly in 8-cell embryos [25] (Figure 1). Grurgia et al. confirmed that YBX1 is highly expressed in bovine mature oocytes and that its expression increases further after fertilization, especially during the maternal-to-zygotic transition (MZT) [26].
To further study the effect of YBX1 on embryonic developmental competence, siRNA injection was used to knock down YBX1 expression. A total of 5154 differentially expressed genes (DEGs) were identified using DESeq2. In YBX1-knockdown embryos, the number of embryos arrested at the 2-cell and 4-cell stages increased and the percentage of blastocysts decreased significantly. The 1623 up-regulated and 3531 down-regulated genes were enriched in pathways regulating RNA splicing and RNA stability. Moreover, many genes related to Z-decay were altered. These results indicate that YBX1 knockdown impairs alternative splicing (AS) and RNA stability during ZGA, as well as the attenuation of maternal mRNA [25].
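The DEG counts above come from DESeq2, which is an R package; the sketch below only illustrates how a DESeq2-style results table might be post-processed into up- and down-regulated gene sets. The file name and the significance and fold-change thresholds are assumptions for illustration, not the study's actual criteria.

```python
# Sketch of classifying DEGs from a DESeq2-style results table. Column
# names follow DESeq2's results() convention; the CSV file and thresholds
# are hypothetical.
import pandas as pd

results = pd.read_csv("ybx1_kd_vs_control_deseq2_results.csv")  # hypothetical file

significant = results[results["padj"] < 0.05]
up_regulated = significant[significant["log2FoldChange"] > 1]
down_regulated = significant[significant["log2FoldChange"] < -1]

print(f"DEGs: {len(significant)}, "
      f"up: {len(up_regulated)}, down: {len(down_regulated)}")
```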
Down-Regulation of Centromeric Protein F (CENPF) May Cause the Embryo to Stagnate at the 8-Cell Stage
CENPF is a component of the centromere-kinetochore complex; the gene encodes a protein associated with this complex that plays a critical role in somatic cell division. The amount of CENPF mRNA was found to decrease gradually from the 2-cell to the 8-cell stage, and these transcripts are entirely maternal in origin. Embryonic transcription of CENPF begins at the late 8-cell stage, so it can be assumed that after the onset of EGA its expression rises again and remains essentially unchanged until the blastocyst stage (Figure 2) [27]. To investigate the impact of CENPF on embryonic developmental potential, Tereza et al. employed CENPF-specific double-stranded RNA (dsRNA) to suppress the corresponding mRNA and found that injected embryos showed obvious developmental abnormalities up to the EGA stage. The most common defects in embryos injected with CENPF dsRNA were variable blastomere size, blurred blastomere edges, partially transparent blastomeres, obvious nuclear fragments in the embryo, or even blastomeres lacking a nucleus; fewer than one-third of the embryos developed to the 8-cell stage [28]. CENPF is important for chromosome congression, alignment, and segregation because it mediates the interaction between the kinetochore and microtubules in somatic cells [29]. The protein is a component of the nuclear matrix during the G2 phase of interphase; as G2 nears its conclusion, it associates with the kinetochore and maintains this interaction until early anaphase. It is found in the spindle midzone during late anaphase and in the intercellular bridge during telophase and is thought to be subsequently degraded. This precise localization pattern suggests a role in promoting chromosome segregation during mitosis. Studies in humans and other animals have shown that CENPF silencing disrupts chromosome segregation and thus affects cell division [30,31]. The bovine results likewise suggest that the observed blastomere abnormalities are related to defects in chromosome segregation, although the specific mechanism or pathway by which CENPF loss affects bovine embryos remains to be explored.
Knockdown of Zinc Finger and SCAN Domain Containing 4 (ZSCAN4) May Result in Growth Arrest at the 16-Cell Stage
ZSCAN4 encodes a protein that participates in telomere maintenance. The gene underlies a crucial attribute of mouse embryonic stem (ES) cells: resisting cellular senescence and maintaining a normal karyotype through many cell divisions in culture. Its expression was initially found to be restricted to the late 2-cell stage of the pre-implantation mouse embryo [32,33]. Knockdown of ZSCAN4 with small interfering RNA (siRNA) delays progression through the 2-4 cell stages and leads to failure of embryo implantation [32]. However, the expression status and role of ZSCAN4 in the pre-implantation development of bovine embryos remained unclear. Takahashi et al. investigated whether ZSCAN4 is required for pre-implantation development in bovine embryos. They found that ZSCAN4 expression remained low until the 4-cell stage, increased significantly at the 4-8 cell stage owing to de novo synthesis in the embryo, and peaked at the 16-cell stage (Figure 2) [34]. They then cultured bovine embryos injected with ZSCAN4-siRNA in vitro: almost all embryos arrested at the 16-cell stage, with only a limited number progressing to the blastocyst stage. The researchers also assessed the mRNA levels of developmental pluripotency-associated 2 (DPPA2) and piwi-like RNA-mediated gene silencing 2 (PIWIL2) in 16-cell embryos to evaluate the impact of reduced ZSCAN4 expression on reprogramming-associated transcripts, and found that PIWIL2 expression was reduced in ZSCAN4-siRNA embryos. Research has indicated that the Piwi proteins and their associated small RNAs, known as Piwi-interacting RNAs (piRNAs), repress the transcription of retrotransposition factors in animal germ cells, and their loss results in a notable up-regulation of long terminal repeat retrotransposons [35,36]; piRNAs have also been shown to be essential for targeted elimination of mRNA transcripts during the maternal-to-zygotic transition [37]. In addition, further biological events accompany ZSCAN4 transcription, including transient expression of other ZGA-specific gene clusters, rapid telomere extension, and a block of global protein translation [38,39]. It can be speculated that down-regulation of PIWIL2 disrupts the function of retrotransposons (including transcriptionally active transposons) in bovine embryos and thus halts early development at ZGA. However, the mechanism of the interaction between ZSCAN4 and PIWI-piRNA in bovine embryos is unclear and remains to be investigated.
Interaction of TEA Domain Transcription Factor 4 (TEAD4) and CCN2
During embryonic development, TEAD4 plays an important role in organ formation and development by regulating gene transcription; TEAD4 deficiency leads to early embryonic death and developmental malformations [40,41]. TEAD4 is a regulator of blastomere TE identity in mouse models and plays a key role in TE differentiation [40,42], participating in a variety of processes such as cell proliferation, cell survival, tissue regeneration, and stem cell maintenance. In the pig, TEAD4 may contribute to blastocyst development by modulating the activity of SOX2, thereby influencing the morula-to-blastocyst transition [43], and bovine TEAD4 has the unique function of activating interferon tau, the pregnancy recognition factor specific to ruminants [44]. The TE is a single layer of epithelioid cells surrounding the outside of the blastocyst, and the expression level of the transcription factor caudal type homeobox 2 (CDX2) determines its developmental direction. During embryonic development, CCN2 is involved in the regulation of interactions between many growth factors and the extracellular matrix and shows different expression patterns and effects in different organs and tissues [42,45]. CDX2 and CCN2 therefore play important roles in blastocyst development. In addition, CCN family 2 (CCN2) is an important downstream gene of TEAD4 that is widely expressed in mammalian somatic cells, and TEAD4 regulates cell proliferation by regulating CCN2 expression. Akizawa et al. studied bovine TEAD4 and found that TEAD4 mRNA was expressed in both TE and ICM, that its level was higher in TE than in ICM, and that TEAD4 was mainly localized to TE nuclei. The initial amount of TEAD4 mRNA was low, increased from the 8-cell stage, and peaked after the morula stage (Figure 2). When TEAD4 was knocked down (KD) using short hairpin RNA (shRNA), the expression of CDX2, GATA2, and CCN2 was significantly reduced, with the change in CCN2 the most prominent [46]. Conversely, when CCN2 KD was performed in bovine embryos, TEAD4 expression was significantly reduced in CCN2-KD blastocysts. Interestingly, neither TEAD4 KD nor CCN2 KD affected the cleavage rate or blastocyst formation in vitro, but CCN2 KD led to a significant reduction in the ratio of TE to ICM cell numbers without changing the total number of TE and ICM cells. Regulation of cell composition in the bovine blastocyst is thus related to CCN2 expression in the TE, and since bovine CCN2 is also expressed in endometrial epithelial cells, maternal CCN2 may influence pre-implantation development. These results indicate that TEAD4 and CCN2 regulate and influence each other, with TEAD4 directly regulating the transcriptional activation of CCN2 and thereby altering cell characteristics [47,48]. The interaction between TEAD4 and CCN2 is important for normal cell differentiation during pre-implantation development.
Down-Regulation of GLI-Similar 1 (GLIS1) May Lead to ZGA Failure
GLIS1 is a transcription factor closely related to the Gli family [49][50][51]. GLIS1 plays an important role in the formation and development of organs such as the cardiovascular system, kidney, eye, thyroid, and pancreas; it is considered a direct reprogramming factor that can promote the generation of pluripotent stem cells and is abundantly expressed in both unfertilized mouse oocytes and 1-cell stage embryos [52,53]. In addition, GLIS1 expression is regulated temporally and spatially, so it can be speculated that GLIS1 regulates embryonic development within a specific window [50,54]. Takahashi et al. studied GLIS1 in bovine oocytes and embryos produced in vitro and detected abundant GLIS1 transcripts in oocytes and in 1-cell to 4-cell embryos, with levels declining from the 1-cell stage and remaining low from the 8-cell stage onward (Figure 2) [55]. GLIS1-siRNA was injected into bovine embryos to investigate the relationship between bovine embryo development and down-regulation of GLIS1: injected embryos developed normally to the 16-cell stage, but the rate of development to the 32-cell stage was significantly reduced. To further explore the effect of GLIS1 down-regulation on gene transcripts, the mRNA expression of PGK1, PDHA1, heat shock cognate protein 70 (HSPA8), and X-inactive specific transcript (XIST) was measured at the 8-16 cell stage; the expression of PDHA1 and HSPA8 decreased significantly. PDHA1 is involved in glucose metabolism and plays a key role in embryonic development, especially in cattle and mice [56,57]. HSPA8 encodes HSC70, which is involved in pre-mRNA processing and chaperone-mediated autophagy, so HSC70 knockdown leads to extensive death of various cell types. In conclusion, GLIS1 down-regulation may cause failure of ZGA initiation, suggesting that GLIS1 is an important factor in the pre-implantation development of bovine embryos.
Upstream Stimulating Factor 1 (USF1) Gene Knockdown Affects Early Embryonic Development in Cattle
USF1 is a transcription factor with a basic helix-loop-helix structure that binds selectively to E-box DNA motifs. These motifs are recognized cis-elements of key oocyte-expressed genes vital for early embryonic and oocyte development [58]. Datta et al. therefore first examined the expression pattern of USF1 in bovine oocytes and embryos at different stages. USF1 mRNA was found to increase during meiotic maturation, rise significantly during the 2-8 cell stages, and then decrease until it was almost undetectable at the blastocyst stage, indicating that the transcripts are maternal in origin, may play a role in embryonic genome activation, and are consumed or degraded during that process (Figure 2). To investigate the role of USF1 in bovine oocyte and embryonic development, siRNA-mediated gene silencing was performed in bovine embryos, and the abundance of USF1 transcripts was significantly reduced after injection of USF1 siRNA. USF1 knockdown had no effect on the total cleavage rate of bovine embryos but reduced the number of embryos developing to the 8-16 cell stage and, in particular, significantly reduced the blastocyst rate. In addition, the genes TWIST2, JY-1, GDF9, and FST, which are necessary for bovine oocyte developmental competence, were found to carry USF1-binding elements (E-boxes) in their promoter regions. In USF1-siRNA oocytes collected at the MII stage, the abundance of TWIST2 and JY-1 mRNA increased, whereas the abundance of GDF9 and FST transcripts decreased. GDF9 transcript abundance was also moderately decreased by the negative control siRNA, suggesting a moderate off-target effect on GDF9 expression. The transcription factor TWIST2 functions as a molecular switch that can either activate or repress target genes by binding directly to conserved E-box sequences in promoter regions and recruiting coactivators or corepressors [59]. As such, USF1 may regulate the transcription of GDF9, FST, TWIST2, and JY-1 during oocyte maturation.
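Since USF1 and TWIST2 both act through E-box elements, a minimal sketch of how one might locate candidate E-boxes (canonical consensus CANNTG) in a promoter sequence is shown below; the sequence is invented for illustration, and real promoters would come from a genome database.

```python
# Sketch of scanning a promoter sequence for E-box motifs (canonical
# consensus CANNTG, the class of element bound by USF1 and TWIST2).
import re

E_BOX = re.compile(r"CA[ACGT]{2}TG")

promoter = "TTGACCACGTGATCCCATTTGGCAACTG"  # hypothetical sequence
for match in E_BOX.finditer(promoter):
    print(f"E-box {match.group()} at position {match.start()}")
```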
Conclusions
After fertilization, mammalian embryos may require specific maternal factors stored abundantly in the egg, as mRNAs for transcription and protein synthesis, to acquire full developmental capacity. In mice, blastocyst formation depends on maternal factors present as mRNA in the egg as well as on zygotic genetic information [60,61]. Normal expression of these genes is inextricably linked to normal blastocyst development and normal attachment of the early embryo. Moreover, abnormal abundance and patterns of oocyte mRNA along the oocyte-follicle developmental axis may lead to oocyte developmental failure and thus affect later development [62]. Large mammals such as cattle and sheep have mechanisms similar to those of mice. Transcripts of genes required by the oocytes and zygotes of some domestic animals have been identified, and loss of these transcripts at different levels causes developmental arrest and other developmental disorders at different stages; these genes include BDNF, GLIS1, YBX1, CENPF, ZSCAN4, and TEAD4, and the stages at which they play a key role are shown in Figure 3. A general pattern emerges from these studies. During the continuous division and maturation of the primordial follicle, a series of transcriptome changes occur, including two waves of maternal transcript attenuation: the first, M-decay, begins at the GVBD stage, and the second, Z-decay, begins at the MII stage. After fertilization, maternal mRNA is heavily degraded through a key developmental process known as the maternal-to-zygotic transition (MZT), in which developmental control is transferred from maternally supplied gene products to products synthesized from the zygotic genome, resulting in activation of the zygotic genome [63][64][65][66]. Thus, oocyte-derived mRNAs and proteins may play an important role in this process [67].
For example, if BDNF is deficient during the maturation and early embryonic development of buffalo follicles and oocytes, the expression of related genes and receptor genes is dysregulated, so that cumulus cells and the receptor NTRK2 cannot promote oocyte maturation. After YBX1 knockdown, the stability of AS and RNA is affected, leading to developmental defects in pre-ZGA embryos. By contrast, studies in pigs, sheep, mice, zebrafish, and other species have shown that embryo development is affected by m6A modification, whereas its effect in cattle has not been reported [68][69][70][71]. CENPF, ZSCAN4, GLIS1, TEAD4, and CDX2 deficiencies all inhibited development around ZGA to varying degrees. For example, CENPF-specific knockdown silenced its mRNA and protein during pre-implantation development, disrupting blastomere morphology and inhibiting development beyond the 8-cell stage. ZSCAN4 knockdown reduced PIWIL2 mRNA levels, which may compromise the normal function of transposons and decrease the number of embryos reaching the 16-cell stage. Down-regulation of GLIS1 reduced the expression of PDHA1 and HSPA8, thereby inhibiting embryonic development at the 16-32 cell stage. TEAD4 and CDX2 must interact to ensure stable expression of TE genes; otherwise, the transcriptional regulation of pluripotency-related genes in bovine blastocyst TE and ICM lineages is disturbed, leading to failure of embryonic development. USF1, on the other hand, alters the developmental capacity of oocytes by acting on the promoter-binding element E-box. In addition, the timing of the first cleavage division after fertilization is closely related to normal embryo development: if the interval to the first division is too long, the IGF-1 ligand may be reduced or even absent, and mRNA abundance may change in response to the unfavorable growth environment, adversely affecting embryo development [72,73]. The role of the oviduct in early embryo development should also not be ignored, as embryo-oviduct interactions likewise affect transcription [74,75]. Gene expression is both up- and down-regulated before embryo implantation, and regardless of a gene's apparent importance, dysregulated expression during this period causes problems for embryo development. Research on the key genes acting before embryo implantation is therefore of great significance but also poses great challenges. As technology advances and our knowledge of these genes and their mechanisms grows, such studies will become increasingly comprehensive.
Figure 1. Description of the expression patterns of BDNF and YBX1 mRNA. The horizontal axis represents the stage of embryonic development; the vertical axis indicates only relative rise and fall and has no actual values.
Figure 2. Description of the expression patterns of CENPF, ZSCAN4, TEAD4, GLIS1, and USF1 mRNA. The horizontal axis represents the stage of embryonic development; the vertical axis indicates only relative rise and fall and has no actual values.
Figure 3. Summary of the process of action of each gene. BDNF and YBX1 play important roles before ZGA; CENPF, ZSCAN4, TEAD4, GLIS1, and USF1 affect growth in late ZGA.
Response of Soybean to Early-Season Planting Dates along the Upper Texas Gulf Coast
Soybeans (Glycine max L.) can be planted along the upper Texas Gulf Coast from mid-March through May to take advantage of early-season rains and to complete harvest before hurricane season and fall rains become a problem. However, in the Calhoun County area (28.5° north latitude), these planting dates have resulted in below-average yields, and the reasons for these yield reductions are not clear. To determine whether earlier planting dates could be an option to eliminate the low yields, field studies were conducted from 2005 through 2010 in Calhoun County, Texas, to determine soybean cultivar response to planting dates ranging from mid-February through the end of April. Typically, soil temperatures in this area are above 18 °C in mid-February and, depending on weather patterns, may not fall much lower at any time in the early portion of the growing season. The greatest yield was obtained with the mid-February and mid-March planting dates compared with early- or late-April planting dates. Typically, as planting date was delayed, the interval between planting and harvest decreased.
Introduction
Soybeans (Glycine max L.) are grown along the upper Texas Gulf Coast, and this area has become the largest soybean production area in the state. Most of the soybeans are planted from mid-March through May and are categorized as early soybean production system (ESPS) plantings. Production components such as planting date and variety can be manipulated to counter the effects of various environmental factors on soybean development and yield [1][2][3][4][5]. The rationale for planting early is to avoid the high temperatures of July and August and to take advantage of late spring and early summer rains for maximum flowering, seed set, and seed filling [4,6]. Stress can reduce soybean yield by reducing the number of pods and seeds and the seed mass [7,8]. Both determinate and indeterminate soybean cultivars have reduced growth rates under drought stress and resume normal growth rates when the stress is removed [8]. This may be an important growth attribute to consider if producers expect considerable soil moisture deficits due to short, intermittent droughts during the growing season [5].
The effect of planting date on soybean yield can vary substantially from year to year depending on variations in environmental conditions, principally rainfall amounts and distribution [9]. Soybean planted along the upper Texas Gulf Coast in late March through April can yield over 1400 kg/ha when timely rainfall is received during the growing season [10]. Soybeans planted during early April through early May in the Calhoun County area of Texas have yielded less than 1000 kg/ha, and although the plants develop normally, the pods in many instances do not contain any soybeans. Stink bugs, including the green (Nezara viridula) and brown (Euchistus heros) stink bug, have been mentioned as a source of the problem (M. O. Way, personal communication), and insecticide applications add to production costs. Calhoun County borders the coastal area of the Gulf of Mexico, so it was thought that an extremely early planting date could take advantage of the early-season rainfall, and the threat of stink bugs would be less of an issue since they typically move into soybeans after grain sorghum (Sorghum bicolor L. Moench) has matured and been harvested, usually in early July through August.
Therefore, the objectives of this research were to identify the components of soybean production, encompassing cultivar and planting date, that could increase soybean yield in Calhoun County under varying moisture conditions without stink bugs as one of the limiting factors. This information will aid producers in adopting earlier planting dates that will improve soybean yield and reduce the chance of yield reductions.
Materials and Methods
Studies were conducted in two areas of Calhoun County, which are about 30 to 40 km apart. The soil at the southwestern area of the county was a Houston Black clay (fine, smectitic, thermic Udic Haplustert) with 1% organic matter and a pH of 7.4 to 7.7, while the soil in the north-central area was also a Houston Black clay with <1% organic matter and a pH of 7.3 to 7.8. Since soybean is a legume and can produce its own nitrogen, no nitrogen fertilizer was applied by the grower; however, phosphorus and potassium were applied as needed according to Texas Cooperative Extension recommendations for soybean.
Soybean Cultivars.
Cultivars (late Group IV through late Group V) selected for the study were those that had shown promise in previous studies or had performed well in other soybean-producing regions of Texas or surrounding states (Table 1). However, difficulty in obtaining the cultivars from the seed companies for the extremely early planting dates limited the cultivars selected and prevented many of the same cultivars from being used in each year of the study. For ease of reporting, these cultivars will be referred to by maturity group. Soybean seed was planted on a slightly raised seedbed (except in 2010) with a vacuum planter (Monosem ATI, Inc., Lenexa, KS) to provide a uniform seeding rate of 33 seed/m (55,847 seeds/ha) on a pair of rows with 97 cm centers.
Planting Dates.
Planting dates, approximately three weeks apart, were used each year, with the first planting date around the 15th of February depending on weather conditions (Table 2). The March and early-April planting dates in 2007 were delayed due to above-normal rainfall received during the normal March to early-April planting window, which prevented entry into fields (Table 3). In 2009, the second planting was delayed due to extremely dry conditions, while plantings in 2010 were delayed due to extremely wet conditions that persisted throughout the early part of the growing season and prevented entry into the field (Table 3). The 2010 plantings were also on flat ground without beds, since land preparation could not be completed during the fall of 2009 due to the extremely wet conditions. Later plantings in April were attempted in 2010, but poor stands developed because the flat ground remained extremely wet after heavy April rains; because of the poor stands, it was felt that data from those April plantings would not give accurate results. For ease of reporting, the planting dates will be referred to as February 20, March 15, April 5, and April 25. All planting dates were within 3 to 5 days of these dates with few exceptions (Table 2).
Plant Stands and Plant Height at Maturity.
Plant stand counts were not taken in 2005; however, stand counts were taken approximately 6 weeks after soybeans were planted in all other years. Plant height was measured in 2008 through 2010 approximately 3 to 4 weeks prior to harvest, with measurements taken from ground level to the tip of the plant growth terminal. Five plants per plot were measured and the average was recorded.
Determining Cultivar Maturity and Harvesting.
Physiological maturity of soybean seed occurs when the accumulation of dry weight ceases [11]. This stage first occurs when the pod turns yellow or has completely lost its green color. With favorable drying weather, the soybeans lose moisture quickly [11]. For all cultivars, paraquat at 0.28 kg/ha was applied when at least 70% of the seed pods had reached a mature brown color or when the seed moisture was 25% or less [12]. These guidelines were adopted from the US Gramoxone Inteon label [12] and used on all cultivars. Within 3 to 5 days, when seed moisture was approximately 12%, plots were harvested with a small plot combine. At each 3- to 5-day interval, additional cultivars were checked for color and moisture content and, if at the desired level, sprayed with paraquat.
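The harvest-aid rule above is a simple threshold protocol; the toy sketch below encodes it directly. The thresholds come from the text, while the function names are our own invention.

```python
# Toy encoding of the harvest-aid decision rule described above: spray
# paraquat once >=70% of pods are mature brown OR seed moisture is <=25%,
# then harvest when seed moisture falls to about 12%.
def ready_for_paraquat(pct_brown_pods: float, seed_moisture_pct: float) -> bool:
    return pct_brown_pods >= 70.0 or seed_moisture_pct <= 25.0

def ready_for_harvest(seed_moisture_pct: float) -> bool:
    return seed_moisture_pct <= 12.0

print(ready_for_paraquat(75.0, 28.0))  # True: pod-color criterion met
print(ready_for_harvest(12.0))         # True: at harvest moisture
```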
Stink Bug Control.
Typically, soybean producers along the upper Texas Gulf Coast make 2 to 3 insecticide applications during the growing season depending on stink bug numbers (authors' personal observations). These growers treat soybean as a secondary crop and are typically not willing to spend the money, time, and effort to fully control these insects. In our studies we tried to duplicate this practice and normally made the first insecticide application when stink bug numbers reached threshold values; this was followed by a second insecticide application two to four weeks later.
Experimental Design and Data Analysis.
The treatment design was a factorial arrangement using a randomized complete block design with planting date and soybean maturity group (cultivar) as factors. To reduce harvesting difficulties when using a small plot combine, it was decided to keep all of the plots for one planting date physically together, with cultivars randomized within planting dates. Because the experimental areas were quite uniform in their surface drainage and soil type, we felt that the effect of physical field location would be small compared with the planting date and cultivar effects. Replicates were separated by 1.7 m, and planting dates were separated by 6.4 m. Each cultivar was replicated three times within each planting date, with a soybean cultivar plot size of 2 rows (97 cm centers) by 9.1 m long. An analysis of variance was performed using the ANOVA procedure of SAS [13] to evaluate the significance of planting date and soybean cultivar effects on soybean plant stands, plant height, and yield. Fisher's protected LSD at the 0.05 level of probability was used for separation of means. Since environmental conditions differed at each location and soybean cultivars varied from year to year due to availability, data are presented separately by year.
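As a concrete illustration of the analysis just described, the sketch below sets up the factorial ANOVA (planting date × cultivar, with replicate as the blocking term) in Python using statsmodels. The study itself used SAS; this is only an analogous illustration, and the data file and column names are hypothetical.

```python
# Sketch of the factorial RCBD analysis: two-way ANOVA of yield with
# planting date and cultivar as factors and replicate as the blocking term.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("soybean_trial.csv")  # columns: yield_kg_ha, date, cultivar, block

model = ols("yield_kg_ha ~ C(date) * C(cultivar) + C(block)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects, interaction, block

# Fisher's protected LSD would follow only if the relevant F-test is
# significant, e.g., as pairwise t-tests among planting-date means.
```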
Soil Temperature and Rainfall.
Soil temperatures (Table 5) were above 18 °C at all planting dates. The ideal soil temperature for soybean germination and emergence is 25 °C at the 5 cm depth [11]. However, soybean can germinate at soil temperatures as low as 10 °C, and it is not unusual for emergence to take 3 weeks at these temperatures [11]. Other studies have reported that early plantings may delay and decrease seedling emergence if the soil is cold and wet at planting [9], resulting in plant populations that are below the threshold for maximum yield [14]. Rainfall events for February through June were below normal, with above-normal rainfall for July (Table 3).
Soybean Plant Height as Influenced by Planting Date and Maturity Group.
In 2009, plant height differences were found between the February and mid-March planting dates for all maturity groups, and between mid-March and the 5th of April for all maturity groups with the exception of the 5.4 maturity group. For the late-April planting date, the 4.9 maturity group soybean produced a lower plant height than in the April 5th planting. No other plant height differences were noted between the April plantings. Although above-normal rainfall was observed in April, rainfall was below normal throughout the rest of the growing season (Table 3).
In 2010, plant height increased when planting was delayed for the 4.9 and 5.1 maturity group soybeans; no other differences were noted. Rainfall for May through July was above normal and may have accounted for some of the increased plant height as planting date was delayed. Previous research has shown a reduction in plant height and node numbers as planting date was delayed [15,16]; however, those studies were based on May through July plantings, and such reductions are expected considering the photoperiod effect [15]. Since the plantings in this study were made early in the season, a similar photoperiod response would not be expected.
Soybean Yield as Influenced by Planting Date and Maturity Group.
There was a maturity group (cultivar) by planting date interaction for each year; therefore, data are presented separately by maturity group and planting date. Generally, the mid-February and March planting dates produced the highest yields, with the exceptions of 2005 and 2010 (Table 7). In 2005, the above-normal rainfall in March may have accounted for yields with the April 5 planting that were not seen in 2006 through 2008. The 4.6 maturity group soybean produced the greatest yield when planted in February, while the 5.1 soybean yielded the highest when planted in mid-March.
In 2006, yields were poor with the February and March planting dates, and no yield was produced when soybeans were planted in April. Rainfall was below normal for the February through April period but was normal or above normal for May through July (Table 3). In 2007, soybean yields were at least 1600 kg/ha for all maturity group soybeans planted in February or March, with the exception of the 4.8 maturity group soybean planted in mid-March, which yielded less than 1100 kg/ha. No yield was obtained with the April plantings. In 2008, trends similar to those seen in 2006 were noted due to the extremely dry conditions during the growing season (Table 3).
In 2009, soybean planted in February or March produced yields ranging from 1200 to over 2100 kg/ha regardless of maturity group, while soybean planted at the first of April yielded 232 to 773 kg/ha (Table 7). The 5.0 maturity group soybean produced the greatest yield at either the February or March planting date. Overall, 2009 was extremely dry, but a major rain event in April greatly helped improve the yields. In 2010, extremely wet conditions persisted throughout the early portion of the growing season. Soybean yields with either the March or April planting dates were greatest with the 5.0 and 5.9 maturity group soybeans. No differences in soybean yield were noted between planting dates, with the exception of the 4.9 maturity group soybean, which produced a greater yield when planted in the first part of April compared with the March planting date.
Number of Days from Planting to Harvest.
Generally, the later the planting date, the shorter the interval between soybean planting and harvest [10]. However, in years with above-average rainfall, trends toward a greater number of days from planting to harvest were noted. This is important in that the longer the plant is exposed to the elements, whether to an increasing chance of a hurricane or to an increase in green or brown stink bug populations, the greater the chance of yield loss [10].
In 2005, the planting-to-harvest interval varied across planting dates and maturity groups and was not consistent (Table 8). In 2006, the interval between planting and harvest became shorter by 8 days as planting was delayed from February to March. In 2007, the interval between planting date and harvest date was longer for the March-planted soybean than in 2006, and this was due to the above-average rainfall (Table 3) and below-normal temperatures (data not shown). These weather conditions slowed plant growth and therefore extended the growing season [7,10]. When the soybean plant reaches beginning maturity, warm weather does not hasten maturity unless it causes water deficit stress; maturity is more strongly influenced by photoperiod [11].
In 2008, the interval between planting and harvest for the March planting date was shorter than in 2006 and 2007 due to the dry, hot conditions [10,11]. All soybeans planted in February were harvested 119 days after planting, while those planted in March were harvested 110 days after planting (Table 8). In 2009, the planting-to-harvest interval decreased as planting was delayed from February, while in 2010 little or no difference was noted.
Heatherly [2] reported that, near Stoneville, MS, cultivars planted before 16 April took an average of 5 days longer to reach R1 (beginning bloom) than did cultivars planted after 16 April to 1 May. When cultivars were planted from May through June, the number of days to R1 decreased as planting date was delayed. Heatherly [2] concluded that the reproductive period of later-maturing cultivars would occur later in the season when stored soil moisture has been reduced, probability of rainfall is lower, and air temperatures are higher.
Conclusion
Planting soybean in February to mid-March is an option for soybean growers in the Calhoun County (28.5° N latitude) area of the upper Texas Gulf Coast. Soybean planted in early April produced over 1100 kg/ha in two out of six years, while in the other years little or no yield was obtained with this planting date. When planting date was delayed until late April, no soybean yields were produced. Also, the length of time from planting until harvest is an issue for producers who are concerned with stink bug population increases after grain sorghum harvest and the increased chance of hurricanes as the season progresses.
Under less than optimum growing conditions due to dry weather, the early plantings in February took advantage of available soil moisture and produced yields of at least 1000 kg/ha. In contrast, Heitholt et al. [17] reported that a mid-March planting date was not desirable for North Texas (33° N latitude) due to stand loss and poor seedling growth associated with cold and wet weather conditions. They concluded that waiting until mid-May to plant soybeans in that region was less successful than planting in April. Bowers [1] also reported on similar work in North Texas and found that, in general, April plantings outyielded May plantings across all twelve cultivars. The use of maturity group V cultivars resulted in fruiting during the hot, dry conditions normally found in July and August, while the early maturing types fruited during June, when soil moisture was adequate and temperatures were not as severe [1].
Another advantage to early planting is that, in 2007, when Asian soybean rust (Phakopsora pachyrhizi) was found in many soybean fields along the upper Texas Gulf Coast in early to mid-July, the soybean in the earlier planted plots had already been harvested and was out of the field before this disease became an issue.
Status and trends of orthophosphate concentrations in groundwater used for public supply in California
Phosphorus is a necessary nutrient for all organisms. However, excessive phosphorus can cause eutrophication in surface water. Groundwater can be an important nonpoint contributor of phosphorus to surface water bodies. Most groundwater phosphorus is in the form of orthophosphate, and orthophosphate concentrations in California groundwater vary temporally and geographically. This study quantifies orthophosphate concentrations in water samples from public supply wells in California, evaluates temporal trends (both step and monotonic trends) in orthophosphate concentration for different areas of the state, and explores potential explanatory factors for the trends observed. Orthophosphate concentrations are low in 42 percent of the groundwater used for public supply in California, moderate in 43 percent, and high in 15 percent of this groundwater relative to reference conditions and a goal expressed by the USEPA for streams overlying the aquifers. The findings also suggest that orthophosphate concentrations increased in approximately one-third of this groundwater during the study period (2000 to 2018). The timing of orthophosphate increases observed in time-series evaluations coincided approximately with the timing of increases observed in step-trend evaluations, with both suggesting that the increasing trend occurred mostly before 2011. Principal component analysis (PCA) of the statewide dataset indicates that orthophosphate concentrations are antithetically related to dissolved oxygen (DO) and weakly associated with boron, arsenic, and fluoride. Step-trend and time-series trend analyses using PCA were inconclusive.
Introduction
Phosphorus is a necessary nutrient for all organisms, and is common in soils, rocks, and sediments (Hem 1992). However, excessive phosphorus in surface water bodies can lead to eutrophication, because phosphate is often the limiting nutrient for the growth of aquatic plants in fresh water (Drever 1997; Litke 1999). Groundwater can be an important nonpoint contributor of phosphorus to surface water bodies (Litke 1999; U.S. Geological Survey 1999). Phosphorus occurs in natural water primarily as dissolved ortho-, pyro-, and polyphosphates (Hem 1992). Most groundwater phosphate is in the form of the orthophosphate (PO₄³⁻) ion (Domagalski and Johnson 2012), because it is more thermodynamically stable than the other common P⁵⁺ ions likely to occur in natural waters (Hem 1992). Orthophosphate is naturally present in rivers, streams, and lakes that recharge aquifers, as well as in the aquifer materials themselves from the erosion of rocks, and the recycling of animal waste and plant and animal tissue (Hem 1992).
In addition to naturally occurring orthophosphate, human activity also contributes orthophosphate to surface water that subsequently recharges aquifers. Agriculture contributes orthophosphate through the use of chemical phosphorus fertilizers, manure, and composted materials (Domagalski and Johnson 2011, 2012). From about 1940 to 1970, orthophosphate, used as a calcium- and magnesium-chelating agent in laundry detergent (Kogawa et al. 2017), was a major contributor of phosphorus to the environment (U.S. Geological Survey 1999). From the 1970s to the 1990s, the use of phosphate detergents declined due to mandated bans and voluntary cessation of its use (Litke 1999; U.S. Geological Survey 1999). The amount of phosphate discharged to the environment has also decreased, starting in the 1990s, as a result of upgraded wastewater treatment plants (U.S. Geological Survey 1999).
The US Environmental Protection Agency (USEPA) recommends nutrient concentration criteria that estimate reference conditions for rivers, streams, and lakes, by ecoregion, based on the 25th percentiles of all available nutrient data (USEPA 2000a, 2000b, 2001). Reference condition concentrations for total phosphorus in the twelve ecoregions located at least partially in California range from 0.009 to 0.077 milligrams per liter (mg/L) as P. The USEPA has also expressed desired phosphorus limits for the prevention of surface water eutrophication (USEPA 1986). The desired limit for total phosphates in streams flowing into a lake or reservoir is 0.050 mg/L as phosphorus (as P). The desired limit for total phosphorus in other flowing waters not directly discharging to lakes or reservoirs is 0.100 mg/L as P (Mackenthun 1973). These other flowing waters that transport phosphorus to streams may include groundwater discharge (Domagalski and Johnson 2011). Dissolved phosphate occurs in small concentrations in natural water because it has low mobility, is readily taken up by biota, and adsorbs to metal oxides in soils (Hem 1992; Litke 1999). However, anthropogenic inputs can cause orthophosphate concentrations in natural waters, including groundwater, to be greater than reference conditions or the desired limits to prevent eutrophication (Holman et al. 2008).
It is imperative to understand how nutrients such as phosphorus change in concentration over time to better manage areas vulnerable to eutrophication. Groundwater quality changes over time are termed "temporal trends" in this study. Temporal trends of groundwater quality are difficult to assess due to the long time scales involved with groundwater movement and the resulting changes in quality, although relatively short-term studies are useful for monitoring the progress of local or statewide remediation efforts (McHugh et al. 2014; Saraceno et al. 2018; Stoline et al. 1993) or to observe short-period variability in groundwater quality (Granato and Smith 1999; MacDonald et al. 2017; Opsahl et al. 2017; Saraceno et al. 2018). In contrast to these studies are the long-term or continuing groundwater quality trend assessments conducted on regional or national spatial scales (e.g., Rosen 1999, 2001; Rosen and Lapham 2008), and these may reach century temporal scales (e.g., Hansen et al. 2018). Groundwater quality trend studies may focus on only one or two water-quality constituents (Hantzsche and Finnemore 1992; Rosen 2003; Batlle Aguilar et al. 2007; Burow et al. 2007; Landon et al. 2011; Kent and Landon 2013; Naranjo et al. 2013; Hansen et al. 2018) or several water-quality constituents (Stoline et al. 1993; Rosen 1999; Barlow et al. 2012; Lindsey and Rupert 2012; Kent and Landon 2016; Kent 2018). An understanding of how and why concentrations of water-quality constituents are changing over time is helpful to water resource managers as they plan for the future.
There have been several studies describing water-quality trend analysis methods (Hirsch et al. 1991; Loftis 1996; Grath et al. 2001; Wahlin and Grimvall 2010; Lopez et al. 2014; Kent 2018), as well as studies that estimate the ability of these methods to assess and predict future groundwater quality (Hantzsche and Finnemore 1992; Stuart et al. 2007; Visser et al. 2009; Naranjo et al. 2013). Some trend evaluation studies are purely descriptive, involving neither formal hypothesis testing nor quantification, but include graphical methods and summary statistics (Bodo 1989; Esterby 1996; Jurgens et al. 2018). However, most studies that evaluate temporal trends in water quality use one of two statistical modes (Hirsch et al. 1991). The first mode performs hypothesis tests on the differences between two or more water-quality datasets collected at distinct time periods (Burow et al. 2008; Rupert 2008; Saad 2008; Barlow et al. 2012; Lindsey and Rupert 2012; Kent and Landon 2016; Kent 2018).
Changes detected by this mode are sometimes referred to as step trends.
Step trends, if they exist, are more likely to be detected with a greater number of sample pairs (Anderson 1987), and if there is a relatively long gap between the time periods (Hirsch et al. 1991). The second statistical mode performs correlation tests on data time series where time is the independent variable and some measure of water quality is the dependent variable (Stoline et al. 1993; Rosen 2003; Shipley and Rosen 2005; Batlle Aguilar et al. 2007; Landon et al. 2011; Kent and Landon 2013; Chaudhuri and Ale 2014). Changes detected by this mode are sometimes referred to as monotonic trends (Hirsch et al. 1991; Esterby 1996), and the European Water Framework Directive recommends at least 8 measurements when using this mode of trend analysis (Grath et al. 2001). The present paper quantifies orthophosphate concentrations in water samples from public supply wells in California, evaluates temporal trends (both step trends and monotonic trends) in orthophosphate concentration for different areas of the state, and explores potential explanatory factors for the trends observed.
California Groundwater Ambient Monitoring and Assessment Program Priority Basin Project
The California State Water Resources Control Board implemented the Groundwater Ambient Monitoring and Assessment (GAMA) program to assess California groundwater quality (GAMA, http://www.waterboards.ca.gov/gama/). The GAMA Priority Basin Project (GAMA-PBP) is a component of GAMA, conducted in cooperation with the US Geological Survey (USGS) (http://ca.water.usgs.gov/gama/; Belitz 2004-rev. 2006). GAMA-PBP is conducting three types of water-quality assessments as follows: (1) status of groundwater quality, (2) understanding of factors that affect groundwater quality, and (3) trends in groundwater quality. GAMA-PBP studies span all the major hydrogeologic provinces of California (Belitz et al. 2003), and use consistent methods to collect groundwater quality datasets (Koterba et al. 1995; U.S. Geological Survey variously dated). The statewide status and understanding assessments began in 2004 and were conducted by sequentially sampling 35 defined "study units" ranging in area from less than 80 km² (Santa Barbara study unit) to more than 40,000 km² (Sierra Nevada study unit) (Online Resource 1). The trend assessment began in 2007, and is ongoing (Kent and Landon 2013; Kent et al. 2014; Kent 2015; Kent and Landon 2016; Mathany 2017; Kent 2018).
Well selection
Three different procedures were used to select wells to evaluate status, step trends, and monotonic (time-series) trends of orthophosphate in California groundwater used for public supply.
Status well selection
The initial sampling of wells for the GAMA-PBP program was designed to provide a spatially unbiased status assessment of the quality of untreated groundwater used for public water supplies in California. Study areas were the fundamental unit of organization for the GAMA-PBP. The GAMA-PBP assessed 87 study areas defined in this manner, which included nearly all the groundwater used statewide for public drinking water supply. From 2004 to 2011, the GAMA-PBP collected samples from more than 2000 wells, of which 1114 were analyzed for orthophosphate as part of the status assessment (Fig. 1, Online Resources 1 and 2). Details on the selection of wells, grid design, analytical approach, and additional research topics for each study unit can be found in the relevant USGS Reports accessible from the "Publications" link at: http://ca.water.usgs.gov/projects/gama/.
Trend well selection for step-trend evaluations
Approximately 3 and 10 years after their respective initial sampling, a subset of status wells were selected for resampling as "triennial" and "decadal" trend wells (Fig. 2). From 2007 to 2013, the GAMA-PBP collected samples from 226 wells to evaluate triennial trends, approximately 10 percent of the status wells in each study area (Online Resource 3). The 226 triennial trend wells represent 34 of the 35 GAMA-PBP study units and 83 of the 87 GAMA-PBP study areas (Kent and Landon 2016). Triennial trend wells were randomly selected from status wells that were still available for sampling in each study area (Kent et al. 2014; Kent 2015; Mathany 2017).
Decadal trend-well sampling began in 2014, continues in 2020, and will be completed in 2021. A revised trend-sampling strategy, which began in 2015, resamples 20 percent of status wells every 5 years. By the end of 2018, 352 wells had been resampled for decadal trends (Online Resource 3). As with triennial trend wells, decadal trend wells are selected randomly from status wells that are still available for sampling in each study area, but with three additional considerations as follows: (1) preference is given to wells that have been sampled as triennial trend wells, (2) preference is given to wells whose initial samples were analyzed for the most complete set of constituents, and (3) decadal trend wells are selected with efforts to provide an approximately even areal distribution of trend wells throughout study areas.
Only wells which have orthophosphate results providing at least one pair-wise comparison among the three sampling intervals (initial, triennial, decadal) could be included in the step-trend evaluations. Therefore, the pair-wise comparisons were approximately 3, 7, or 10 years apart. Note that there were 3 instances in which initially sampled wells had been replaced by new wells in approximately the same location by the time of decadal sampling. Comparisons of major ion and isotope chemistry between samples from the initially sampled and the replacement wells indicated that, in all 3 cases, groundwater quality in the replacement wells was representative of the groundwater quality in the initially sampled wells. Figure 2 shows the locations of the 352 step-trend wells and identifies which pair-wise comparisons (step-trend evaluations) were done on each of them. Site identifications and attributes for these wells are provided in Online Resource 3.

Fig. 1 Map of California showing the 1114 public supply wells sampled for the orthophosphate status assessment, the hydrogeologic zones, the numbered USEPA level III ecoregions, and the reference concentrations for total phosphorus in flowing waters overlying the aquifers defined for each ecoregion by the US Environmental Protection Agency. The public supply wells are symbolized by their orthophosphate relative concentration category.

Fig. 2 Map of California showing hydrogeologic zones, USEPA level III ecoregions, and the 352 GAMA-PBP wells evaluated for step trends. Yellow symbols are wells evaluated for a step trend between initial and triennial sampling (E1). Blue symbols are wells evaluated between initial and decadal sampling (E2). Red symbols are wells evaluated between triennial and decadal sampling (E3). Green symbols are wells evaluated for step trends between all 3 intervals (E1, E2, and E3).
Trend well selection for time-series evaluations
The USGS maintains a database of over 75,000 wells and water-quality results in California. These wells include the ones sampled by GAMA-PBP as well as many other projects dating back several decades. Wells for time-series trend evaluation (monotonic trends) were selected from this database (National Water Information System-NWIS-U.S. Geological Survey 2018) based on the availability of at least 8 orthophosphate results for each well, spread over at least 8 years from the year 2000 to 2018. The minimum requirement of 8 results was imposed because the European Water Framework Directive recommends at least 8 measurements when using this mode of trend analysis (Grath et al. 2001). The time period from 2000 to 2018 was selected to approximately coincide with the time period evaluated for step trends (2004 to 2018). Because censored results (those expressed simply as a concentration less than the reporting level) cannot be analytically distinguished, at least 7 of the results for each well needed to be uncensored (detections) for orthophosphate to provide the required 8 distinct results. All censored results were substituted with the value 0.002, which was less than all detected concentrations, so that this value was the lowest ranking result for each time series. When two or, at most, three orthophosphate results per year were available for a well, the mean value for that year was used to give only one result per year. The mean value was used so that each year with orthophosphate data would have equal weight in the time-series evaluations.
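The screening and censored-data rules just described are easy to misread, so the following minimal sketch restates them as code. It assumes one well's results sit in a pandas DataFrame with hypothetical column names (year, op_mg_L, and a boolean censored flag); the function name is illustrative and this is not the study's actual processing code. The "spread over at least 8 years" criterion is read here as requiring results in at least 8 distinct years.

```python
from typing import Optional

import pandas as pd

CENSOR_SUB = 0.002  # substituted for every censored result; below all detections


def prepare_well_record(df: pd.DataFrame) -> Optional[pd.DataFrame]:
    """Screen and prepare one well's orthophosphate record for trend testing.

    Returns annual mean concentrations, or None if the well fails the
    selection criteria described in the text.
    """
    df = df[(df["year"] >= 2000) & (df["year"] <= 2018)].copy()
    # Selection criteria: at least 8 results spread over at least 8 years,
    # with at least 7 uncensored results (detections).
    if len(df) < 8 or df["year"].nunique() < 8:
        return None
    if (~df["censored"]).sum() < 7:
        return None
    # Censored results share a single value lower than any detection,
    # so they all take the lowest rank in the nonparametric tests.
    df.loc[df["censored"], "op_mg_L"] = CENSOR_SUB
    # Years with two or, at most, three results are averaged so that each
    # year carries equal weight in the time-series evaluation.
    return df.groupby("year", as_index=False)["op_mg_L"].mean()
```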
An additional requirement was imposed to ensure that all time-series-evaluated wells were of an appropriate depth to represent the groundwater resource used for public supply. Wells that lacked a depth measurement or that were shallower than any of the wells sampled for the GAMA-PBP status assessment of the public-supply resource in the corresponding study unit were excluded from the time-series evaluations. Wells shallower than those sampled for GAMA-PBP status assessment in each area may be unrepresentative of the resource used for public supply. Site identifications and attributes for the 141 wells meeting the requirements for time-series evaluation are provided in Online Resource 4.
Sample collection and analyses
Groundwater samples for the GAMA-PBP are collected using consistent protocols designed to minimize inadvertent sample contamination (Koterba et al. 1995; U.S. Geological Survey, variously dated). Detailed descriptions of sample collection and analysis methods can be found in USGS GAMA-PBP Data Series Reports accessible from the "Publications" link at: http://ca.water.usgs.gov/projects/gama/. Trend samples were analyzed for a large suite of constituents. Analytical methods for the constituents mentioned in this study, including nutrients (such as orthophosphate), major ions, trace metals, and isotopes, are described in Kent et al. (2014), Kent (2015), and Mathany (2017).
Status evaluation of orthophosphate concentrations
The status of orthophosphate in California groundwater used for public supply was evaluated by comparing orthophosphate concentrations in the 1114 status well samples analyzed for orthophosphate during the GAMA-PBP status assessment (Online Resource 2) with two tiers of benchmark concentrations. The benchmarks are based on work done by the US Environmental Protection Agency (USEPA). The first benchmark concentration tier is the level III ecoregion-specific reference concentration for total phosphorus in California flowing waters (U.S. Environmental Protection Agency 2000a, 2000b, 2001) (Fig. 1). California spans twelve level III ecoregions with total phosphorus reference concentrations ranging from 0.009 to 0.077 mg/L. The second benchmark concentration tier is 0.100 mg/L, which is the desired limit for total phosphorus in flowing waters (including groundwater to surface water discharges) not directly discharging to lakes or reservoirs (U.S. Environmental Protection Agency 1986). Note that the use of benchmarks based on total phosphorus includes phosphorus species that may not be readily transported to groundwater.
For the present status assessment, orthophosphate concentrations that are less than the ecoregion-specific reference concentration for flowing water in the ecoregion of the sampled well are considered "low." Concentrations that are between the ecoregion-specific reference concentration and the second benchmark concentration of 0.100 mg/L as P are considered "moderate." Concentrations greater than 0.100 mg/L as P are considered "high." Note that the upper end of the ecoregion-specific reference concentrations is 0.077 mg/L as P, leaving a narrow range (≥ 0.077 and < 0.100) for the designation of moderate conditions to groundwater in the Central Valley, where that upper value is applied. This 3-tiered status assessment method is similar to the method used by the GAMA-PBP to provide context for concentrations of constituents that have health-based thresholds for drinking water. Ecological thresholds were used in the present study because there are no health-based thresholds for phosphorus species in drinking water.
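Because the three tiers hinge on two different benchmarks, one of which varies by ecoregion, a compact sketch of the classification rule may help; the function and argument names are illustrative, not from the study.

```python
HIGH_BENCHMARK = 0.100  # mg/L as P; USEPA desired limit for flowing waters


def classify_orthophosphate(conc: float, ecoregion_ref: float) -> str:
    """Three-tier relative-concentration category used in the status assessment.

    conc: orthophosphate concentration (mg/L as P) in the well sample.
    ecoregion_ref: USEPA level III ecoregion reference concentration
    (0.009 to 0.077 mg/L as P for California ecoregions).
    """
    if conc < ecoregion_ref:
        return "low"
    if conc > HIGH_BENCHMARK:
        return "high"
    return "moderate"


# A Central Valley well, where the 0.077 mg/L as P reference applies:
print(classify_orthophosphate(0.085, 0.077))  # -> "moderate"
```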
Cluster analysis
Cluster analysis was performed to identify statistically significant clusters (p value ≤ 0.05) of high and low values of orthophosphate using the categories described above. The Hot Spot Analysis tool in Esri's ArcPro software (Esri Corporation 2019) uses the Getis-Ord Gi* spatial statistic (Ord and Getis 1995) and was run using the 1114 "status" wells with orthophosphate values. The test compares the observed value at a well with its neighbors to determine if the comparison resembles or differs from the mean. The test requires the user to determine a distance at which spatial autocorrelation ceases, indicating that orthophosphate values beyond this distance have no correlation with one another. For this purpose, a semi-variogram plot was created to determine the distance at which the variance plateau occurs. Any well located beyond the determined distance from another well would not be considered part of the same cluster. The False Discovery Rate Correction (FDRC) option was selected when running the tool to mitigate multiple testing and spatial dependency issues (Esri Corporation 2019). The FDRC is a conservative measure, effectively increasing p values and reducing the likelihood for the observed values to be statistically significant.
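The study ran the Getis-Ord Gi* statistic through Esri's Hot Spot Analysis tool; the numpy sketch below shows only the underlying Gi* z-score calculation with binary distance-band weights (each well counts as its own neighbor, the "*" in Gi*). It assumes projected well coordinates in meters, and it omits the semivariogram step and the False Discovery Rate Correction.

```python
import numpy as np


def getis_ord_gi_star(coords: np.ndarray, values: np.ndarray,
                      threshold_m: float = 25_000.0) -> np.ndarray:
    """Gi* z-scores with binary distance-band weights (Ord and Getis 1995).

    coords: (n, 2) projected well coordinates in meters (assumption);
    values: (n,) orthophosphate concentrations for the status wells.
    """
    n = len(values)
    xbar = values.mean()
    s = np.sqrt((values ** 2).mean() - xbar ** 2)
    # Binary weights: 1 for every pair within the 25 km search distance
    # identified from the semivariogram, including the well itself.
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    w = (dists <= threshold_m).astype(float)
    wi = w.sum(axis=1)                       # sum of weights for each well
    num = w @ values - xbar * wi
    den = s * np.sqrt((n * (w ** 2).sum(axis=1) - wi ** 2) / (n - 1))
    return num / den  # z > ~2 flags a high cluster, z < ~-2 a low cluster
```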
Statistical methods for the determination of step trends
Temporal trends in groundwater quality cannot be detected in individual wells by comparing results from just two samples collected over each time interval. In this study, step trends were evaluated by grouping the wells into categories that might be expected to share relatively similar geologic, climatic, and hydrologic characteristics. Belitz et al. (2003) defined 10 hydrogeologic provinces in their framework report, which established the design of the GAMA-PBP. For the present study, the study units were grouped into 5 condensed "hydrogeologic zones" as follows: Central Valley, Coastal, Desert, Mountain, and Southern California (Fig. 1). The zones were condensed to increase the number of wells in each group because the ability for a statistical test to detect a difference (when it exists) improves with increasing sample size (Anderson 1987). It should be noted that despite efforts to group the wells into hydrogeologic categories with similar characteristics, variations in geology, climate, and hydrology were large within each hydrogeologic zone.
Hypothesis tests on the grouped differences of paired samples were used to conclude whether step trends in orthophosphate concentrations had occurred in groundwater statewide, and within each of the 5 hydrogeologic zones. Three time intervals were evaluated (Fig. 2). The first evaluation interval (E1) compared initial orthophosphate concentrations with concentrations in samples collected from the same 144 wells approximately 3 years later (triennial sampling). The second evaluation interval (E2) compared initial concentrations in 227 wells with concentrations in samples collected approximately 10 years later (decadal sampling). The third evaluation interval (E3) compared concentrations in samples collected in 159 wells during triennial sampling with concentrations in samples collected during decadal sampling. Therefore, sample pairs for E3 were collected approximately 7 years apart. Comparisons among the evaluation intervals were made on paired samples from the same wells. However, inferences of a step trend are drawn at the statewide scale and for each hydrogeologic zone, and not for individual wells.
Before the hypothesis tests were performed, the data were processed using the "GAMA Replicate Acceptability Criteria" method described by Kent (2018), so that small differences in the paired results, due to analytical limitations, would not support an inference that a step trend had occurred. After processing the data, a Wilcoxon signed-rank test with a modification proposed by Pratt (1959) was performed comparing the paired orthophosphate results (initial and trend sampling results) statewide and within each hydrogeologic zone to determine whether the concentration was increasing or decreasing in a statistically significant way. The Wilcoxon signed-rank test is a nonparametric alternative to a paired t test that does not assume that the data have a normal distribution, an assumption often violated with water-quality data (Helsel and Hirsch 2002). The Wilcoxon signed-rank test is used to test whether the median difference between paired observations equals zero (null hypothesis). The absolute values of the differences are ranked, so that the relative magnitudes and the relative number of changes in each direction (increases or decreases) are both taken into consideration. When there is no difference between paired results (ties), the traditional Wilcoxon signed-rank test discards that pair during the ranking step. The Pratt modification ranks the observations, including the tied pair results, and then drops the ties before performing the test. A trend was considered detected at a significance level ≥ 95 percent (α = 0.05).
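As a concrete illustration, the test described above maps directly onto scipy, whose wilcoxon function implements the Pratt treatment of zeros via zero_method="pratt". This is a minimal sketch, not the study's actual code; it assumes the "GAMA Replicate Acceptability Criteria" screening has already set sub-threshold pairs exactly equal so they appear as ties.

```python
import numpy as np
from scipy.stats import wilcoxon


def step_trend_test(initial, trend, alpha=0.05):
    """Grouped step-trend test: paired Wilcoxon signed-rank test with the
    Pratt (1959) treatment of zero differences (rank with ties, then drop).

    `initial` and `trend` are aligned concentration arrays from the same
    wells (statewide, or one hydrogeologic zone).
    """
    initial = np.asarray(initial, dtype=float)
    trend = np.asarray(trend, dtype=float)
    stat, p = wilcoxon(trend, initial, zero_method="pratt")
    # Direction of a detected trend, from the median paired difference.
    direction = "increasing" if np.median(trend - initial) > 0 else "decreasing"
    return {"p_value": p, "step_trend": direction if p < alpha else "none"}
```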
Statistical methods for the determination of time-series trends
The nonparametric Mann-Kendall trend test (Mann 1945;Helsel and Hirsch 2002) was used to test for the significance of a Kendall's τ correlation of orthophosphate concentration and time in the 141 public supply wells that met the requirements described earlier for time-series trend evaluation. The Sen slope estimator was calculated to estimate the trend magnitude (Sen 1968;Hirsch et al. 1991) or rate of change in orthophosphate concentrations (mg/L/year as P). It should be noted that substituting the value of 0.002 for censored data, as described previously, could affect trend slope calculations, especially if the censored result occurred near the beginning or the end of the evaluated period. As with the step-trend evaluations, a time-series trend was considered detected at a significance level ≥ 95 percent (α = 0.05). In contrast to the step-trend evaluations, time-series trends were evaluated for individual wells, not for hydrogeologic zones. Based on the results of the trend test, each well was categorized as "decreasing," "increasing," or "no trend" in orthophosphate concentrations.
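Since the test is framed in the text as the significance of a Kendall's τ correlation between concentration and time, and the Sen slope is the median of pairwise slopes (scipy's Theil-Sen estimator), a per-well evaluation can be sketched as follows; the function name is illustrative, and the input is assumed to be the annual series prepared earlier.

```python
import numpy as np
from scipy.stats import kendalltau, theilslopes


def time_series_trend(years, conc, alpha=0.05):
    """Mann-Kendall trend test for one well, via Kendall's tau of
    concentration against time, with the Sen slope estimator for the
    trend magnitude (rate of change in mg/L/year as P).
    """
    years = np.asarray(years, dtype=float)
    conc = np.asarray(conc, dtype=float)
    tau, p = kendalltau(years, conc)
    slope = theilslopes(conc, years)[0]  # median of pairwise slopes
    if p >= alpha:
        category = "no trend"
    else:
        category = "increasing" if slope > 0 else "decreasing"
    return {"tau": tau, "p_value": p, "sen_slope": slope, "category": category}
```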
Evaluation of potential explanatory factors
Principal component analysis (PCA) was used to look for relationships among groundwater chemistry and attributes of the wells evaluated for this study with orthophosphate concentrations and changes in orthophosphate concentrations in groundwater from those wells. Principal components (PC) show which variables explain the variance in the data. PCs that explain less than 10 percent of the data are generally considered insignificant. Therefore, within these datasets, the first 3 principal components, PC1-PC3, were used to explain the variance in the datasets. Chemical PCA variables included, in addition to orthophosphate concentrations, pH, total dissolved solids, dissolved oxygen, nitrate, magnesium, boron, manganese, fluoride, sulfate, arsenic, uranium, and bicarbonate. Other ions and metals are available but were not included in the PCA because they autocorrelate with those used. For example, sulfate was excluded from the status analysis because it autocorrelates with total dissolved solids, and iron was excluded because it autocorrelates with manganese and there are fewer censored results for manganese than for iron. However, sulfate was included in the step trend analyses because fewer data were available for step trends and changes were observed between steps. Groundwater chemistry data which span the study period are available through the USGS National Water Information System (NWIS) database at https://waterdata.usgs.gov/nwis, by entering the USGS Station IDs provided in Online Resources 2, 3, and 4. Ancillary PCA variables included land use, depth of well below land surface, ranked age of groundwater in the well, septic tank density, and aridity index within a 500-m radius of each well.
Land use data were represented as percentages of the broad categories agricultural, natural, and urban, in discrete years spanning five decades (1974, 1982, 1992, 2002, and 2012) (Falcone 2015). The land use data from 2002 were used as a median date across the time of orthophosphate sampling and were used for the status well PCA. The 19 different "Coding 2012 Land Uses" described by Falcone (2015) were aggregated into the three broad categories used here as follows: codes 43-45 were categorized as agricultural; codes 11, 12, 41, 42, 50, and 60 as natural; and the other ten codes were categorized as urban. Septic tank density, aridity index, groundwater age, and well depth are static variables in this study. That is, their values were obtained for one moment in time and related to both status and trend data for PCA. Septic tank density was determined from the 1990 Census of Population and Housing (the most recent census that inquired whether a home was on a septic or a sewer system) and expressed as tanks/km² (U.S. Department of Commerce 1992). The aridity index is calculated as the average annual precipitation (PRISM Climate Group 2012) divided by the average annual evapotranspiration (Flint and Flint 2007), and values can range from 0.05 (hyper-arid) to greater than 1.00 (wet). Groundwater ages were static variables based principally on the activities of tritium (Plummer et al. 1993) and carbon-14 (Clark and Fritz 1997) measured during the status assessment, and were presented as the following categories: modern, modern or mixed, mixed, premodern or mixed, and premodern. PCA requires numerical values, so these 5 categories were assigned values of 1 through 5 in the order listed above, representing a youngest-to-oldest ranked gradient scale. Springs were included in the status and trends evaluations, and these were assigned a well depth of zero. Ancillary attributes of the wells evaluated in this study are provided in Online Resources 2, 3, and 4.
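The categorical-to-numeric conversions above are the kind of detail that is easy to get wrong when reproducing an analysis, so here is a small sketch of the two mappings exactly as the text describes them; the constant and function names are illustrative.

```python
# Aggregation of the 19 Falcone (2015) "Coding 2012" land-use codes into
# the three broad categories used for PCA (code lists from the text).
AGRICULTURAL_CODES = {43, 44, 45}
NATURAL_CODES = {11, 12, 41, 42, 50, 60}


def broad_land_use(code: int) -> str:
    if code in AGRICULTURAL_CODES:
        return "agricultural"
    if code in NATURAL_CODES:
        return "natural"
    return "urban"  # the remaining ten codes


# Youngest-to-oldest ranked gradient scale for the categorical
# groundwater-age classes, as required for numeric PCA input.
AGE_RANK = {
    "modern": 1,
    "modern or mixed": 2,
    "mixed": 3,
    "premodern or mixed": 4,
    "premodern": 5,
}
```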
In total, 20 variables were included in the PCA, 13 chemical variables (see above) and 7 ancillary variables (land use, as percent agricultural, urban, or undeveloped; aridity; depth of well; age rank; and septic tank density). Some data processing was needed before the variables were submitted for PCA. Censored data were processed in a manner similar to what was done before statistical trend evaluations. Censored results in datasets for PCA were set to a single value less than any detected concentration in the dataset. PCA is a nonparametric test, and this strategy ensured that censored results shared the lowest ranked value used for each constituent. For PCA trend evaluations (step trends and time series), initial values and rates of change were calculated for each parameter as separate variables. Rates of change were expressed as the average change in milligrams or micrograms per year. For PCA step-trend evaluations, this was simply calculated as

rate = (P_t − P_i) / years

where P_i is the parameter concentration in the initial sample, P_t is the parameter concentration in the trend sample, and years is the interval length in years. For PCA time-series evaluations, rates of change were calculated by the same Sen slope estimator method used to estimate the magnitude of orthophosphate time-series trends (Sen 1968; Hirsch et al. 1991).
Finally, some wells that were evaluated for status and trends in orthophosphate concentrations lacked some of the additional chemistry and attribute data, and blank entries are not permitted in PCA. For samples lacking field-measured pH or specific conductance, laboratory-measured values were substituted when available. In contrast, most of the alkalinity measurements for GAMA-PBP samples were made at the USGS National Water Quality Laboratory (NWQL); when these were lacking, field-measured alkalinity measurements were substituted for laboratory measurements when available. However, substituted results were not available for most missing data, and the decision as to how many parameters to include in each PCA was, by necessity, a compromise between including the maximum number of parameters versus including the maximum number of wells. Therefore, PCA was performed for datasets consisting of fewer wells than were evaluated for status and trends in orthophosphate concentrations. PCA was performed for 801 of the 1114 GAMA-PBP status wells that had orthophosphate results (Online Resource 2). PCA variables for the status evaluation consisted of the initial sample measurements for chemical variables, the 2002 land use values, and static values for the other ancillary variables. PCA was performed for 119 of the 144 step-trend wells evaluated in E1, 190 of the 227 wells evaluated for E2, and 139 of the 159 wells evaluated for E3. Chemical and ancillary variables for all PCA trend evaluations were expressed as the slope of their change during the relevant time periods. As with the status PCA evaluation, static values were used for the other ancillary variables for trend-evaluation PCA.
Arsenic, uranium, DO, and groundwater age were not included in PCA for time-series wells because these data were lacking for many of the time-series samples. Aridity was also not included in the time-series analyses because it did not show any predictive value.
Due to the highly variable ranges in concentrations and values among the parameters, all values were normalized using the method of Kramer (1998). Time-series PCA chemical variables were not normalized because all changes over time were analyzed by the slope of the change, and these were all within the same range. However, static explanatory variables (well depth, septic tank density, etc.) were normalized because of the large variations in these parameters. Principal component analyses were conducted using the OriginPro 2019b software version 9.6.5.169 (OriginLab®, Northampton, MA) add-in module. The add-in uses the same methods as the PCA in the OriginPro software but provides 3D graphical output.
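For readers who want to reproduce the general workflow without OriginPro, the sketch below uses scikit-learn as a stand-in. The details of the Kramer (1998) normalization are not given in the text, so ordinary z-score standardization is used here as a generic substitute; the function name is illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler


def summarize_pca(X, var_names, n_components=3):
    """Fit a PCA and report variance explained and top loadings per component.

    X: wells-by-variables matrix with censored results already set to a
    common low value and missing entries already resolved.
    """
    Xn = StandardScaler().fit_transform(np.asarray(X, dtype=float))
    pca = PCA(n_components=n_components).fit(Xn)
    for i, (ratio, loadings) in enumerate(
            zip(pca.explained_variance_ratio_, pca.components_), start=1):
        print(f"PC{i}: {100 * ratio:.1f} percent of variance")
        ranked = sorted(zip(var_names, loadings), key=lambda t: -abs(t[1]))
        for name, weight in ranked[:4]:  # four strongest loadings
            print(f"  {name}: {weight:+.2f}")
    return pca
```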
Quality-control samples in the form of blanks and replicates were collected during the three sampling intervals. The results of quality-control samples for orthophosphate and the chemical parameters submitted to PCA were evaluated to determine whether analytical variability or positive bias might have affected the results of trend evaluations or PCA. In addition, GAMA-PBP periodically evaluates field blank results to define study reporting levels that are greater than the laboratory reporting levels (Olsen et al. 2010; Davis et al. 2014). Results from quality-control samples indicate that neither variability nor positive bias had an appreciable effect on trend evaluations or PCA. Field blank samples collected for all three sampling intervals had detection frequencies that were less than 5 percent for all chemical parameters of interest. Replicate results for these parameters were acceptable, by the project criteria described by Kent (2018), with few exceptions.
Relative concentrations of orthophosphate
Statewide, orthophosphate was analyzed in samples from 1114 GAMA-PBP status wells (Table 1, Fig. 1, Online Resources 1 and 2). Concentrations in 169 of the initial samples collected from those status wells (15.2 percent) were greater than 0.100 mg/L as P, the level defined as "high" by the classification scheme used in this report. Orthophosphate concentrations in samples from 482 wells (43.3 percent) statewide were at levels defined as "moderate" in this report. Orthophosphate concentrations in samples from the remaining 463 status wells (41.6 percent) were at levels defined as "low" in this report (Table 1).
Summary statistics on the relative concentrations of orthophosphate by hydrogeologic zone are presented in Table 1. The zones with the greatest percentages of high relative concentrations were the Central Valley (21.5 percent) and the Coastal (20.4 percent) zones, followed by the Mountain zone (17.1 percent). It should be noted, however, that most of the high relative concentrations of orthophosphate in samples from Mountain zone wells were observed in the Cascade Range and Modoc Plateau. In general, moderate relative concentrations were found in groundwater from about half of the wells in each hydrogeologic zone (Table 1, Online Resource 2). The Central Valley zone was the exception. Moderate relative concentrations were found in groundwater from relatively few wells in the Central Valley zone, because the ecoregion-specific reference concentration of 0.077 mg/L as P (Fig. 1), the boundary for moderate relative concentrations, is so close to the high benchmark concentration of 0.100 mg/L as P.

Table 1 note: High orthophosphate relative concentration (RC) is defined as greater than 0.1 mg/L as P. Moderate RC is defined as between the ecoregion-specific reference concentration for total phosphorus in streams overlying the area where the well is located (USEPA 2000a, 2000b, 2001) and 0.100 mg/L as P, and low RC is defined as less than that ecoregion-specific reference concentration.
Cluster analysis
Cluster analysis was performed by first creating a semivariogram of the orthophosphate (OP) values, which indicated a plateau of variance at approximately 25 km. Therefore, a search distance of 25 km was used for identifying clusters. Figure 3 shows the high and low clusters when using Z-scores greater than 2 standard deviations (p value ≤ 0.046). Significant clusters of high OP values are seen in the northern half of the Central Valley, portions of the Cascade Range and Modoc Plateau, the Santa Cruz mountains, and northern Lake Tahoe, as well as in areas near the cities of Eureka, Redding, Chico, Napa/Sonoma, and Santa Barbara. Conversely, clusters of low OP values are seen in the cities of Madera and Chowchilla in the southern half of the Central Valley, as well as portions of the Owens and Coachella Valleys.

The first 3 principal components (PC1-3) of 801 status samples explain just over 41 percent of the variance in the data (Fig. 4, Online Resource 5), with PC1 (17.3 percent), PC2 (12.5 percent), and PC3 (11.6 percent) being the only components explaining more than 10 percent of the variance with eigenvalues greater than 2. Explaining 41 percent of the variation is somewhat low for the first three PCs, indicating that the dataset is highly variable and that correlations are not well defined by the variables used. However, the loadings shown in Fig. 4 appear to indicate that orthophosphate is most associated with arsenic, boron, and fluoride. No other explanatory factor grouped with orthophosphate, although the DO loading was somewhat antithetical to orthophosphate, suggesting that when DO is low, orthophosphate concentrations are higher. The loading arrows for septic tank density around a well grouped with urban land use and with nitrate concentrations and were also antithetical to orthophosphate (Fig. 4). Orthophosphate concentrations were related to groundwater redox state. McMahon and Chapelle (2008) defined anoxic groundwater as having a DO concentration less than 0.5 mg/L. Using this criterion, groundwater in 23 percent of the status wells, statewide, was anoxic (Online Resource 2). Orthophosphate concentrations were significantly greater in anoxic groundwater samples compared with oxic samples, statewide, as well as in the Central Valley, Coastal, and Southern California hydrogeologic zones (Fig. 5). There was no significant difference in orthophosphate concentrations by redox state for the Mountain or Desert hydrogeologic zones.
Trends in orthophosphate concentrations in California groundwater
Temporal trends in orthophosphate concentrations in California groundwater were evaluated by step-trend and time-series methods. The step-trend method determined trends comparing results among 3 sampling intervals (initial, triennial, and decadal sampling). The time-series method determined trends in groundwater concentrations in individual wells using multiple samples spanning a minimum of 8 years.
Step-trend evaluation results

For E1 (initial compared to triennial sampling results), 144 wells were evaluated for step trends in orthophosphate concentration (Fig. 2, Table 2, Online Resources 1 and 3). Increasing step trends were observed statewide and in the Central Valley, Southern California, and Desert hydrogeologic zones (Table 2, Fig. 6a). No step trend was observed in the Mountain or Coastal zones. These results are similar to those found by Kent and Landon (2016). For E1, the mean rate of increase in orthophosphate concentrations statewide for wells where the changes exceeded the threshold differences was 3.47 × 10⁻⁵ mg/L/year (Table 2). Principal component analysis was conducted on the entire trend dataset for each step-trend evaluation because there are not enough wells in each hydrologic zone to treat each zone separately. The first 3 principal components (PC1-3) in the E1 PCA evaluation explained 42 percent of the variation (Online Resource 5). PC1 explained 20.6 percent of the variation, and PC2 and PC3 explained about 10 percent each. Although well depth, septic tank density, and groundwater age did not vary with time, they were included in all the step analysis PCA to see if a static characteristic might explain a trend in a variable that did vary. Loading scores of PC1 showed that a change in orthophosphate was directly associated with changes in pH, manganese, DO, and in the amount of natural land use near the well. Groundwater age and depth (static variables) of the wells were also associated with this grouping (Fig. 7a). That is, older and deeper groundwater was weakly associated with increasing orthophosphate concentrations. The loading scores for most of these variables were relatively low (< 0.35). In addition, changes occurred in both directions. For example, orthophosphate increased in concentration in 51 wells and decreased in 67 (one well showed no change), and manganese increased in 68 wells and decreased in 50 wells (one different well showed no change). The same wells increased in both orthophosphate and manganese in 26 wells.

Fig. 6 Scatterplots of orthophosphate concentrations, by hydrogeologic zone: a initial vs. triennial sampling (E1); b initial vs. decadal sampling (E2); c triennial vs. decadal sampling (E3).

Fig. 7 3-dimensional plots of findings from principal component analysis: a E1 (initial vs. triennial sampling); b E2 (initial vs. decadal sampling); c E3 (triennial vs. decadal sampling). All variables were normalized. Chemical and land use variables are expressed as the slope of their change. Well depth, aridity, septic tank density, and age rank are static variables.
Step-trend patterns for E2 were the same as for E1; increasing step trends were observed statewide and in the Central Valley, Southern California, and Desert hydrogeologic zones (Table 3, Fig. 6b). Again, no step trend was observed in the Mountain or Coastal zones. For E2, the mean rate of increase in orthophosphate concentrations statewide for wells where the changes exceeded the threshold differences was 8.74 × 10⁻⁶ mg/L/year. (Table note: only wells with orthophosphate concentrations that changed by more than the threshold difference were considered "increasing" or "decreasing.") The first 3 principal components (PC1-3) in the E2 PCA evaluation explained 45.2 percent of the variation. Most of the variation was explained by PC1 (25.3 percent), and PC2 and PC3 both explained less than 12 percent of the variation (Online Resource 5). However, orthophosphate was not significant and had low loading scores in the first three principal components (Fig. 7b).
For E3 (triennial compared to decadal sampling results), 159 wells were evaluated for step trends (Fig. 2, Table 4, Online Resources 1 and 3). In contrast to the findings of the first two step-trend evaluations, no step trends in orthophosphate concentrations were observed for E3 (Table 4, Fig. 6c). The absence of step trends in E3 suggests that the orthophosphate increases observed in E1 and E2 occurred mostly between initial and triennial sampling (2004 to 2013).
The first 3 principal components in the E3 PCA evaluation explained 37.6 percent of the variation, which is less than in the E1 and E2 analyses (Fig. 7c, Online Resource 5). Although orthophosphate has a high loading score for PC3, this principal component explains slightly less than 10 percent of the data. The loading groupings appear to be similar to E1, with groundwater age, well depth, and DO grouping with orthophosphate. However, as mentioned above, PC3 is not particularly significant.
Time-series trend evaluation results
Time-series evaluations were performed for 141 wells with orthophosphate data meeting the requirements described in the "Methods" section under "Trend well selection for time-series evaluations" (Table 5, Online Resources 1 and 4). Wells in the NWIS database (U.S. Geological Survey 2018) that met these requirements are unevenly distributed in the state and occur in five distinct clusters as follows: the San Joaquin and Tulare Basins (Dubrovsky et al. 1998), the Sacramento River Basin (Domagalski et al. 2001), the Santa Ana Basin (Belitz et al. 2004), the desert region (Dawson and Belitz 2012), and the central coast (Burton et al. 2013; Davis and Kulongoski 2016) (Fig. 8). Time-series evaluations were performed for at least some wells in each of the hydrogeologic zones except for the Mountain zone (Table 5). Some of the wells used in the time-series evaluations are nested, producing groundwater from various depths in the same spatial location.
Time-series evaluations found significant increasing trends in orthophosphate concentrations for groundwater from 35 percent of the wells (Table 5). Decreasing trends were found for groundwater from 6 percent of the wells. Wells with increasing trends outnumbered wells with decreasing trends in all hydrogeologic zones where time-series evaluations were performed (Fig. 8, Table 5). However, it should be noted that the mean observed rate at which orthophosphate decreases was about twice the mean observed rate at which it increases (Table 5). It is also interesting to note that rates of change (both increasing and decreasing) found by time-series evaluations (Table 5) were between two and three orders of magnitude greater than rates of change found by step-trend evaluations (Tables 2, 3, and 4). Finally, in most of the wells showing statistically significant increases (30 out of 50), the highest orthophosphate concentrations were observed between 2008 and 2011. This period coincides with the interval between initial and triennial sampling during which most of the step-trend increases apparently occurred. Together, these observations suggest that the increasing trend in orthophosphate concentrations occurred early in the study period.

The first 3 principal components of the time-series data explain 46.6 percent of the data (Fig. 9, Online Resource 5). However, orthophosphate loading is highest for PC3, which explains only about 10 percent of the data. Orthophosphate also does not group with any variable other than pH, fluoride, and natural land use. Finding direct relationships between changes in orthophosphate and changes in pH and natural land use would be counterintuitive. The solubility of inorganic phosphorus species in water is lower at higher values of pH (Diaz et al. 1994), and increases in natural land use around a well would not be expected to be associated with increases in orthophosphate concentrations in groundwater from the well. Fluoride may be introduced into the environment along with phosphate by anthropogenic activities, such as the application of phosphate-containing fertilizers (Saxena and Ahmed 2003), although it is unlikely this would cause a correlation on a statewide basis. However, the loadings are small, and the relation between orthophosphate and each of these parameters is weak. It is possible that there are not enough data to fully evaluate time-series trends with PCA. It is also possible that because orthophosphate trends increase and decrease, the PCA may not be able to correlate these changes to other explanatory factors.
Discussion
The present study found that orthophosphate concentrations in California groundwater used for public supply are at mostly low or moderate concentrations relative to reference conditions for streams overlying the aquifers and the goal expressed by the USEPA for surface water (0.100 mg/L as P) to prevent nutrient enrichment. However, orthophosphate concentrations are high (above that goal) in about 15 percent of the groundwater, statewide, and in more than 20 percent of the groundwater in the Central Valley and Coastal zones of the state.
Cluster analysis indicated that relatively high groundwater orthophosphate concentrations are found specifically in the northern half of the Central Valley, portions of the Cascade Range and Modoc Plateau, Santa Cruz mountains, as well as in areas near the cities of Eureka, Redding, Chico, Napa and Sonoma, Santa Barbara, and Truckee. Relatively low concentrations are found in groundwater near the cities of Madera and Chowchilla in the southern half of the Central Valley, as well as in portions of the Owens and Coachella Valleys.
Present-day groundwater orthophosphate concentrations in the Central Valley hydrogeologic zone may be linked to temporal trends in orthophosphate concentrations found for overlying streams in the recent past. Relatively high groundwater orthophosphate concentrations are prevalent in the northern area of the Central Valley. This is an area where significant upward trends for flow-adjusted orthophosphate concentrations were found for several streams during variable time periods ending in 2004 (Kratzer et al. 2011). In contrast, relatively low groundwater orthophosphate occurs in areas where Kratzer et al. (2011) found surface-water orthophosphate to be decreasing, while the relatively high groundwater concentrations occur where they found surface-water orthophosphate to be increasing before the start of GAMA-PBP. Trend evaluation results suggest that orthophosphate concentrations have increased in approximately one-third of California groundwater used for public supply. Step-trend evaluations comparing grouped-well results from initial sampling with results from triennial sampling were consistent with evaluations comparing results from initial sampling with results from decadal sampling. Those evaluations showed increasing step trends observed statewide and in the Central Valley, Southern California, and Desert hydrogeologic zones. There were no statistically significant step trends for the evaluations comparing results from triennial sampling with results from decadal sampling. This suggests that the increases in orthophosphate concentrations in California groundwater occurred mostly between initial sampling (2004 to 2011) and triennial sampling (2007 to 2013).
Time-series evaluations and plots show that the timing of increases and the greatest concentrations of orthophosphate are in approximate agreement with step-trend observations. For most wells, the highest concentrations were observed between 2008 and 2011. Also, it appears that groundwater orthophosphate concentrations are still increasing (have not peaked or plateaued) in only 11 of those 50 wells. It should be reiterated that individual wells that met the requirements for time-series evaluation were poorly distributed in California. Nevertheless, time-series evaluation findings mostly confirmed the step-trend evaluation findings that increases in orthophosphate concentrations are more prevalent than decreases, statewide, and for the Central Valley, Southern California, and Desert hydrogeologic zones. In addition, however, and in contrast to the step-trend findings, the time-series evaluation also found many wells showing orthophosphate increases in the Coastal hydrogeologic zone.
It is not clear why the timing of trends in orthophosphate concentrations would be similar throughout California in groundwater with widely varying age distributions and hydrogeologic settings. The state was in a prolonged drought during the study period (Stokstad 2020), but the pattern of relatively wet versus relatively dry years in California between 2000 and 2016 does not explain the timing of the orthophosphate trends.
The relatively high concentrations of orthophosphate prevalent in a few areas of the state are related to low redox conditions, as shown by PCA and correlation analysis. A possible explanation for this is that ferric oxides adsorb phosphate, and when these oxides are reduced, phosphate is released (Williams et al. 1976; Drever 1997). The release can be rapid with changing chemical conditions in the aquifer because the phosphate-containing complexes are often adsorbed to sediment surfaces rather than being incorporated in the aquifer material (Kent et al. 2007; Holman et al. 2008). Orthophosphate concentrations were significantly greater in anoxic groundwater samples compared with oxic samples, statewide, as well as in the Central Valley, Coastal, and Southern California hydrogeologic zones. There was no significant difference in orthophosphate concentrations by redox state for the Mountain or Desert hydrogeologic zones.

Fig. 9 3-dimensional plot of findings from principal component analysis for the time-series evaluation. Variables for time-series PCA were not normalized. Chemical and land use variables are expressed as the slope of their change. Well depth, septic tank density, and age rank are static variables.
Explanatory variables other than redox conditions (land use, depth of well, septic tank density, and groundwater age) do not appear to be related to orthophosphate concentrations. PCA shows that orthophosphate loadings are generally low and explain only about 10 percent of the data. Relatively high concentrations of orthophosphate in areas such as the Cascades and Modoc Plateau may be related to the volcanic aquifer materials in the area (Felitsyn 2002; Porder and Ramachandran 2013). However, there is insufficient detailed geologic information for each well to determine if geology is an explanatory factor. Orthophosphate has low mobility because it readily sorbs to aquifer materials (Holman et al. 2008). This low mobility of the orthophosphate ion contrasts with the high mobility of the nitrate ion when it is not being taken up by vegetation (Drever 1997). This may explain why PCA performed for the status evaluation of this study showed that concentrations of nitrate were antithetical to concentrations of orthophosphate.
Groundwater pH has been shown to influence orthophosphate concentrations (Kent et al. 2007; Domagalski and Johnson 2011), because anion sorption is greater at lower pH (Stumm 1992). However, pH was not an explanatory factor for orthophosphate concentrations or trends in the present study. This is likely because the pH levels observed in the samples collected for this study (5.0 to 9.8, with a median of 7.4) were generally too high to have an effect like the ones observed in the Kent et al. (2007) and Domagalski and Johnson (2011) studies, which included pH values between 4.6 and slightly above 8.
Given the finding that anoxic groundwater is more likely than oxic groundwater to have higher concentrations of orthophosphate, it might be expected that decreases in DO would be associated with increases in orthophosphate. This study found little evidence of such an inverse correlation between the continuous variables of change in DO and orthophosphate. Confounding expectations, the E1 and E3 PCA evaluations even found a weak direct association between these variables. A possible explanation for this is that release of orthophosphate occurs only when DO concentrations decrease to an anoxic threshold. Manganese may also be released at or below this DO threshold. The strongest direct association found for orthophosphate change by the E1 PCA was with manganese change. The presence of dissolved manganese in groundwater indicates reducing (anoxic) groundwater conditions (Rosecrans et al. 2018).
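The threshold interpretation can be illustrated with a small sketch that classifies samples by redox state and compares orthophosphate between classes. The cutoffs used below (dissolved oxygen below 0.5 mg/L, or dissolved manganese at or above 0.05 mg/L, indicating anoxic/reducing conditions) are common literature values assumed here for illustration, not the study's exact classification rules, and the data are invented.

// Classify groundwater samples by redox state and compare orthophosphate.
data class GwSample(val doMgL: Double, val mnMgL: Double, val opMgL: Double)

fun redoxState(s: GwSample): String =
    if (s.doMgL < 0.5 || s.mnMgL >= 0.05) "anoxic/reducing" else "oxic"

fun main() {
    val samples = listOf(
        GwSample(doMgL = 0.2, mnMgL = 0.12, opMgL = 0.090),
        GwSample(doMgL = 0.4, mnMgL = 0.08, opMgL = 0.060),
        GwSample(doMgL = 6.5, mnMgL = 0.00, opMgL = 0.012),
        GwSample(doMgL = 4.1, mnMgL = 0.01, opMgL = 0.020),
    )
    samples.groupBy(::redoxState).forEach { (state, group) ->
        val mean = group.map { it.opMgL }.average()
        println("$state: mean orthophosphate = $mean mg/L (n=${group.size})")
    }
}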
Principal component analysis of step trends was inconclusive and could not relate available explanatory variables to the binary correlations. This is likely due to several reasons: (1) the changes in orthophosphate concentrations are small compared with other variables, and even with normalization of the data, the small changes are difficult to assess; (2) changes in orthophosphate and other variables are not unidirectional, so complex interactions reduce the statistical significance of explanatory variables; and (3) the number of trend wells is small for each region, making it difficult to find statistically significant differences by region using multidimensional analysis.
Conclusions
This study found that orthophosphate concentrations are low in 42 percent of the groundwater used for public supply in California, moderate in 43 percent, and high in 15 percent of this groundwater relative to reference conditions and a goal expressed by the USEPA for streams overlying the aquifers. It should be noted that the California State Water Resources Control Board is currently working on nutrient criteria in water based on biostimulatory response (https://www.waterboards.ca.gov/water_issues/programs/biostimulatory_substances_biointegrity/). The new criteria will likely replace those used here as ecological thresholds in future research. Water managers may use information from the present study to prioritize watersheds for the newly established biostimulatory response monitoring.
The findings also suggest that orthophosphate concentrations increased in about one-third of California groundwater used for public supply during the period from about 2004 to 2011. However, later in the study period, increases were generally not observed. Advancements in wastewater treatment, improvements in agricultural best management practices, and the decline of phosphate detergents in the late twentieth century may have begun to collectively lower the concentrations of orthophosphate in the surface water sources recharging California aquifers.
The baseline conditions and trends described herein for orthophosphate concentrations in California groundwater used for public supply may help to determine whether groundwater discharges to surface water are contributing to eutrophication in surface water bodies in the state. Such information could be more important than ever before due to a recent ruling by the US Supreme Court holding that, under certain circumstances, such discharges may need to be permitted under the Clean Water Act (County of Maui, Hawaii v. Hawaii Wildlife Fund, 2020). Currently (2019) GAMA-PBP is resampling approximately 20 percent of trend wells every 5 years. This sampling strategy will better define temporal trends in California groundwater quality as data accumulate over time.
Acknowledgments We thank the field crews for the collection of samples and the well owners who graciously allowed the USGS to collect samples from their wells.
Funding information This study was supported by funds from the California State Water Resources Control Board.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Boosting Phonological Fluency Following Leftward Prismatic Adaptation: A New Neuromodulation Protocol for Neurological Deficits?
Prism adaptation (PA) has been recently shown to modulate a brain frontal-parieto-temporal network, with an increase of excitation of this network in the hemisphere ipsilateral to the side of prismatic deviation. This effect raises the hypothesis that left prismatic adaptation, modulating the excitability of frontal areas of the left hemisphere, could modulate subjects' performance on linguistic tasks that map onto those areas. To test this hypothesis, sixty-one healthy subjects participated in experiments in which leftward, rightward or no-PA were applied before the execution of a phonological fluency task, i.e. a task with strict left hemispheric lateralization mapping onto frontal areas. Leftward-PA significantly increased the number of words produced compared with the pre-PA (p = .0017), R-PA (p = .00013) and no-PA (p = .0005) sessions. In contrast, rightward-PA did not significantly modulate phonological fluency compared with the pre-PA (p = .92) and no-PA (p = .99) sessions. The effect of leftward PA on phonological fluency correlated with the magnitude of the spatial aftereffect, i.e. the spatial bias towards the side of space opposite to the prismatic deviation following prism removal (r = .51; p = .04). The present findings document for the first time modulation of a language ability following prismatic adaptation. The results could have a huge clinical impact in neurological populations, opening new strategies of intervention for language and executive dysfunctions.
Introduction
Prism adaptation (PA) is a form of visuomotor adaptation to displaced vision (for review see 1) and it has been shown to modulate a wide range of behaviors (for review see 2), in addition to the well-known application in patients with right hemispheric lesion and spatial neglect (for a review see 3).
The majority of observations indicate that prism adaptation acts both on space representation and on other features interacting with space representation. For example, in healthy subjects leftward PA induces a sort of left mini-neglect, counteracting the physiological leftward bias called pseudoneglect 4,5. PA aftereffects have also been reported in visual search 6, endogenous and/or exogenous orienting of attention 7, spatial/temporal representation 8,9,10,11,12,13,14, visually guided actions 15, auditory representation 16, chronic pain 17, constructional disorders 18 and reward-based learning 19.
Recent research suggested that visuomotor adaptation elicited by PA can also induce modulation of frontal areas ipsilateral to the prismatic deviation, i.e. contralateral to the induced aftereffect. Magnani et al. 11, in a study using paired-pulse transcranial magnetic stimulation (TMS) in healthy subjects, first reported modulation of excitatory brain circuits on the motor cortex specific to the direction of the visual shift induced by prismatic lenses: left deviation increased excitation of the left motor cortex, while right deviation increased excitation of the right motor cortex, as tested with the amplitude of motor evoked potentials.
Bracco et al. 20 reproduced these findings in a study combining TMS, transcranial direct current stimulation (tDCS) and PA in healthy subjects. Prismatic adaptation increased excitability of the motor cortex ipsilateral to the deviation, as tested with TMS, in a manner similar to anodal tDCS. The combination of the two excitatory interventions (i.e. PA and anodal tDCS) induced homeostatic plasticity effects, reducing motor cortical excitability. The same research group 21 showed that prismatic deviation induces an increase of the power of beta oscillations in the frontal areas of the hemisphere ipsilateral to the optical deviation during motor preparation but not visual attention tasks.
These findings suggest that prismatic adaptation can modulate the excitability of a brain network ipsilateral to the deviation, with effects that could impact the cognitive functions subserved by that network. This view suggests that left PA, modulating the excitability of frontal areas of the left hemisphere, could modulate subjects' performance on linguistic tasks that map onto those areas.
In the present study, we tested this assumption by investigating the power of PA in modulating phonemic fluency tasks. We chose to investigate phonological fluency because it shows a strong left hemispheric lateralization in frontal areas 22 and it has been studied with other neuromodulatory techniques 23.
Phonological fluency tasks require search, access, selection, retrieval and pronunciation of as many words as possible in a restricted time, based on a predefined criterion of a target letter. Therefore, fluency tasks are included in many neuropsychological batteries in that they probe cognitive functions at the interface between language and executive processing. As such, phonological fluency can be impaired in a variety of clinical populations, including aphasia and dementia 24,25,26.
We assumed that adaptation to a leftward optical deviation should increase subjects' performance compared to both rightward optical deviation and no adaptation conditions.
Subjects
Sixty-one healthy subjects (10 males, mean age: 23.1 ± 2.4 years) volunteered to participate in this experiment. All participants were native Italian speakers, right-handed, had normal or corrected-to-normal vision and reported no history of neurological or psychiatric disease.
Thirty-one subjects were randomly allocated to the experimental group (4 males, mean age: 23.48 ± 2.32 years). Participants were assigned to a leftward prismatic adaptation group (L-PA; n = 16; mean age = 23.43 ± 1.86 years) or a rightward prismatic adaptation group (R-PA; n = 15; mean age = 23.53 ± 2.79 years). The L-PA group wore 20° left-shifting prismatic lenses and the R-PA group wore 20° right-shifting prismatic lenses. Participants' handedness was assessed using the Edinburgh Handedness Inventory 74.
In the control group, there were 30 right-handed healthy participants (4 males, mean age = 24.8 ± 2.34 years).
All subjects gave written informed consent for participation in the study, which was approved by the ethical committee of the University of Palermo (approval n. 25/2020). The experiments were conducted in accordance with the principles of the Declaration of Helsinki.
Neuropsychological assessment
The experimental group underwent a neuropsychological evaluation, which included the Digit Span forward and backward 75.
Prismatic adaptation procedure
The procedure for prismatic adaptation was similar to that adopted in previous studies 9,12,13.
For PA, subjects sat in front of a box (height = 30 cm, depth = 34 cm at the center and 18 cm at the periphery, width = 72 cm) open on two sides: the side facing the subjects and the opposite side, facing the experimenter. The experimenter placed a pen as a visual target at the distal edge of the top surface of the box, in one of three randomly determined positions: a central position (0°), 21° to the left of the center, and 21° to the right of the center. Subjects were asked to keep their right hand at the level of the sternum and then to point toward the visual target using the right index finger; the experimenter recorded the end position of the subject's pointing direction. The pointing task was performed in four experimental conditions: pre-exposure, exposure (early-exposure, late-exposure) and post-exposure.
In the early-exposure (first 9 trials while wearing prisms), late-exposure (last 9 trials while wearing prisms), and exposure conditions, the subjects performed the task with prismatic lenses inducing a rightward or leftward 20° shift. The pointing procedure was visible, i.e. the subjects could see the trajectory of the arm movement.
In the post-exposure condition, performed immediately after prism removal, the subjects were required to look at the target and to make their pointing movements with their eyes closed, as in the pre-exposure condition. Thus, in this condition the trajectory of the arm movement was invisible to the subject.
The exposure condition comprised 90 trials, while each of the other conditions comprised 30 trials. All pointing trials were equally and randomly distributed toward the three marked positions of the panel.
Phonemic fluency tasks

Two phonemic fluency tasks, standardized for the Italian population, were used 80,81. Both tasks require participants to generate as many words as possible starting with a given letter within 1 min, excluding proper nouns and words differing only in their suffix. In one of the two phonemic fluency tasks, the 3 letters used were "F", "A", "S". In the second task, the 3 letters used were "F", "P", "L".
Experimental procedure
Both the L-PA and the R-PA groups and the control group participated in two testing sessions over two separate days, with an interval of seven days between sessions (Fig. 1).
In the first testing session, the two experimental groups were given the cognitive baseline tasks and the phonemic fluency task (FAS or FPL).
In the second testing session, the two experimental groups were first administered the PA procedure (L-PA or R-PA), immediately followed by one of the two phonemic fluency tasks.
The control group was administered one of the two phonemic fluency tasks (FAS or FPL) in the first testing session. In the second testing session, the control group was administered the other fluency task. The order of administration of the two phonemic fluency tasks was counterbalanced across the control group and randomly assigned.
Statistical analysis
Prismatic adaptation

Error reduction. To verify whether subjects adapted to the prismatic deviation, showing an error reduction following rightward or leftward deviation, we compared their displacement measure in the pre-exposure (visible pointing) condition with that of the first three (early-exposure condition) and the last three trials (late-exposure condition) of the exposure condition (more details on this procedure can be found in 82). A difference between the pre-exposure condition and the early-exposure condition is expected due to the rightward or leftward displacement induced by prism exposure. On the other hand, no difference is expected between the pre-exposure and the late-exposure condition under the assumption of an almost perfect error reduction. The dependent measure in this analysis was the mean displacement (expressed as degrees of visual angle) of subjects' visible pointing. An ANOVA was conducted with Group (L-PA; R-PA) as a between-groups variable and Condition (pre-exposure, early-exposure and late-exposure) as the within-subjects variable. Post hoc comparisons were conducted using Tukey's test.
Aftereffect. We compared the subjects' displacement in the invisible pointing in the pre-exposure and post-exposure conditions. If, after prism exposure, subjects point in the direction opposite to the displacement induced by the prisms, a difference is expected between the pre- and post-exposure conditions (aftereffect). The dependent measure was the mean displacement (expressed in degrees of visual angle) of the subjects' invisible pointing responses in the pre-exposure condition and the post-exposure condition. An ANOVA was conducted with Group (L-PA; R-PA) as a between-groups variable and Condition (pre-exposure, post-exposure) as a within-subjects variable. Post hoc comparisons were conducted using Tukey's test.
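For clarity, the sketch below computes the two measures just described, error reduction and aftereffect, from signed pointing displacements. The data layout and numbers are illustrative assumptions, not the study's records.

// Signed horizontal pointing errors in degrees of visual angle
// (negative = leftward); values are illustrative.
data class Pointing(
    val earlyExposure: List<Double>, val lateExposure: List<Double>,
    val preInvisible: List<Double>, val postInvisible: List<Double>,
)

fun mean(xs: List<Double>) = xs.sum() / xs.size

// Error reduction: early-exposure errors should be large, late-exposure
// errors should return toward zero as subjects adapt.
fun errorReduction(p: Pointing) = mean(p.earlyExposure) - mean(p.lateExposure)

// Aftereffect: shift of invisible pointing after prism removal relative to
// baseline; opposite in sign to the optical deviation.
fun afterEffect(p: Pointing) = mean(p.postInvisible) - mean(p.preInvisible)

fun main() {
    val subject = Pointing(
        earlyExposure = listOf(8.9, 7.5, 6.8),      // rightward prisms
        lateExposure = listOf(0.6, 0.3, 0.2),
        preInvisible = listOf(0.1, 0.0, -0.2),
        postInvisible = listOf(-4.9, -5.2, -4.6),   // leftward aftereffect
    )
    println("error reduction = ${errorReduction(subject)} deg")
    println("aftereffect     = ${afterEffect(subject)} deg")
}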
Phonemic fluency task
Behavioral data were analyzed with a repeated-measures ANOVA, with Condition (L-PA, R-PA, No-PA) as a between-subjects factor and Session (pre-PA, post-PA) as a within-subjects factor. Post-hoc analyses were conducted with Tukey's test.
Results
Demographic and cognitive data of the experimental groups are reported in Table 1. The presence of the aftereffect was confirmed by a significant difference between blind pre-exposure and blind post-exposure pointing in both the L-PA (p = .0001) and the R-PA (p = .0001) groups (Fig. 2).
In the first session (pre-PA), we found no significant difference between the phonemic fluency performance of the no-PA group and the two experimental groups (L-PA and R-PA). There was no significant difference between no-PA and L-PA (p = 1.00) or R-PA (p = .93).
In sum, we found that adaptation to a leftward optical deviation increased subjects' performance in the phonemic fluency task as compared to both rightward optical deviation and no adaptation conditions. Additionally, we investigated whether PA affected the quality of the words (nouns or verbs) produced in the phonemic fluency task: leftward optical deviation increased the number of nouns but not of verbs produced.
Post-hoc analyses showed that L-PA (p = .02) but not R-PA (p = .57) increased the number of syllables produced.
These findings indicate that leftward PA increases not only the absolute number of words produced but also the production of words formed by a greater number of syllables.
Furthermore, we conducted Pearson correlation analyses to investigate the relationship between the effect of PA and the phonemic fluency tasks. Differences between pre-PA and post-PA scores in the phonemic fluency tasks (Δ phonemic fluency) were correlated with the index of prismatic adaptation (PA aftereffect). A positive correlation was found between Δ phonemic fluency and the index of prismatic adaptation in the L-PA group (r = .51; p = .04) but not in the R-PA group (r = .46; p = .08).
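The correlation just reported is a plain Pearson coefficient between the fluency change and the aftereffect magnitude; a minimal sketch follows, with invented per-subject values standing in for the study's data.

import kotlin.math.sqrt

// Pearson correlation between per-subject fluency change and aftereffect.
fun pearson(x: List<Double>, y: List<Double>): Double {
    require(x.size == y.size && x.size > 1)
    val mx = x.average()
    val my = y.average()
    val cov = x.indices.sumOf { (x[it] - mx) * (y[it] - my) }
    val sx = sqrt(x.sumOf { (it - mx) * (it - mx) })
    val sy = sqrt(y.sumOf { (it - my) * (it - my) })
    return cov / (sx * sy)
}

fun main() {
    val deltaFluency = listOf(6.0, 2.0, 9.0, 4.0, 7.0, 3.0)  // post - pre words
    val afterEffect = listOf(4.1, 2.0, 5.5, 3.0, 4.9, 2.6)   // deg of visual angle
    println("r = ${pearson(deltaFluency, afterEffect)}")
}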
Discussion
The main results of the present study show that leftward optical deviation induced by prismatic lenses is associated with improved phonemic fluency performance in healthy subjects when compared with either baseline (i.e. no optical deviation) or rightward optical deviation conditions. Improved phonemic fluency was evident both in terms of the number of words produced and in the number of syllables per word. The increase in phonemic fluency following leftward PA was mainly evident for the grammatical category of nouns.
These results cannot be accounted for by practice effects. Parallel forms of the task were used in the baseline and post-PA sessions; moreover, the control study in the no-PA group failed to document significant increases in phonological fluency performance across repeated sessions.
Adaptation to both left and right PA induced sensorimotor aftereffects. The R-PA and L-PA groups did not differ in their baseline performance, but only the L-PA group showed a significant effect on phonological fluency.
To our knowledge, this is the first study documenting facilitation of a linguistic task by prismatic adaptation, i.e. a procedure traditionally associated with modulation of spatial cognition or cognitive functions linked to spatial components.
According to recent findings suggesting that prismatic adaptation increases excitability of frontal and parietal areas ipsilateral to the deviation side 11,20,21, we may interpret the present results as reflecting a boosting of brain excitability of left hemispheric brain regions that are also associated with phonological fluency tasks. In this field, neuroimaging and neuropsychological studies show that phonological fluency recruits a left lateralized network including the inferior frontal gyrus, motor cortices, anterior cingulate, temporal regions, superior parietal cortex, hippocampus, thalamus and cerebellum 27,28,29,30,31,32. All these areas are part of a dorsal language network 33 encompassing the left fronto-temporal arcuate fasciculus 34, a finding consistent with the articulatory component of the phonological fluency tasks. On the other hand, the motor articulatory component in linguistic tasks is associated with recruitment of motor cortical circuits 35. The increase in the number of syllables produced per word is also consistent with the recruitment of frontal motor areas 36.
The literature shows modulation of other brain regions, in addition to frontal ones, by prismatic adaptation. Neuroimaging and neurophysiological studies support the idea that PA affects the visual attention and sensorimotor networks, including the parietal cortex and the cerebellum 37,38,39,40. The activation of the parietal cortex and the cerebellum has been related to error correction and realignment during prismatic adaptation. The anterior cingulate cortex is also activated in an early error-correcting phase 39. Interestingly, the parietal cortex and cerebellum are also activated during phonological fluency tasks 41,42.
It is therefore possible that phonological fluency modulation is also controlled by the parieto-cerebellar network activated during spatial realignment.
The grammatical class effect encountered in the phonological boosting following leftward PA, with greater production of nouns than verbs, could depend on different factors. A neuroanatomical account posits that verb processing is mainly supported by the left frontal cortex while noun processing is supported by left temporal regions 43,44,45,46. On the other hand, other evidence suggests that left frontal, parietal and temporal areas are similarly correlated with noun and verb processing 47,48,49,50,51,52,53. Since PA modulates a network encompassing both frontal and parieto-temporal areas, the grammatical class effect encountered in the present study could reflect linguistic rather than strictly anatomical factors. In particular, it has been reported that verbs are semantically more complex, have lower imageability and have fewer perceptual features than nouns 54,55,56. Also, verbs are morphologically more complex 57,58. These factors could partly explain the greater facilitation of noun production following modulation of a left hemispheric network by left PA. Moreover, while PA increases beta power in motor cortices ipsilateral to prismatic deviation 21, verb retrieval is associated with beta suppression in motor areas 59.
Modulation of a phonological fluency task by leftward prismatic adaptation fits the general idea that cognition is grounded in sensorimotor interactions. Accordingly, significant changes of brain activation in regions related to sensorimotor learning following PA have been correlated with prism aftereffects beyond sensorimotor learning, extending to higher cognitive functions.
A recent rTMS study 23 showed that low-frequency rTMS of the right inferior frontal gyrus increased subjects' performance in phonological fluency tasks. The results were interpreted as reflecting plastic neural changes in the left lateral frontal cortex induced by low-frequency rTMS, suppressing interhemispheric inhibitory transcallosal interactions. Interestingly, an electrophysiological study reported that leftward PA increases transcallosal interhemispheric inhibition from the left to the right primary motor cortex 60. The results of the present study may, therefore, be associated with both an increase of left frontal excitability and modulation of transcallosal inhibition, with a reduction of activity of homologous regions of the right hemisphere, as in the reported rTMS study 23.
Previous findings reported that rightward prismatic adaptation does not produce significant cognitive changes in healthy subjects 61,62,63,64,65,66,67,7,68 (but see 69,21 for neurophysiological changes of brain activity in healthy adults). The authors interpreted this asymmetry of prismatic adaptation effects as related to the right hemisphere dominance in visual attention networks 70,71. This dominance would explain the phenomenon of leftward attentional bias called pseudoneglect. Indeed, leftward PA can counteract pseudoneglect, while rightward PA would be less efficient in shifting attention further towards the left hemispace. Therefore, one may think that the selective effects of leftward optical deviation on phonological fluency could also be linked to the modulation of spatial factors selectively in this condition. Indeed, the significant correlation between spatial aftereffect and phonological fluency is in line with this hypothesis.
The influence of spatial components on linguistic representations has been reported in the literature.
Turriziani et al. 72 described attentional representational biases in semantic judgments in healthy subjects, similar to those observed for the processing of space and numbers. Spatial manipulation of semantics was linked to the activation of specialized attentional resources located in the left hemisphere, and it was selectively modulated by left parietal rTMS. One could argue that there could be an influence of spatial factors also in the phonemic fluency task. This task requires producing as many words as possible in a restricted time based on a predefined criterion. A leftward spatial bias has been reported for mental representations of alphabet lines. This bias is counteracted by leftward but not rightward PA 73. Therefore, assuming that the representation of alphabet letters could be spatially organized in a left-to-right pattern, it could be hypothesized that in the present study leftward PA shifted attention to the right space and facilitated focusing of attention on the later letter targets (i.e. "S"). Although the hypothesis is intriguing, at present it remains speculative, and further dedicated studies will be necessary to test this prediction.
If confirmed and extended to clinical populations of neurological patients, the present findings could help to devise a novel type of non-invasive neuromodulation approach for cortical dysfunctions involving the left hemisphere. In this field, since fluency tasks lie at the interface between language and executive functions and can be impaired in numerous neurological disorders, their neuromodulation could have a huge clinical impact for a variety of disorders.

Figure 1 Schematic representation of the experimental design. Phonemic fluency performance before and after prismatic adaptation (pre-PA, post-PA) across groups (L-PA group, R-PA group and no-PA group). L-PA significantly improves performance on the phonemic fluency task.
Hysteresis Nutation Damper for Spin Satellite
Hysteresis dampers are commonly used in passive magnetic attitude control systems (PACS). In PACS these rods produce a damping torque and reduce the satellite's angular momentum and angular velocity. In this paper, a spin satellite was investigated which utilizes a passive magnetic damper consisting of magnetic hysteresis rods aligned with the principal (spin) axis of the satellite; de-tumbling of the satellite was achieved, resulting in a pure spin. An analytical model was presented to analyze the hysteresis damper, and a numerical simulation was performed to obtain the dynamic properties of the spin attitude. In addition, assuming a dynamic imbalance, the attitude behavior and the damper's effect on the spin rate of the satellite were analyzed. The behavior of this passively magnetically stabilized satellite was simulated from the initial post-separation phase.
INTRODUCTION
Spin stabilization is a current method to stabilize the attitude of a satellite in space. With spin stabilization, the satellite gains a gyroscopic stiffness which makes its accurate control possible. Unwanted oscillations of the satellite must be damped out. Therefore, satellites are usually equipped with one or more nutation damping devices. Damping of nutation or unwanted angular rates can be achieved by active or passive attitude control. The active way is performed by countering the attitude determined using the respective sensors. This feedback system increases the power consumption, complexity, and risk of an active attitude determination and control system. In the passive way, no external energy and no additional sensors and actuators are required. However, a dynamic analysis of the satellite attitude with the damper is required to obtain the mass properties and arrangement of the damper inside the satellite without any damage.
Passive angular rate damping of a satellite using dampers was first performed in 1963 by Miles [1]. Many types of nutation dampers have been designed for spin-stabilized spacecraft ranging in size from small to large satellites. Nutation dampers dissipate the kinetic energy of periodic rotations of a satellite in a specific direction. When energy is dissipated, the angular momentum vector becomes aligned with the principal axis of the largest moment of inertia. There are a variety of these dampers including viscous ring dampers [2], ball-in-tube dampers [3], pendulum dampers [4], wheel dampers [5] and spring-mass dampers [6].
The purpose of the present paper is to investigate the results of using hysteresis dampers as nutation dampers in a spin satellite to dissipate kinetic energy. One way to acquire passive angular rate damping is simply adding magnetic hysteresis material. Passive magnetic stabilization is very attractive. It is often used in small and light satellites to gain basic pointing or merely to avoid random and unpredictable tumbling. Magnetic hysteresis rods are used to create passive de-tumbling torques on the satellite. With this system, the rotation is damped about the two un-spin axes. The main advantages of these dampers are their cost and reliability. But they are not programmable and their capability for attitude stabilization is limited.
DESIGN OF HYSTERESIS DAMPER FOR SPIN SATELLITE
To investigate the hysteresis damper, a mathematical simulation of the hysteresis phenomenon and the satellite spin is required. Hysteresis dampers are currently used in passive magnetic attitude control systems (PACS) [7][8][9]. PACS has two main components. One consists of permanent magnets, which align the satellite with the earth's magnetic field as it moves in its orbit. These permanent magnets are made of hard ferromagnetic materials and exchange the angular momentum of the satellite. The other component of PACS is a set of hysteresis rods. These rods produce a damping torque that exploits the energy dissipation property of soft ferromagnetic materials and reduces the satellite angular momentum by converting a part of the angular motion kinetic energy to heat; consequently, the angular velocity decreases. Hysteresis magnetic materials are much like permanent magnets in their function, except that their permeability is significantly higher. The most common hysteresis damper is represented by an elongated rod made of heat-treated soft magnetic material. Hysteresis materials have magnetic domains with random distribution, resulting in a zero magnetic dipole. When subjected to an external field, the domains orient themselves. After removing the external field, a residual magnetization remains. In fact, depending on the material's magnetic properties, it retains a magnetic dipole of some strength when the external magnetic field is removed.
The damping torque provided by the hysteresis rods in a magnetic field is obtained from:

N_h = m × B

where B is the earth's magnetic flux density expressed in the body-fixed frame relative to the inertial frame and m is the magnetic moment of the hysteresis rod, given by:

m_h = B_h V_h / μ_0

where m_h is the magnetic moment of the hysteresis rod aligned with the spin (X) axis (Fig. 1), B_h is the magnetic flux induced in the rod, V_h is the volume of the rod, and μ_0 is the permeability of free space. In this paper, a hysteresis damper was investigated with two hysteresis rods aligned with the satellite spin axis (Fig. 1). In this configuration, the un-spin angular rate is damped passively and the nutation angle is dissipated.
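A minimal numerical sketch of these two relations follows; the rod is taken along the body X axis, the ambient field values are invented, and in practice B would come from a geomagnetic field model such as IGRF.

import kotlin.math.PI

// N = m x B with m = (B_h * V_h / mu0) along the rod (X) axis.
const val MU0 = 4.0 * PI * 1e-7   // permeability of free space, H/m
const val V_H = 8.75e-7           // rod volume, m^3 (value used in the paper)

fun hysteresisTorque(bhInRod: Double, bBody: DoubleArray): DoubleArray {
    val m = doubleArrayOf(bhInRod * V_H / MU0, 0.0, 0.0)
    return doubleArrayOf(                       // cross product m x B
        m[1] * bBody[2] - m[2] * bBody[1],
        m[2] * bBody[0] - m[0] * bBody[2],
        m[0] * bBody[1] - m[1] * bBody[0],
    )
}

fun main() {
    // Induced rod flux of 0.3 T in a ~30 uT ambient field (illustrative).
    val torque = hysteresisTorque(0.3, doubleArrayOf(20e-6, 5e-6, -30e-6))
    println("N = ${torque.toList()} N*m")
}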
The hysteresis rods produce variable magnetic dipoles (m_h) proportional to the earth's magnetic field component along the satellite spin axis. Hysteresis magnetic materials have significantly higher permeability than permanent magnets. Hence, affected by a variable magnetic field, hysteresis materials tend to show a dynamic realignment of micro-magnetic dipoles and a variation in magnetic domain boundaries. These changes cause frictional dissipation of energy at the molecular level. This phenomenon is known as hysteresis dissipation [13].
MODEL OF MAGNETIC HYSTERESIS DAMPING
The B-H curve of Fig. (2) represents a hysteresis loop at the saturation state. This loop is generally defined by three magnetic hysteresis parameters. The material can be characterized by the maximum magnetization (saturation induction, Bs), the remaining magnetization after removal of the external field (remanence, Br), and the magnetic field required to nullify the magnetization (coercive force, Hc). The way soft magnetic materials are magnetized depending on the external field can be displayed in a B-H curve.
This curve is shown in Fig. (2). Various mathematical models have been presented for hysteresis rods in the literature [10,11]. One of them was proposed by Kumar [11] based on an induced flux density model developed by Flatley [10] as:

B_h = (2 Bs / π) tan⁻¹[k (H ± Hc)]

where k is a constant value and H is the component of magnetic field strength aligned with the hysteresis rod. A positive value of +Hc is used if dH/dt < 0 and a negative value of −Hc is used if dH/dt > 0.
The magnetic material PERMENORM 5000 H2 is used in all hysteresis rod dampers because it produces high hysteresis losses and is unaffected by the space environment over long periods of time. Typical properties of this material (saturation induction, remanence, and coercive force) are assumed in the sketch below.
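To make the branch-switching behavior of this model concrete, the sketch below traces B over one field sweep. The material constants (Bs = 0.74 T, Br = 0.35 T, Hc = 1.59 A/m) are PERMENORM-like values assumed for illustration, and the shape constant k is fixed so that B(0) = Br on the descending branch.

import kotlin.math.PI
import kotlin.math.atan
import kotlin.math.tan

// Flatley-type induced flux: B = (2*Bs/PI) * atan(k*(H +/- Hc)), with +Hc
// on the descending branch (dH/dt < 0) and -Hc on the ascending branch.
const val BS = 0.74   // saturation induction, T (assumed)
const val BR = 0.35   // remanence, T (assumed)
const val HC = 1.59   // coercive force, A/m (assumed)
val K = tan(PI * BR / (2.0 * BS)) / HC  // so that B(0) = Br when descending

fun inducedFlux(h: Double, dHdt: Double): Double {
    val hc = if (dHdt < 0.0) HC else -HC
    return 2.0 * BS / PI * atan(K * (h + hc))
}

fun main() {
    // Ascending sweep from -10 A/m to +10 A/m in 1 A/m steps.
    var prev = -11.0
    for (i in 0..20) {
        val h = -10.0 + i
        println("H = $h A/m -> B = ${inducedFlux(h, h - prev)} T")
        prev = h
    }
}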
HYSTERESIS DAMPER AND SPIN DYNAMIC CONDITIONS
Consider the rotational properties of a rigid satellite equipped with hysteresis dampers. First, we assume that the satellite is dynamically balanced and the body-fixed coordinates are selected to be coincident with the spacecraft principal axes. The dynamic properties of the spacecraft are given by:

dL_B/dt = N − ω_BI × L_B

where L_B is the spacecraft angular momentum about the center of mass expressed in the spacecraft body frame (L_I in the inertial reference frame), N is the external torque including the disturbance torques and the hysteresis torques, and A_BI is the inertial-to-body attitude matrix. The angular velocity ω_BI is given by:

ω_BI = J⁻¹ L_B

where J is the spacecraft moment-of-inertia tensor (kg·m²).
Spacecraft dynamic conditions are usually modeled by Eq. (4a) and kinematic conditions are modeled by:

dA_BI/dt = −[ω_BI×] A_BI

where [ω_BI×] is the cross-product matrix of the angular velocity. A quaternion or some other lower-dimensional representation of A_BI is often integrated rather than Eq. (6). In this paper, the kinematic equations were expressed by separate integrations of the vector and the scalar parts of the attitude quaternion, where the quaternion q = (q_1, q_2, q_3, q_4) for attitude representation can be derived from the Euler axis, e, and the principal rotation angle, Φ, as follows:

q_1:3 = e sin(Φ/2), q_4 = cos(Φ/2)

A quaternion satisfies the constraint qᵀq = 1. Hysteresis dampers utilize magnetic rate damping to establish a desired spin rate about the spin (X) axis and remove transverse rates about the Y and Z axes. A Lyapunov function can be used in the form:

V = ½ (ω − ω_s)ᵀ (ω − ω_s)

where ω is the rate vector, ω_s = [Ω 0 0]ᵀ is the desired rate vector about the X axis, and Ω is the acquisition spin rate. It can be shown that if the magnetic hysteresis is selected based on Eq. (2), the time variation rate of V is negative for an axisymmetric inertia matrix of the form J = diag(I_s, I_t, I_t). It was observed in Fig. (3) that the time rate of the Lyapunov function is negative throughout nutation damping via hysteresis rods. The imbalance existing in a spinning body can be in the form of static or dynamic imbalance. A static imbalance is generally an imbalance in a radial direction to the axis of rotation and is produced by a force that remains constant in its orientation relative to the spinning body. On the other hand, a dynamic imbalance is generally produced by a moment created when the assembly spins about an axis other than a principal axis. When an asymmetric body rotates about the principal axis, or a symmetric body rotates about an axis other than the principal axis, the outcome is dynamic imbalance. Due to the dynamic imbalance, the spinning body develops an angle of wobble at which the angular momentum vector aligns with the axis of maximum moment of inertia, although this vector does not coincide with the spin axis. In fact, in this situation, the rotating object possesses products of inertia in the body frame system. Since the nutation angle is the angle between the angular momentum vector and the spin axis of the spacecraft, in the balanced case the spin axis and the principal axis coincide, and nutation is formed because of disturbances and the initial angular velocity in the post-separation phase. However, under the imbalanced condition, since the angular momentum axis and the spin axis do not align, nutation (known in this case as the wobble angle) always exists [12]. Considering that the spacecraft angular momentum is co-aligned with the satellite principal axis, the direction of the spacecraft major axis is different from the direction of the spin axis or hysteresis rods due to the nature of the imbalance. Also, for an unbalanced spacecraft, the angular momentum is expressed as:

L_B = [I_xx ω_x − I_xy ω_y − I_xz ω_z, I_yy ω_y − I_xy ω_x − I_yz ω_z, I_zz ω_z − I_yz ω_y − I_xz ω_x]ᵀ  (13)

Consequently, for the nutation angle, we have:

θ = tan⁻¹{ √[(I_yy ω_y − I_xy ω_x − I_yz ω_z)² + (I_zz ω_z − I_yz ω_y − I_xz ω_x)²] / (I_xx ω_x − I_xy ω_y − I_xz ω_z) }  (14)

where ω = [ω_x ω_y ω_z]ᵀ is the angular velocity expressed in the body-fixed frame relative to the inertial frame.
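A compact sketch of how these relations can be integrated numerically is given below: Euler's equations for the body rates, quaternion kinematics with renormalization to keep qᵀq = 1, and the nutation angle of Eq. (14) reduced to the balanced (principal-axis) case. The torque is left as a hook where the hysteresis torque above would be plugged in; the integrator, step size, and initial rates are illustrative assumptions.

import kotlin.math.atan2
import kotlin.math.hypot
import kotlin.math.sqrt

data class Vec3(val x: Double, val y: Double, val z: Double) {
    operator fun plus(o: Vec3) = Vec3(x + o.x, y + o.y, z + o.z)
    operator fun times(s: Double) = Vec3(x * s, y * s, z * s)
    fun cross(o: Vec3) = Vec3(y * o.z - z * o.y, z * o.x - x * o.z, x * o.y - y * o.x)
}

// Principal inertias from the simulation section, kg*m^2.
val J = doubleArrayOf(0.750, 0.735, 0.725)

// Euler's equations: J * dw/dt = N - w x (J w).
fun omegaDot(w: Vec3, n: Vec3): Vec3 {
    val jw = Vec3(J[0] * w.x, J[1] * w.y, J[2] * w.z)
    val rhs = n + w.cross(jw) * -1.0
    return Vec3(rhs.x / J[0], rhs.y / J[1], rhs.z / J[2])
}

// Quaternion kinematics, q = (q1, q2, q3, q4) with the scalar part last.
fun quatDot(q: DoubleArray, w: Vec3) = doubleArrayOf(
    0.5 * (w.z * q[1] - w.y * q[2] + w.x * q[3]),
    0.5 * (-w.z * q[0] + w.x * q[2] + w.y * q[3]),
    0.5 * (w.y * q[0] - w.x * q[1] + w.z * q[3]),
    0.5 * (-w.x * q[0] - w.y * q[1] - w.z * q[2]),
)

fun main() {
    var w = Vec3(2.0, 0.15, 0.10)               // rad/s, post-separation rates
    val q = doubleArrayOf(0.0, 0.0, 0.0, 1.0)
    val dt = 0.01
    repeat(10_000) {
        val n = Vec3(0.0, 0.0, 0.0)             // hysteresis torque goes here
        w += omegaDot(w, n) * dt                // forward Euler, for brevity
        val qd = quatDot(q, w)
        for (i in q.indices) q[i] += qd[i] * dt
        val norm = sqrt(q.sumOf { it * it })
        for (i in q.indices) q[i] /= norm       // re-enforce q^T q = 1
    }
    // Balanced-case nutation angle: angle between L and the spin (X) axis.
    val nutation = atan2(hypot(J[1] * w.y, J[2] * w.z), J[0] * w.x)
    println("nutation angle = ${Math.toDegrees(nutation)} deg")
}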
SIMULATION RESULTS
An effective attitude dynamic design begins with an analysis of the external torques experienced by the satellite. The satellite experiences these torques at a height of 500 kilometers. The simulation was for an oblate spin satellite with the following mass properties: bus mass m_b = 40 kg, with principal moments of inertia (kg·m²) of I_xx = 0.750, I_yy = 0.735, I_zz = 0.725. Under the dynamically unbalanced situation, the products of inertia are I_xy = I_xz = I_yz = 0.01; these values are zero for the balanced satellite. The IGRF model for the earth's magnetic field is incorporated into the computation of the magnetic torques due to the hysteresis rods. The two hysteresis rods were assumed to be identical, with volume V_h = 8.75×10⁻⁷ m³ and hysteresis constants as stated earlier.
A complete simulation of the satellite during the first 16 orbits after separation is illustrated in Figs. (4)-(8), including plots of the body angular rates (ω_x, ω_y, ω_z) for the dynamically balanced satellite (Figs. 4-6) and the unbalanced satellite (Figs. 7, 8). According to Figs. (4)-(6), when the satellite is principally spinning around its long (X) axis, damping of the un-spin angular velocities is performed exactly and the spin rate remains constant (Fig. 5). But in the unbalanced satellite, not only are the un-spin angular velocities damped, but the spin rate is also decreased (Figs. 7, 8).
CONCLUSION
This paper presented a hysteresis nutation damper for dynamically balanced and unbalanced spinning satellites. The spin and hysteresis rod dynamic conditions were modeled. This model is a passive solution to nutation damping in spin satellites in which the hysteresis rods are aligned with the satellite spin axis. As a result of these simulations, the hysteresis damper is not appropriate for a dynamically unbalanced spin satellite because the spin axis is not aligned with the satellite angular momentum. Consequently, in order to use hysteresis rods as a nutation damper, it is necessary to dynamically balance the satellite during ground tests.
Implementing Multisignature on a Blockchain-based Land Administration System: Securing Land Rights and Enhancing Transparency
Ownership of land is a complex and multifaceted concept. Land distribution to all social categories is an even more difficult task, as in most cases we lack the instruments to capture land rights information on the ground. As part of the United Nations' Sustainable Development Goals 5 (Gender Equality) and 10 (Reduced Inequality), we must think of ways and means not only to capture the rightful owner of a land but also to build the framework that will protect those rights regardless of the social category of its owner. While, on the one hand, many initiatives have been taken to improve the traditional tenure arrangements and other legal aspects towards a more equitable land access distribution, on the other hand the involvement of technology is still lacking. We argue that technological reinforcement is necessary to provide enough protection against corruption. In this paper, we propose to implement a blockchain-backed land administration system based on Hyperledger Iroha. Besides the blockchain's inherent ability to resist corruption and enhance transparency, the proposed system provides support for multi-signatory land transfer transactions. Such a mechanism ensures that each shareholder agrees to any given land transaction, effectively protecting their right to the land.
INTRODUCTION
In many countries land access can be challenging, with some categories struggling more than others due to customs, laws, and gender biases. According to the Food and Agriculture Organization (FAO) [19], women own only 15% of agricultural land in sub-Saharan Africa, and many of these women only have access to land through their husbands or male family members. A report by the International Development Law Organization (IDLO) [29] highlights the various legal barriers that prevent women from accessing and owning land in Africa. These barriers include discriminatory laws that prioritize male inheritance rights, lack of legal documentation to prove land ownership, and limited access to formal land registries. To address these challenges, various organizations and initiatives are working to promote women's land rights and improve their access to land in Africa. These include legal aid programs, land tenure reforms that recognize women's rights to land, and women-led land governance and decision-making initiatives.
In addition to women, there are several other social categories that struggle with land access in various regions around the world. These include:

• Indigenous peoples, who make up around 5% of the world's population, often face significant challenges in accessing and owning land due to historical and ongoing colonization, displacement, and exclusionary land policies. According to the International Land Coalition (ILC) [6], indigenous peoples and local communities hold only 10% of the world's land, despite having customary and ancestral rights to much of it.

• Small-scale farmers, who are estimated to produce up to 80% [11] of the world's food, often face difficulties in accessing land due to increasing competition from large-scale commercial agriculture and land grabbing by corporations and investors. According to the International Fund for Agricultural Development (IFAD), small-scale farmers own or manage only about 25% of the world's agricultural land.

• Rural communities, who often rely on agriculture and natural resources for their livelihoods, may also face challenges in accessing and owning land [28] due to limited formal land tenure systems, insecure land rights, and lack of representation in land governance and decision-making processes.
To tackle these difficulties, a comprehensive strategy is necessary, which encompasses legal and policy changes, community-driven land governance and decision-making, as well as assistance for small-scale farmers and marginalized communities to obtain resources and enhance their ability to efficiently manage and utilize land in a sustainable manner. To ensure the protection of land access from tampering, it is imperative to establish a reliable technological foundation on which to build these approaches.
Blockchain technology has emerged as a potential solution to address these issues. Multiple studies [10,13,33,37] have explored its use for land governance and management, highlighting its potential to improve land tenure security, transparency, and exclusivity. One significant advantage of blockchain-based land registries is their ability to create secure, tamper-proof records of land ownership and use rights. These registries can help prevent fraudulent transactions and disputes over land, while also improving land tenure security. For instance, Allen et al. [1] found that blockchain-based land registries have been successfully implemented in several countries, including Sweden, Georgia, and Ghana, resulting in improved land tenure security and reduced corruption.
In their report, Makala and Anand [25] briefly discussed the potential of leveraging multisignature transactions on blockchain to protect the rights of spouses and indigenous people. In this paper, we propose an implementation of a blockchain land administration system that can help fight land access disparity by enabling multiple-signature accounts.
The paper is structured as follows: the introduction is followed by a brief literature review on the possibility of using multisignature transactions on blockchain to fight land access disparity. Sections 3 and 4 present the system requirements and the proposed solution, respectively. In Section 5, we discuss key considerations for the implementation. Finally, Section 6 concludes the paper.
RELATED WORKS

2.1 Overview of multisignature in blockchain
Multisignature on the blockchain has been the subject of many studies. Some of them [15,39] focus on improving the scheme used, while others [7,14] explore novel ways of using it in specific domains. In fact, its use has grown as a way to divide up responsibility in the management of digital assets among a group of users. Besides, as noted by Fareed [8], it can be used to protect a user's access to their account. For instance, if we require a threshold of private keys for any operation, as long as only one key is compromised, the user would still hold access to their account. There are many ways of implementing a multisignature wallet on a blockchain platform. The most common way is to implement it at the smart contract level, as is the case for Ethereum [12]. In systems where smart contracts are not supported, like Bitcoin, it is achieved through modification of the blockchain protocol. For example, pay-to-script-hash was necessary for Bitcoin [4]. Another example is Hyperledger Iroha 1, which did not have a complete smart contract implementation. However, it had native support for multisignatures guaranteed by one of its building blocks, namely the Multisignature Transaction Processor [16].
Blockchain and land governance
The use of blockchain technology has been proposed as a potential solution to improve land administration around the world. One key advantage of using blockchain technology is the ability to create secure, tamper-proof records of land ownership and use rights, which can prevent fraudulent transactions and disputes over land. For instance, a study by Ameyew and de Vries [2] analyzed the implementation of blockchain-based land registries in Ghana and found that it improved land tenure security and reduced corruption.
Several studies have explored the potential of blockchain technology in addressing land access and ownership inequality. In a nutshell, most of the literature found regarding blockchain application to land governance revolves around the following subjects:

• The feasibility, which is often assumed as a good starting point in any research area. We find, for instance, the paper by Müller and Seifert [27] for Germany's case. Other examples are the study by Vos [38] and the one by Lemieux [24].

• The most suitable blockchain technology. We found various land administration implementation case studies in the literature using various private/consortium platforms like Hyperledger Fabric [26,35], Factom [24], Ubitquity [10], or public blockchain platforms like Bitcoin (for document validation [13] and Colored Coin [3]) and Ethereum [22,33].

• The way of integrating blockchain into the land administration system. Some propose it to be added as a document validation layer, as was the case for Georgia [13]. Others propose a new system that builds around the blockchain platform [37]. And some have proposed the middle approach, as an add-on to the existing system [20,35].

• The usage of smart contracts [23,26] to secure or automate part of the process.
A paper by Anand et al. [3] notes that the multiparty signature feature, or multiparty wallets, of blockchain technology can be used to secure women's property rights in the context of marital property. Despite legal provisions guaranteeing women equal rights to land and property, anecdotal evidence suggests that these rights are often disregarded in practice. For instance, husbands may sell marital property without their wives' consent, thereby compromising their property rights. By enabling multisignature wallets, blockchain technology can provide a mechanism to safeguard women's rights to marital property. Additionally, multiparty transactions can be leveraged for property transactions that involve multiple owners, such as small-business loans where the joint property is used as collateral.
In summary, multisignature has proven its efficiency in resolving similar cases in other areas of application like supply chains [5] or finance, where the technique was applied even before blockchain [34] for e-checks on joint accounts. Although the report by Anand et al. [3] pointed out the importance of enabling multisignature on blockchain to reduce the gap in land access and ownership inequality, being a report on opportunity and usage, it did not expand on the specifics of that aspect. This paper is meant to fill that gap. We propose the usage of a novel blockchain technology for land administration that ensures multisignature transaction support. We proceeded to build a sample network and client application to showcase its feasibility. It should be noted that the use of Hyperledger Iroha for land administration has not yet been documented in the literature.
REQUIREMENTS
This project focuses on one application of blockchain in land administration, namely the use of blockchain to enhance land rights protection for disadvantaged social groups. The requirements for a blockchain-based land administration information system were well listed by Sladić et al. [33]. They agreed that to establish a blockchain-based LIS (Land Information System), the following points should be considered:

• Determine whether the blockchain network should be public or private and authorized, and how identity management will be handled.
• Define what data will be included in one transaction in the blockchain.

• Determine how smart contracts will be used and what business logic should be implemented and executed on the blockchain.
Besides those points, which have been largely discussed [33,35], in this case we focus on building a blockchain-based LIS that supports multi-signatory accounts and transactions, as developed in the following sections.
PROPOSED SOLUTION

4.1 Architecture
Given the predominance of mobile platforms and their rapid adoption in web-based services [32,40] worldwide, we propose an architecture that has a mobile platform as the end client. This client is directly integrated with the blockchain network without a traditional back-end service. Hence, the key pair is generated on the participants' devices and resides there. Besides the regular participants, a special permission set has to be implemented for land administration agents. Those participants have the ability to issue the land titles (see Fig. 1).
The blockchain platform
In our study of the various blockchain platforms that were suitable for our requirements and system architecture, we found Hyperledger Iroha to be a promising and well-fitting one. Hyperledger Iroha is a graduated project from the Hyperledger umbrella that aims at asset, information and identity management [17]. It supports multisignature transactions out of the box, provides a modular approach to permissions, and offers mobile platform SDKs. Assets are associated with accounts and can be transferred between them using transactions. Assets can be characterized by their Mintable parameter, their type (fungible or non-fungible) and their value type (see Fig. 2b for the available value types). An asset is called mintable if we can produce more of it after the initial quantity was emitted.

• A signatory in Hyperledger Iroha is an entity or participant that has the authority to approve or reject transactions on the blockchain network. Each account can have multiple signatories associated with it, and transactions must be signed by the required number of signatories before they can be executed. That number is usually referred to as the quorum; the sketch after this list models the quorum rule.
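To illustrate the signatory-and-quorum mechanics just described, here is a small self-contained Kotlin model of a co-owned account. It mimics the behavior described in this section and is not the Iroha SDK API; all names are illustrative.

// Conceptual model of an Iroha-style multi-signatory account: a transfer
// is committed only once distinct registered signatories reach the quorum.
data class Account(val id: String, val signatories: Set<String>, val quorum: Int)

class PendingTransaction(private val account: Account, val payload: String) {
    private val signatures = mutableSetOf<String>()

    // Accept a signature only from a registered signatory of the account.
    fun sign(publicKey: String): Boolean =
        publicKey in account.signatories && signatures.add(publicKey)

    val committed: Boolean
        get() = signatures.size >= account.quorum
}

fun main() {
    val parcel = Account(
        id = "family_parcel@landoffice",
        signatories = setOf("wife_pk", "husband_pk"),
        quorum = 2, // both spouses must approve any transfer
    )
    val transfer = PendingTransaction(parcel, "transfer landtitle#42 to buyer")
    transfer.sign("husband_pk")
    println("one signature   -> committed = ${transfer.committed}")  // false
    transfer.sign("wife_pk")
    println("both signatures -> committed = ${transfer.committed}")  // true
}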
Iroha is made of several components; Fig. 3 gives a high-level view of how they can be organized based on the four-layer structure.
Environment and network configuration
We built our project on an Intel i7-1185G7 @ 3.00 GHz processor coupled with 32 GB of RAM running an Ubuntu-flavored OS. We bootstrapped a sample Iroha 2 network composed of 4 peers running on Docker with a custom genesis block. The genesis block allowed us to:

• Define the domain of the land administration;

• Create a pair of land administration office agents that have the permission to mint the assets, i.e., issue land titles;

• Create a default account that has the right to register new accounts. This account is used for the first registration of users through the application.
• Register an asset definition for land title.It is defined as a nonfungible, mintable store assets type [16].A store asset type is a special type used to work with metadata.This metadata will allow unique characteristic of each piece of land to be captured in the blockchain.
4.4 Feature coverage
In this section, we present the main features that our platform supports and how it does so.
4.4.1 Account enrollment.
The process of account creation is fairly straightforward, as most of the work is done by the SDK embedded in the app. The SDK is responsible for generating the user's key pair and submitting the registration transaction to the blockchain. The process is described in Fig. 4.
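The flow of Fig. 4 can be summarized in code. The sketch below is hypothetical — the client interface and method name stand in for the actual Iroha SDK calls — but it captures the division of labour: the key pair is generated locally, and the default registrar account submits the registration:

    import java.security.KeyPair
    import java.security.KeyPairGenerator

    // Stand-in for the Iroha client; the method name is an assumption.
    interface LandChainClient {
        fun registerAccount(accountId: String, publicKeyHex: String)
    }

    fun enroll(client: LandChainClient, userName: String): KeyPair {
        val keyPair = KeyPairGenerator.getInstance("Ed25519").generateKeyPair()
        // Hex encoding of the (DER-encoded) public key, for illustration only.
        val pubHex = keyPair.public.encoded.joinToString("") { "%02x".format(it) }
        client.registerAccount("$userName@landoffice", pubHex) // domain as in Listing 1
        return keyPair
    }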
Listing 1: The asset definition from the genesis block.

    "Register": {
      "NewAssetDefinition": {
        "id": "landtitle#landoffice",
        "value_type": "Store",
        "mintable": "Infinitely",
        "metadata": {
          "key": { "String": "Region" },
          "value": { "String": "Region Name" }
        }
      }
    }

4.4.2 Adding signatories. A user can add the co-owners of a property as signatories to an account (Fig. 5a). To do so, he will need the public keys of the other parties. One signatory is added at a time (Fig. 5b); under the hood, each time a public key is added, the system mints it to the account (Listing 2).

Listing 2: Minting a public key to the account (fragment).

    .mintPublicKey(id.asAccountId(), publicKey)
        .buildSigned(keyPair)
    }.also {
        withTimeout(timeout) { it.await() }
    }

For test purposes, a MintBox was created: after all the account's signatories (represented by their public keys) have been added (answering Continue to the modal in Fig. 5c), it sets the SignatureCheckCondition of the account to ContainsAll. This evaluates the list of all the signatories added so far in the account and sets the quorum to that number (Listing 3 provides a sample of a MintBox with two signatories).
4.4.3 Co-owned asset transfer. For an asset to be co-owned, it needs to belong to an account with multiple signatories added to it, i.e., the quorum of the account is greater than one. By default, if only one person signs a transaction transferring such an asset, the transaction will be stuck in a pending state. Under the hood, when the application is run, it checks for every pending transaction that is waiting for the user's signature and lists them under pending transactions for his approval (signature). Until the quorum is met, the transaction will not be committed.
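The pending-transaction behaviour described above can be mirrored client-side in a few lines. This is an illustrative model rather than SDK code: the application filters the transactions still short of their quorum and surfaces those awaiting the current user's key:

    // Illustrative only: deciding what to list as "pending" for this user.
    data class PendingTransfer(
        val assetId: String,
        val quorum: Int,
        val signatures: Set<String>,
    )

    fun awaitingMySignature(pending: List<PendingTransfer>, myKey: String): List<PendingTransfer> =
        pending.filter { it.signatures.size < it.quorum && myKey !in it.signatures }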
Why Hyperledger Iroha?
Hyperledger Iroha 2 is one of the most versatile blockchain platforms.It can operate as both a public and private blockchain [16].This versatility allowed us to configure the right set of permissions.For example, we leveraged it to allow new land titles to be issued (minted) only by the land administration, while allowing anyone to become a member in our test network.
Among many other platforms, we also considered using another Hyperledger project: Hyperledger Fabric. It has been proposed several times in the literature [9,33,35]. One of the main reasons we chose Iroha 2 over the other platforms is that it supports out-of-the-box multi-signatory accounts. This feature is in accordance with our requirements (section 3).
In our setup, the keys, and therefore part of the cryptographic operations, are handled client-side, as opposed to the usual setup found in private/hybrid blockchains, where wallets are typically stored server-side. While one can argue about the inherent security risks of storing a key pair on mobile devices, electronic signature solutions on mobile have come a long way [31].
Another useful built-in feature is the ability to delegate to another account the ability to transfer assets [16] (via the can_transfer_my_assets permission). This feature allows proxies to operate on behalf of the owner. This is useful in legal frameworks where notaries are still required to handle land transactions, and can also be used for regular proxy-carried operations. In our sample application with the Iroha Java SDK, this is simply done by registering a permissionToken of type CanTransferUserAssetsToken to the targeted account [16].
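As a sketch of that delegation step — with the wrapper and method name assumed, since only the token type is documented [16] — the grant boils down to a single instruction:

    // Hypothetical wrapper around the SDK's permission-granting instruction.
    interface PermissionClient {
        fun grantPermissionToken(tokenType: String, toAccountId: String)
    }

    // Lets a proxy (e.g., a notary) transfer the owner's assets on their behalf.
    fun appointProxy(client: PermissionClient, proxyAccountId: String) {
        client.grantPermissionToken("CanTransferUserAssetsToken", proxyAccountId)
    }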
Regarding performance and scalability, since Iroha 2 is an ongoing project, we cannot yet make a fair comparison to other blockchain platforms.However, the performance goals of the project, as stated in its whitepaper [18], are 20,000 transactions per second (tps) and a block time of 2-3 seconds.The block time refers to the duration a node should wait after submitting a transaction to the leader for block creation before suspecting a potential faulty leader, triggering a view change, and electing a new leader [18].
Identity handling
In this study, we proposed self-enrollment of users.However, this approach needs to be studied more deeply.Iroha has been used for secure digital identity [36], and a similar approach could be coupled with the proposed solution.
Similarly, we could fine-tune the enrollment process to a more streamlined approach using a Public Key Infrastructure (PKI) solution with a front-facing Registration Authority (RA).This approach would allow us to link individuals to their wallets, as land properties can be utilized for money laundering.There is typically a strict requirement to adhere to Know-Your-Customer practices [35].Additionally, we would need to consider how the land administration accounts would be created.
Alternatively, the utilization of a blockchain-based identity management system, such as the one proposed by Pei and Oida [30], could enhance the overall robustness of the solution.
Conflict Mediation
While the use of a multi-signatory account is interesting, there is still a risk of conflict.For example, in the case of marital rights to a piece of land, it could happen that the spouses do not both agree on a transaction.The question of how to technically resolve such a case remains to be answered.In practice, these cases go to court, and a judge decides based on law and regulation how the situation should be handled.
One possible technical solution would be to add an additional signatory to each such account with the quorum set to 2. In that way, in the case of a dispute between spouses, the court would have an emergency account as a tool to enforce its verdict. Of course, the question regarding the integrity of the person in possession of the "tie-breaking" key can be raised. However, the blockchain keeping track of the signatories that allowed each transaction would serve as an audit trail. In the unlikely event of a corrupt court, the fear of leaving one's fingerprint behind when signing should limit tampering.
There are more complex cases that require particular attention. We could further investigate the use of "conditional multi-signatory accounts" as a way to tackle them. This would at least allow implementing a logic similar to the one with a quorum of 2. In the case of an arbitrary number of keys, we could set a policy that allows a transaction to proceed if a particular key (to be given to the justice system) signs the transaction or the defined quorum is reached. Still, more work needs to be done to determine whether such solutions are realistic. As of the time of writing, Hyperledger Iroha 2 does not yet support conditional multi-signatory accounts.
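Although such conditional accounts are not supported yet, the policy itself is easy to state. The sketch below is purely illustrative of the rule proposed above: a transaction proceeds if the designated tie-breaking key has signed, or if the regular quorum is reached:

    // Illustrative "tie-breaker or quorum" policy; not supported by Iroha 2 today.
    data class SignaturePolicy(val quorum: Int, val tieBreakerKey: String)

    fun policySatisfied(policy: SignaturePolicy, signatures: Set<String>): Boolean =
        policy.tieBreakerKey in signatures || signatures.size >= policy.quorum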
Cost Coverage
It is generally assumed that the cost of blockchain transaction mining will ultimately be borne by the end user. This is particularly true for transactions that are executed on, or even anchored in, a public blockchain. In Georgia's case [13], during the first year of operation, all transaction fees were covered by the Bitfury Group. The government then calculated the actual fee and its budget allocation to cover maintenance and operating expenses. It is still difficult to predict how the fee would fluctuate if we were to build on a public blockchain.
While the proposed solution is a hybrid one without anchoring to a public blockchain, a similar approach should be adopted to first determine the cost of running the solution. The solution would need to be deployed first by the public authority, which would also define how new nodes are created to maintain a good level of distribution. Financial institutions, conveyancers and surveyors are good candidates to create and maintain independent nodes: these entities are already heavily involved in land transactions and already derive benefits from them. Provided that running the solution is not too cost-heavy and the Iroha 2 consensus performance goals [18] are reached, it should be possible to lower the fee for the end user.
CONCLUSION AND FUTURE WORKS
In this paper, we have proposed a solution for improving land administration systems towards Sustainable Development Goals 5 and 10, namely Gender Equality and Reduced Inequality. Our proposed solution utilizes multisignature accounts on a blockchain-based land administration system. We have made it mandatory for each transaction involving assets owned by such accounts to be multi-signed by a predefined quorum. For testing purposes, and due to SDK restrictions, we were only able to set the quorum to the number of signatories (public keys) of the account. This ensures that any sale or transfer of land can only occur with the awareness and consent of all other right holders.
To demonstrate the feasibility of this solution, we designed a public permissioned blockchain using Hyperledger Iroha and developed a sample prototype.Our proposed solution offers enhanced security against corruption and fraud and provides a transparent and equitable land administration system.Using Hyperledger Iroha for blockchain-based land administration is novel as no known implementations have been found in the literature.Hence, our study also allows us to assess its feasibility.
Overall, we believe that our proposed solution can be an effective tool for addressing land ownership and distribution issues towards achieving sustainable development goals.Further research is necessary to test the scalability and efficiency of this solution in different contexts and to further study the aspects discussed in section 5. We also plan to work on integrating this system with other systems and tools like the UN-Habitat Social Tenure Domain Model (STDM) Tool and the FAO's OpenTenure software for capturing land rights information on the ground.These steps will further help consolidate the initial data that will be required for the genesis block.
PON1 hypermethylation is associated with progression of renal cell carcinoma
Abstract In this study, our aim was to explore the influence of DNA methylation of PON1 on the proliferation, migration and apoptosis of renal cancer cells. The genome-wide methylation arrays of renal cell carcinoma samples and adjacent tissues were obtained from The Cancer Genome Atlas (TCGA) database. By analysing DNA methylation and conducting CpG island arrays, the methylation status of renal tumour samples and normal renal tissue samples was determined. Methylation-specific PCR (MS-PCR) and qRT-PCR were employed to detect the methylation level and mRNA expression of PON1. Wound-healing, transwell and MTT assays were used to assess migration, invasion and proliferation, respectively. Cell apoptosis was assessed by TUNEL assay. In addition, the effect of PON1 on renal cancer cells was verified by in vivo experiments. The methylation status of different genes in renal cell carcinoma samples was obtained by CpG island arrays, and hypermethylated PON1 was selected for further study. PON1 was down-regulated in renal cell carcinoma tissues, as detected by qRT-PCR and Western blot. Both in vitro and in vivo experiments indicated that sunitinib resistance in renal cancer cells could be suppressed by treatment with 5-Aza-dC or TSA, and the effect was more pronounced after 5-Aza-dC and TSA co-treatment. In detail, the demethylation of PON1 inhibited the migration, invasion and proliferation of renal cancer cells and arrested more cells in the G0/G1 phase. The in vivo experiments indicated that demethylated PON1 suppressed tumour growth. Hypermethylated PON1 promoted the migration, invasion and proliferation of sunitinib-resistant renal cancer cells, whereas its demethylation arrested more cells in the G0/G1 phase.
In the current study, we tried to reveal the mechanism linking DNA methylation and RCC.
DNA methylation is heritable and reversible and belongs to the epigenetic changes that operate at the interface between the genome and the environment. 4 In the process of DNA methylation, cytosine is covalently modified by the addition of a methyl group, forming a 5-methylcytosine nucleotide (5-mC). 5 In addition to serving as an important epigenetic mark in gene silencing, DNA methylation is also involved in regulating normal growth and developmental processes such as cell differentiation, genomic imprinting and the suppression of repetitive elements. 6-9 There have been several studies on DNA methylation and renal cancers. Malouf et al defined the epigenetic basis for proximal versus distal tubule-derived kidney tumours. 10 Besides, methylation-associated genes such as SETD2, KRT19 and SFRP1 have been studied in renal cancers. 11-13 However, the mechanism of DNA methylation in RCC remains largely unclear.
Human serum paraoxonase-1 (PON1) is a Ca2+-dependent, high-density lipoprotein (HDL)-associated lactonase capable of hydrolysing a wide variety of lactones, thiolactones, aryl esters and cyclic carbonates. 14 PON1 is a glycoprotein composed of 354 amino acids with an approximate molecular mass of 43 kDa, and it retains its hydrophobic signal sequence in the N-terminal region, which enables its association with HDL. 15 PON1 encodes a member of the paraoxonase family of enzymes and exhibits lactonase and ester hydrolase activity. Regarding the methylation of PON1, hypomethylated CpGs in the promoter of PON1 were predicted to be an underlying risk factor for bleeding after dual antiplatelet therapy. 16 Moreover, recent investigations indicated that PON1 has a considerable effect on molecular disorders connected with cancer. 17-19 Some researchers found that measurement of the serum PON1 concentration post-radiotherapy could be an efficient prognostic biomarker and an index of the efficacy of the radiotherapy. 20 In this paper, we devote ourselves to explaining the mechanism of PON1 in KIRP.
5-Aza-2′-deoxycytidine (5-Aza-dC) has been used in many DNA methylation studies. For example, Gao et al used 5-Aza-dC to investigate the influence of hypermethylated MEG3 in retinoblastoma. 21 Yan et al also used 5-Aza-dC, which down-regulated the DNA methylation of SPARC, to study the progression of T-cell lymphoma. 22 As a histone deacetylase (HDAC) inhibitor, trichostatin A (TSA) can retard the growth of carcinomas of the cervix, colon, rectum and other cancers in vitro. 23 What's more, the co-treatment of 5-Aza-dC and TSA showed a better effect in human gastric cancer cells. 24 In addition, sunitinib, a multitargeted tyrosine kinase inhibitor (TKI), is currently the standard of care for patients suffering from metastatic renal cell cancer. 25 On the whole, we designed this study to explore the impact of 5-Aza-dC and TSA co-treatment in sunitinib-resistant RCC cells.
To sum up, the aim of our study was to find the target gene affecting the progression of KIRP and to study its mechanism.
Furthermore, we also tested combination therapy in sunitinib-resistant RCC to find a better treatment.
| Clinical samples
We obtained 15 pairs of RCC and corresponding para-carcinoma tissues randomly from patients undergoing surgical treatment from May 2016 to June 2017 at the China-Japan Union Hospital of Jilin University. We received approval from the Ethics Committee of the China-Japan Union Hospital of Jilin University to collect the samples, and we obtained informed consent from all the patients. Tissues were taken from the patients and stored in a −80°C freezer in preparation for subsequent experiments. Cells were grown in a 1:1 mixture of DMEM/Ham's F12 nutrient medium (F12). All media were supplemented with 10% foetal bovine serum (FBS) (GIBCO Invitrogen). Culture plates were kept at 37°C and 5% CO2 in an incubator.
| Genome-wide methylation analysis
The genome-wide methylation arrays of renal cell carcinoma samples and adjacent tissues were obtained from The Cancer Genome Atlas (TCGA) database to perform unsupervised hierarchical clustering. The ChAMP R package (http://www.bioconductor.org/packages/devel/bioc/html/ChAMP.html) was employed for methylation analysis; it contains limma-based differentially methylated position (DMP) and Probe Lasso-based differentially methylated region (DMR) analysis functions. Analyses were based on the Illumina Infinium HumanMethylation450 platform.
| Methylation-specific polymerase chain reaction (MS-PCR)
To perform MS-PCR, a DNeasy Tissue Kit (Qiagen) was used to extract DNA from the tissue samples following the manufacturer's protocol. The primers used are listed in Table S1.
| Treatment of RCC cells with 5-Aza-dC, TSA and sunitinib
The human RCC cells were cultured in 6-cm dishes and incubated overnight before treatment. For qRT-PCR, the primers used are listed in Table S2. The results were recorded after cycling, and relative gene expression was analysed by the 2^(−ΔΔCt) method.
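For reference, the 2^(−ΔΔCt) calculation is the standard Livak method (a general formula, not specific to this study):

\Delta C_t = C_t^{\mathrm{target}} - C_t^{\mathrm{reference}}, \qquad \Delta\Delta C_t = \Delta C_t^{\mathrm{treated}} - \Delta C_t^{\mathrm{control}}, \qquad \text{relative expression} = 2^{-\Delta\Delta C_t}

For example, a ΔΔCt of −2 corresponds to a 2^2 = 4-fold increase in expression relative to the control.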
| Western blot
Total protein was extracted with RIPA lysate (Thermo Fisher Scientific), and a BCA Kit (Sigma-Aldrich) was used to quantify the protein concentration. Proteins were separated by SDS-polyacrylamide gel electrophoresis (SDS-PAGE) and transferred to a polyvinylidene fluoride (PVDF) membrane (Ruiqi). The membrane was blocked with 5% (w/v) bovine serum albumin in Tris-buffered saline (TBS) at room temperature for 45 minutes and incubated with the primary antibodies anti-PON1 (ab24261, Abcam) and anti-GAPDH (ab8245, Abcam) at 4°C overnight. GAPDH was used for normalization. The PVDF membrane was washed three times with TBS and incubated with a peroxidase-conjugated mouse anti-goat IgG antibody (Abcam) for 4 hours at room temperature. Immunoreactive proteins were visualized with an enhanced chemiluminescence system, and the results were analysed using ImageJ software.
| Wound-healing assay
1 × 10^6 SKRC39 cells were seeded in 6-well plates, and a pipette tip was used to draw a straight scratch on the bottom of the plates. Suspended cells were gently washed away two or three times with PBS, and baseline images of the scratch were taken under the microscope. After that, the medium was replaced with serum-free RPMI-1640 medium. Different drugs were added to each group, and pictures of the same location were taken again after the cells had been cultured for 24 hours.
| TUNEL assay
SKRC39 cells treated with four methods (sunitinib (control) group, 5-Aza-dC group, TSA group and 5-Aza-dC + TSA + sunitinib group) were stably transfected, serum-starved, washed twice in PBS (Thermo Fisher Scientific) at 4°C and re-suspended in 250 μL labelling buffer (Haoranbio). Cells from each group were stained with 5 μL annexin V/FITC and 10 μL of 20 μg/mL propidium iodide (Sigma-Aldrich) and incubated for 15 minutes at 37°C in the dark. The results were observed under a fluorescence microscope (ZEISS).
| Tumour xenograft
Five-week-old nude mice kept under the same conditions were randomly divided into seven groups of five: the normal group, sunitinib group, 5-Aza-dC group, TSA group, 5-Aza-dC + sunitinib group, TSA + sunitinib group and 5-Aza-dC + TSA + sunitinib group.
| Immunohistochemistry
Tissue samples were deparaffinized with xylene, and the sections (5 μm) were rehydrated with ethanol. After that, to block endogenous peroxidase activity, the sections were immersed in 3% hydrogen peroxide.
| Statistical analysis
We used SPSS standard version 19.0 (SPSS Inc) and GraphPad Prism 6.0 to analyse all results, which came from the averages of at least three independent experiments. Statistical analyses were performed using a paired-sample t-test and one-way analysis of variance (ANOVA). Data from all quantitative assays are shown as the mean ± standard deviation, and P < 0.05 was considered to indicate a statistically significant difference.
| PON1 was hypermethylated and expressed at lower mRNA levels in RCC
As shown in Figure S1, the hierarchical clustering analysis screened the top 1000 differential CpG islands from 395,412 probes. In addition, the differential methylation analysis of paired tumour/normal tissues (Figure S2) identified the top 50 differentially methylated genes (Figure 1A). The corresponding heatmap shows the methylation data in a blue-red scale (from low to high methylation level). Analogously, differential mRNA analysis of paired tumour/normal tissues identified the top 50 genes with differential mRNA expression, shown in a green-red scale (from low to high mRNA expression level) (Figure 2B). These two heat maps revealed that PON1 was highly methylated and expressed at low mRNA levels in the RCC tissues.
| Methylation status of PON1
The methylation status of 660 genes was detected by CpG island analysis, which revealed that PON1 was hypermethylated among them (Figure 2A). The region DMR_431, one of the differentially methylated regions of PON1, showed that most CpG islands on the PON1 gene were at a high methylation level (Figure 2B). Among the nine differentially methylated imprinted sites, including cg01874867, cg04155289, cg05342682, cg07404485, cg17330251, cg19678392, cg24062571, cg22798737 and cg21856205, the box plots revealed that the methylation level was higher in the RCC cells than in the normal cells, except for cg24062571 and cg22798737 (Figure S3). Kaplan-Meier analysis showed that patients with hypermethylated PON1 generally had shorter lifespans; P values were calculated by the log-rank test (Figure 2C). Taken together, the above results demonstrated a high DNA methylation level of PON1 in RCC tissues.
| 5-Aza-dC and TSA co-treatment in SKRC39/sunitinib cells overexpressed PON1
MTT assays were employed to evaluate the different treatments of RCC and to determine the reagent concentrations for the experiments. The results showed that the most suitable doses of 5-Aza-dC and TSA were 1 μmol/L and 100 nmol/L, respectively, and a sunitinib concentration of 1 μg/mL was selected for further studies (Figure 4A-C). Based on the results of MSP and qRT-PCR, the methylation and mRNA expression of PON1 were verified; the results showed that SKRC39/sunitinib cells treated with 1 μmol/L 5-Aza-dC and 100 nmol/L TSA had a relatively lower methylation level and higher expression level compared with the other two RCC cell lines (Figure 4D-E). Collectively, these data indicated that after treatment with 5-Aza-dC and TSA, the methylation level was decreased and the expression level was increased in RCC cells. Moreover, re-expression of PON1 in the 5-Aza-dC + TSA co-treatment group showed a significant increase in SKRC39/sunitinib cells. In other words, the co-treatment of SKRC39/sunitinib cells with 5-Aza-dC + TSA led to significant demethylation and re-expression of PON1.
| Impact of PON1 on cell migration, invasion, proliferation and cell cycle in vitro
To explore the role and mechanism of hypermethylated PON1 in RCC cells, the migration, invasion, cell cycle and cell proliferation of different groups of SKRC39/sunitinib cells were analysed.
The experiments were divided into four groups: the sunitinib group, the 5-Aza-dC + sunitinib group, the TSA + sunitinib group and the 5-Aza-dC + TSA + sunitinib group. Cell migration was assessed by wound-healing assay; the results revealed that the migration distance in both the 5-Aza-dC and TSA groups was reduced compared with the control group, and the 5-Aza-dC + TSA group of SKRC39/sunitinib cells showed the shortest migration distance (Figure 5A,B). Additionally, the transwell assay was performed to assess invasion, and the number of invasive cells was decreased in the treated groups (Figure 5C,E). Cell cycle distribution was detected by flow cytometry; the results revealed that treatment with 5-Aza-dC/TSA arrested more cells in the G0/G1 phase, with the 5-Aza-dC + TSA co-treatment group displaying a more obvious trend (Figure 5D,F). Cell proliferation was then verified by MTT assay, which proved that the proliferation curve of 5-Aza-dC + TSA-treated SKRC39/sunitinib cells was inhibited (Figure 5G).
FIGURE 3 PON1 was hypermethylated in renal tumour cells. A, PON1 was hypermethylated in tumour tissues using methylation-specific PCR. 'Pos' represents the positive control; 'Neg' represents the negative control. M: methylated; U: unmethylated. B, PON1 was confirmed to be hypermethylated in 786-O, Caki-2 and SKRC39 cell lines compared with HK-2 cells. The change in DNA methylation level was maximal in the SKRC39 cell line. **P < 0.01, ***P < 0.001, compared with the HK-2 cell line. C, The mRNA levels of PON1 in tumour cells and normal cells were analysed by real-time PCR. **P < 0.01, ***P < 0.001, compared with the HK-2 cell line. Each data point represents the mean value ± standard deviation (SD). D-E, The protein levels of PON1 in tumour cells and normal cells were determined by Western blotting. **P < 0.01, compared with the HK-2 cell line.
In addition, cell apoptosis was determined by TUNEL assay, and the results confirmed that 5-Aza-dC and TSA could promote apoptosis of RCC cells (Figure 5H). These in vitro experiments showed that re-expression of PON1 could restrain the migration, invasion and proliferation abilities of RCC cells and promote their apoptosis.
| 5-Aza-dC + TSA + sunitinib co-treatment group restrained RCC tumour growth in vivo
To verify the curative effect of the different treatments in RCC, tumour xenograft experiments and immunohistochemistry were employed for in vivo detection. The results are shown in Figure 6A-C: tumour size and weight were suppressed in the 5-Aza-dC and TSA groups, and the suppressive effect of the 5-Aza-dC + TSA co-treatment was more pronounced. Immunochemical staining verified that the Ki67 index was significantly down-regulated in the co-treatment group (Figure 6D-E). These results suggest that the co-treatment is a promising strategy for RCC therapy.
| DISCUSSION
In this research, we demonstrated that hypermethylated PON1 affects the oncogenesis of RCC. Through in vivo and in vitro experiments, the results showed that 5-Aza-dC and TSA could effectively block tumour growth. The co-treatment also lowered the methylation level of PON1 while raising its mRNA expression. Collectively, 5-Aza-dC and TSA could act synergistically to inhibit sunitinib-resistant RCC tumour growth by inducing PON1 re-expression.
FIGURE 4 5-Aza-dC and TSA co-treatment induced demethylation and re-expression of PON1. A, The minimum effective dose of 5-Aza-dC was determined by MTT, and 1 μmol/L showed a difference. *P < 0.05, **P < 0.01, compared with the 0 μmol/L group. B, The minimum effective dose of TSA was determined by MTT, and 100 nmol/L showed a difference. *P < 0.05, **P < 0.01, compared with the 0 nmol/L group. C, The minimum effective dose of sunitinib was determined by MTT, and 1 μg/mL showed a difference. *P < 0.05, **P < 0.01, compared with the 0 μg/mL group. D, 5-Aza-dC and TSA decreased the methylation level of PON1 compared with the sunitinib group, and the co-treatment group showed a more obvious trend. **P < 0.01 compared with the sunitinib group. E, 5-Aza-dC and TSA increased the expression level of PON1 compared with the sunitinib group, and the co-treatment group showed a more obvious trend. **P < 0.01 compared with the sunitinib group. Each data point represents the mean value ± standard deviation (SD).
FIGURE 5 PON1 inhibited cell migration, invasion and proliferation, and arrested more cells at the G0/G1 phase. A,B, The wound-healing assay showed that the migration distances of the 5-Aza-dC and TSA treatments were inhibited, and the co-treatment group displayed a more obvious trend. *P < 0.05, **P < 0.01, compared with the sunitinib group. C,E, The numbers of invasive cells in the 5-Aza-dC and TSA treatments were decreased, and the co-treatment group displayed a more obvious trend. *P < 0.05, **P < 0.01, compared with the sunitinib group. D,F, More cells were arrested at the G0/G1 phase in the 5-Aza-dC and TSA treatment groups, and the co-treatment group displayed a more obvious trend. *P < 0.05, **P < 0.01, compared with the sunitinib group. G, Cell proliferation was inhibited in the 5-Aza-dC and TSA treatment groups, and the co-treatment group displayed a more obvious trend. *P < 0.05, **P < 0.01, compared with the sunitinib group. H, Apoptotic cells were increased in the 5-Aza-dC and TSA treatment groups, and the co-treatment group displayed a more obvious trend (×40).
Nowadays, DNA methylation is a research hotspot in cancer, including renal cancers. The heterogeneity of DNA methylation was observed by Dugué et al, with stronger associations for the risk of kidney cancer. 26 Additionally, Kumar et al put forward the hypothesis that IQGAP2 and IQGAP3 are promising prognostic and therapeutic targets in specific cancers, including renal cancer, given the close connection between their methylation and cancer. 27 It has also been reported that the RPS6KA4/MIR1237 and AURKC promoter regions are differentially methylated in Wilms' tumour. 28 Analogously, we found that the methylation of PON1 was associated with the progression of RCC, which is also the basis of our research.
As for PON1 and DNA methylation, Fiorito et al found a general inverse relationship between B-vitamin intake and DNA methylation of PON1. 29 The methylation of PON1 has also been linked to vascular dementia, as confirmed by Bednarska-Makaruk et al. 30 Interestingly, the experimental group suffering from the disease also had a lower intake of vitamin B than the normal group.
Although studies on PON1 and DNA methylation are still scarce, the topic is forward-looking given the non-negligible role PON1 plays in the development of cancers, including RCC.
Furthermore, the purpose of combination therapy in tumour treatment is to enhance the curative effect and reduce the occurrence of adverse reactions. Based on the mechanisms of anti-tumour drugs and tumour cell proliferation kinetics, rational drug combinations have been a hot area of cancer treatment in recent years. For instance, one study found that CCDC69 was hypermethylated in ovarian cancer cells and that its inhibition might interfere with the effectiveness of combination therapy with platinum drugs, which suggests that combining 5-Aza-dC with other anti-cancer drugs may have a better effect. 31 To overcome drug resistance, some researchers explored a promising approach using 5-Aza-dC and the mTOR inhibitor everolimus in medullary thyroid cancer cells, which showed strong synergistic antiproliferative activity through the induction of apoptosis. 32 Certainly, studies of co-treatment in renal cancer are not rare.
Researchers have also studied how 5-Aza-dC and paclitaxel (PTX) synergize against renal cell carcinoma (RCC). 33 In our study, the drug combination of 5-Aza-dC and TSA combined with sunitinib also showed a better effect in suppressing the tumorigenesis of RCC based on the experimental results. Certainly, clinical trials are needed to verify this effect.
Hypermethylation of PON1 was present in the progression of RCC. Nonetheless, the in vitro experiments were limited, as we used only one RCC cell line among the three renal cell lines we studied. However, we believe the trend will be similar, and further investigation will be conducted in the future. Overall, our research showed that the DNA methylation of PON1, which may serve as a targeted biomarker in RCC, could influence the development of RCC.
FIGURE 6 The 5-Aza-dC + TSA + sunitinib co-treatment group suppressed tumour growth in vivo. A-C, Tumour growth was inhibited in the 5-Aza-dC and TSA treatment groups, and the co-treatment group displayed a more obvious trend. *P < 0.05, **P < 0.01, compared with the sunitinib group. D-E, The level of Ki67 was decreased in the 5-Aza-dC and TSA treatment groups, and the co-treatment group displayed a more obvious trend. *P < 0.05, **P < 0.01, compared with the sunitinib group.
| CONCLUSION
In short, this is the first study on the involvement of hypermethylated PON1 in the development of RCC, and it showed that DNA methylation is one of the key factors in the proliferation of renal cancer cells, especially RCC cells. Moreover, we have clarified how the DNA methylation of PON1, which may be a promising target for gene therapy, is involved in the tumorigenesis of RCC. In addition, 5-Aza-dC and TSA co-treatment for sunitinib-resistant RCC is a hopeful clinical therapy.
CONFLICT OF INTEREST
The authors confirm that there are no conflicts of interest.
AUTHOR CONTRIBUTIONS
XL and QY contributed to the research design and data acquisition; XL performed the analysis and interpretation of data; QY drafted the paper and revised it critically; XL and QY approved the submitted and final versions.
ACKNOWLEDGEMENT
We would like to express our deep appreciation to the Science and Technology Development Plan Project of the Jilin Provincial Science and Technology Department (No. 20160101032JC), which offered great help in our research process.
ETHICAL APPROVAL
The research was carried out according to the World Medical Association Declaration of Helsinki. Written informed consent was obtained from all participants. This study was approved by the Ethics Committee of the China-Japan Union Hospital of Jilin University.
DATA AVAILABILITY
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Blood Eosinophils and Exhaled Nitric Oxide: Surrogate Biomarkers of Airway Eosinophilia in Stable COPD and Exacerbation
In recent years, tremendous efforts have been devoted to characterizing the inflammatory processes in chronic obstructive pulmonary disease (COPD) in order to provide more personalized treatment for COPD patients. While it has proved difficult to identify COPD-specific inflammatory pathways, the distinction between eosinophilic and non-eosinophilic airway inflammation has gained clinical relevance. Evidence has shown that sputum eosinophil counts are increased in a subset of COPD patients and that these patients are more responsive to oral or inhaled corticosteroid therapy. Due to feasibility issues associated with sputum cell profiling in daily clinical practice, peripheral blood eosinophil counts and fractional exhaled nitric oxide levels have been evaluated as surrogate biomarkers for assessing the extent of airway eosinophilia in COPD patients, both in stable disease and acute exacerbations. The diagnostic value of these markers is not equivalent and depends heavily on the patient’s condition at the time of sample collection. Additionally, the sensitivity and specificity of these tests may be influenced by the patient’s maintenance treatment. Overall, eosinophilic COPD may represent a distinct disease phenotype that needs to be further investigated in terms of prognosis and treatment outcomes.
Introduction
Chronic obstructive pulmonary disease (COPD) is an inflammatory lung disease characterized by increased numbers of macrophages, neutrophils and T cells, and enhanced release of inflammatory mediators including cytokines, chemokines, growth factors, oxidants and vasoactive agents [1]. There is evidence that airway inflammation is further increased in acute exacerbations of COPD (AECOPD) defined as an acute, transient deterioration in patients' symptoms and lung function [2,3]. Nonetheless, COPD is heterogeneous in terms of symptoms and underlying inflammatory processes, and the patient's response to treatment, particularly inhaled corticosteroids (ICS), is variable.
In recent years, tremendous efforts have been made to elucidate the inflammatory pathways in COPD to better understand the molecular mechanisms underlying the disease and to lay the foundation for the use of biological agents targeting specific inflammatory pathways [4,5]. However, the identification of distinct inflammatory endotypes in COPD patients has been challenging, and the relationship between inflammatory mechanisms and clinical manifestations of the disease has remained uncertain in most cases [6]. Likewise, although different phenotypes of AECOPD have been characterized, this has had little impact on the treatment protocols used in the routine clinical setting for patients with AECOPD [7]. Rather than defining COPD-specific inflammatory pathways, several studies have documented increased sputum eosinophil numbers in a subset of COPD patients, both in stable disease [8] and in exacerbations [9]. Studies have also shown that patients in this eosinophilic subset are more responsive to oral or inhaled corticosteroid therapy.
Eosinophilic Airway Inflammation in COPD
It is well established that chronic airway inflammation plays a central role in the pathophysiology of COPD. Nonetheless, the airway inflammation in COPD is heterogeneous: the most common inflammatory phenotype is neutrophil-associated COPD with inflammasome, Th1 and Th17 activation, while in a minority of patients increased eosinophilic airway inflammation with increased Th2-transcriptome signature can be observed [20]. Exposure to cigarette smoke induces the recruitment of inflammatory cells into the airways and stimulates innate and adaptive immune responses [21]. Inflammatory changes occur both in the proximal and distal airways and in the lung parenchyma, where they manifest in the increased numbers of inflammatory cells, particularly neutrophil granulocytes, macrophages and CD8+ lymphocytes [22,23]. Activated leukocytes release several inflammatory mediators, i.e., cytokines, chemokines, growth factors, oxidants, proteases, and vasoactive and bronchoconstrictor agents, which in turn may further promote inflammatory cell migration into the airways and contribute to the development of the pathomorphological changes typical for COPD [24]. Inflammation is also present in the pulmonary artery wall and may be involved in the development of COPD-associated co-morbidities as well.
There is increasing evidence that oxidative stress is the main driving mechanism for the development of airway inflammation in COPD [25,26]. It has been proposed that apart from the burden of inhaled oxidants and reactive oxygen species generated in the airways, depletion of antioxidants may also be partly responsible for the oxidant/antioxidant imbalance that characterizes this condition.
Although the predominant feature of COPD is neutrophilic inflammation, there is evidence indicating that a subset of patients (10-30%) have increased numbers of eosinophils in the airways [8], as is typically seen in patients with asthma. The definition of sputum eosinophilia in COPD is variable in the literature, with most studies defining it as 2-3% of sputum leukocytes. It is well known that eosinophils are inflammatory cells consisting of bi-lobed nuclei and large acidophilic cytoplasmic granules, and are produced in bone marrow from CD34+ myeloid progenitor cells [27]. Upon maturation, eosinophils enter the systemic circulation and migrate primarily to the gastrointestinal tract and thymus. The circulating eosinophils are recruited into the airways by immunoregulatory cells and chemokines. The recruitment of eosinophils to the airways is under the control of specific chemokines and their cognate receptors and typically occurs in the context of a Th2 inflammatory response [1,22]. It is believed that the quality and the activation state of eosinophils may be more important than the absolute eosinophil number in the context of the inflammatory response [28].
From a clinical perspective, studies have shown that, compared to the neutrophil-predominant group, patients in the eosinophilic subgroup of COPD respond better to inhaled short-acting β2-agonists [29], benefit more from short-course inhaled [8] or oral corticosteroid therapy [10-12], and have a lower airway bacterial burden [13], as mentioned above. The number of eosinophils in the airways does not appear to correlate strongly with disease progression as determined by GOLD staging of the disease [30]. However, the SPIROMICS study investigating a large and well-characterized cohort of COPD patients has shown that patients with elevated sputum eosinophil counts have worse lung function and more pronounced emphysema than those with low sputum eosinophil counts [31]. Moreover, the high eosinophil count group had more frequent episodes of AECOPD requiring corticosteroid treatment than the low eosinophil group. Additionally, Siva et al. demonstrated that a COPD management strategy with the additional aim of reducing eosinophilic airway inflammation in stable COPD patients was associated with a reduction in subsequent AECOPD in these subjects [32].
Although AECOPD is typically associated with increased neutrophilic inflammation in the airways, eosinophilia may also be present in the sputum of some patients with AECOPD [9]. Our data indicate that the clinical characteristics of eosinophilic and noneosinophilic AECOPD patients are not alike; we have recently observed less purulent sputum and lower C-reactive protein (CRP) levels in eosinophilic subjects compared to non-eosinophilic AECOPD patients [33]. In terms of systemic inflammatory marker levels, similar findings were obtained by Csoma and co-workers as well [34]. These findings are consistent with the general view that eosinophilic exacerbations are triggered by viral infections, whereas exacerbations of bacterial origin may be associated with neutrophilic inflammation and elevation in systemic inflammatory markers [13,35]. In addition, we have demonstrated that AECOPD patients with sputum eosinophilia exhibit a more pronounced improvement in airflow limitation following treatment [14]. Overall, these data suggest that eosinophilic COPD has a distinct disease phenotype in both clinically stable states and AECOPD.
Assessment of Eosinophilic Airway Inflammation Using FENO
In recent years, FENO has been used extensively as a surrogate biomarker to determine and quantify the extent of airway eosinophilia in various respiratory diseases including COPD [16,17]. From a technological point of view, chemiluminescence-based analysis is the gold standard method for FENO measurement [36]. Although chemiluminescence analyzers are fast-reacting, highly sensitive and specific for nitric oxide gas, their size, cost and frequent need for calibration limit their penetration into routine clinical practice. To overcome these limitations, electrochemical sensors suitable for the detection of FENO in the exhaled breath have been developed and incorporated into handheld measuring devices. Data from our [37] and other [38,39] laboratories demonstrated that FENO values measured with such handheld devices are highly reproducible and in good agreement with those obtained with chemiluminescence analyzers, making them suitable for use in clinical practice.
The rationale for using FENO in the assessment and management of various respiratory diseases is based on two key factors: first, there is a highly significant relationship between FENO levels and the extent of eosinophilia in airway specimens such as induced sputum, bronchoalveolar lavage and biopsy materials; second, there is an equally important relationship between eosinophilic airway inflammation and steroid responsiveness [40]. Although these associations have been established primarily in patients with asthma, apparently they also apply to patients with other respiratory pathologies.
When looking at the stable COPD population as a whole, FENO levels are similar [41] or only slightly increased [42] compared to those in healthy controls, as demonstrated more than two decades before. This is not unexpected since, as mentioned above, COPD is typically associated with neutrophilic airway inflammation, and eosinophilia, which is associated with elevated FENO levels, is present only in a minority of patients. Early studies [43] suggested that FENO may also be a marker of disease severity in COPD; however, subsequent studies have not shown a consistent association between FENO levels and lung function impairment in COPD patients. More recently, extended NO analysis, i.e., the measurement of alveolar and bronchial NO, has been suggested as a more useful method to monitor nitrative stress at different anatomical sites within the airways in stable and exacerbated COPD patients [44].
Despite the correlation between FENO levels and the number of eosinophils in the airways, the predictive accuracy of FENO measurement for eosinophilia appears to be variable and often only modest. For example, Chou et al. found a sensitivity of 62% and a specificity of 71% for FENO measurement in identifying COPD patients with eosinophilic airway inflammation [45]. In our recent study, similar sensitivity (63%) but higher specificity (91%) values were observed when the cut-off value for sputum eosinophilia was set at 3% [33]. Importantly, the negative predictive value (NPV) of the test was high (93%), indicating that the clinical relevance of using FENO lies in its ability to reliably identify noneosinophilic subjects. FENO alone [46] or in combination with blood eosinophil count [47] has been implicated in the differential diagnosis of asthma-COPD overlap (ACO) and COPD as well. However, sputum analysis was not performed in these studies. Finally, it should be noted that FENO could also be a marker of steroid responsiveness in patients with COPD; however, its usefulness is limited to predicting an increase in forced expiratory volume in one second (FEV1) following a short course of oral corticosteroid treatment [48].
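For clarity, the reported values follow the standard definitions, with TP, FP, TN and FN denoting true/false positives/negatives against the sputum-eosinophilia reference standard:

\text{sensitivity} = \frac{TP}{TP + FN}, \qquad \text{specificity} = \frac{TN}{TN + FP}, \qquad \text{NPV} = \frac{TN}{TN + FN}

A high NPV (93% in our study) therefore means that a negative FENO result makes sputum eosinophilia unlikely.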
Concerning exacerbation, FENO levels are mostly elevated at the onset of AECOPD, while treatment of exacerbation or recovery from exacerbation leads to a decrease in FENO concentrations [49,50]. Additionally, we have demonstrated that FENO levels measured during hospitalization for AECOPD correlate closely with improvements in airflow limitation, as reflected by an increase in FEV1 after treatment [51]. Moreover, patients with higher FENO levels at the onset of AECOPD were typically discharged earlier, as a better functional response is associated with faster clinical recovery.
In a follow-up study, we further showed that there is a significant association between FENO level and the percentage or number of eosinophils in the sputum of AECOPD patients, and that FENO is a strong predictor of sputum eosinophilia (>3%) in AECOPD (area under the receiver operating characteristic curve [ROC AUC]: 0.89) [14]. Overall, these data suggest that measuring FENO may assist in selecting patients with AECOPD who have sputum eosinophilia and are potentially more responsive to treatment. Although we could not determine what was responsible for the better treatment outcome, we hypothesized that corticosteroids had a predominant effect, as suggested by several studies in stable COPD patients [10-12]. In contrast, in patients with clinically stable COPD, FENO measurement has only limited diagnostic value, as noted above. The current Global Initiative for Chronic Obstructive Lung Disease (GOLD) guideline also does not recommend the use of FENO measurement in the management of COPD patients [52].
There may be an association between FENO levels and the frequency of exacerbations in COPD. We found that COPD patients with low FENO levels during acute exacerbations were more susceptible to developing severe AECOPD subsequently, while those with elevated FENO levels during exacerbation were less likely to be hospitalized for AECOPD [53]. Although this was only a retrospective analysis of a relatively small number of patients, it is tempting to speculate that FENO measurement may also serve as a predictive tool for the incidence of exacerbations associated with hospitalization. We believe that is a promising area for further prospective investigation in the future.
Cigarette smoking lowers FENO levels in COPD patients and is widely considered an important confounder in FENO measurements that should be carefully assessed in all studies [16,17]. For example, Gao et al. recently reported a lower predictive value for FENO (ROC AUC: 0.73) than established by many other research groups in the evaluation of sputum eosinophilia in AECOPD [54]. However, in this study, a significant proportion of participants were active smokers, so the smoking status of patients could affect these results.
Another controversial issue that has been addressed in several recent studies is the effect of ICS therapy on FENO levels. Early studies indicated that ICS treatment may affect FENO levels [55,56], although there were also studies that showed no significant effect [57]. In a recent meta-analysis of only randomized clinical trials or two-arm controlled prospective studies, the authors concluded that FENO levels are significantly reduced with ICS treatment in ex-smokers with COPD, while the effect in smokers is less clear and needs further investigation [58]. Again, this suggests that the ICS status of the patient has to be considered carefully in all clinical studies.
Assessment of Eosinophilic Airway Inflammation Using Blood Eosinophils
Although the relevance of blood eosinophils in the management of COPD patients, particularly in guiding ICS therapy to prevent exacerbation, has been extensively investigated in previous years, the evidence on the relationship between local (airway) and systemic (blood) eosinophilia remains controversial. Negewo et al., for example, found that blood eosinophil counts predict sputum eosinophilia with relatively high accuracy in patients with stable COPD [59], while investigators of the SPIROMICS cohort concluded that blood eosinophilia alone is not a reliable marker of airway eosinophilia (or the eosinophilic COPD phenotype) despite the highly significant correlation between the two measures [31]. In this context, we found that blood eosinophil counts are a good indicator of airway eosinophilia in patients with stable COPD, but not in patients with ongoing exacerbation where the sensitivity of the test is poor (20-40%) [33]. This suggests that the diagnostic value of blood eosinophils depends on the patient's condition at the time of sample collection, as eosinophilic airway inflammation in AECOPD does not necessarily lead to systemic eosinophilia.
Despite these uncertainties, several post hoc analyses of randomized controlled trials demonstrate that blood eosinophil count is an independent predictor of response to ICS in patients with severe or very severe COPD and a history of exacerbations [19,28,60]. In line with this view, the current GOLD document provides therapeutic recommendations for blood eosinophil counts and advises that thresholds of <100 cells/µL and ≥300 cells/µL can be used to identify patients with a low and high likelihood of benefiting from ICS-containing therapy, respectively [52]. This is clinically very important, as it allows the blood eosinophil count to be used as a biomarker to maximize the benefit/risk ratio of ICS treatment and to move towards a personalized medicine approach in the management of COPD.
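Purely as an illustration of the GOLD thresholds quoted above (not clinical software; the category names are ours), the rule can be encoded as follows:

    // Illustrative encoding of the GOLD blood eosinophil thresholds [52].
    enum class IcsBenefit { LOW, INTERMEDIATE, HIGH }

    fun icsBenefitLikelihood(bloodEosinophilsPerMicrolitre: Int): IcsBenefit = when {
        bloodEosinophilsPerMicrolitre < 100 -> IcsBenefit.LOW
        bloodEosinophilsPerMicrolitre >= 300 -> IcsBenefit.HIGH
        else -> IcsBenefit.INTERMEDIATE
    }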
However, some investigators also emphasize that blood eosinophil counts should be considered as a continuum and evaluated in the context of other risk factors for exacerbations, and cut-off values should not be treated as explicit [61]. In the above-mentioned analysis [60], the authors modeled the number of eosinophils as a continuous variable to identify the characteristics that determine both the risk of exacerbation and the clinical response to ICS in patients with COPD. Following this approach, the data from the IMPACT trial were also analyzed, and the investigators found that the response to ICS-containing therapy was modulated by blood eosinophil count, namely that the benefits of ICS-containing treatments in terms of reducing the rate of moderate/severe and severe exacerbations increased with increasing blood eosinophil count [62]. This association was further adjusted for the smoking status of the patients; former smokers showed a greater benefit at all blood eosinophil counts than current smokers. These data emphasize the importance of smoking cessation with the observation that current smokers with lower blood eosinophil count (<200 cells/µL) did not appear to benefit from ICS-containing triple therapy over long-acting muscarinic antagonist (LAMA) plus long-acting β2-agonist (LABA) therapy, whereas former smokers showed benefits across the whole blood eosinophil continuum.
There are conflicting results in the literature on the effects of ICS on blood eosinophils: some investigators have found no difference between ICS users and non-users [13], while others have found reduced eosinophil counts as a result of ICS treatment [8]. A recent retrospective analysis of a clinical trial comparing the effects of various bronchodilators concluded that ICS has only a small effect on peripheral blood eosinophils in steroid-naïve COPD patients [63]. Furthermore, in a post hoc analysis of the ISOLDE trial, the authors found that changes in blood eosinophil count after ICS administration predicted clinical response to ICS therapy in patients with moderate-to-severe COPD at risk of exacerbation [64]. Interestingly, an increase in the number of exacerbations and an accelerated lung function decline was documented in the 20% of patients whose blood eosinophil levels increased after ICS administration.
It has been established that the stability of blood eosinophil count and the reproducibility of its measurement may be important confounding factors that may limit the use of this biomarker in the management of COPD patients. The issue has been intensively investigated in recent years. Landis et al., for example, explored the reproducibility of blood eosinophil counts in a large cohort of stable COPD patients over 1 year and concluded that reproducibility was good, although eosinophil counts were variable in a subset of patients [65]. Negewo et al. also assessed the stability of blood eosinophil counts in patients with stable COPD over a median 28-day period and found good agreement between the two measurements [59]. Oshagbemi and co-workers also conducted a study to estimate the reproducibility of eosinophil counts in general practice and concluded that stability after 6 months was around 85% [66]. Two years later, this rate declined to 62%, with a further progressive decline in the subsequent years.
However, other studies have found lower reproducibility of blood eosinophils in COPD patients. For example, in the COSYCONET study, an analysis of a subgroup of COPD patients over routine examinations at 0, 6 and 18 months revealed that 26% of patients were persistently non-eosinophilic, while only 5% remained eosinophilic at all three visits [67]. Similarly, when blood (or sputum) eosinophilia was defined as ≥2% in a retrospective analysis of the ECLIPSE study, only 37.4% of COPD patients exhibited stable blood eosinophilia [68]. Another important question is whether the variability of blood eosinophil count results in the crossing of a given threshold that would assign an individual to a different ICS response category. Investigating this point, Southworth et al. reported that over repeated eosinophil measurements (6 months and >2 years) the majority (>86%) of patients remained in the same category, even at a threshold as low as 150 cells/µL [69]. Nonetheless, higher blood eosinophil counts are associated with higher variability, as indicated by this [69] and other [70] studies.
Maintenance therapy of patients, in particular, ICS-containing treatments, may impact the stability of blood eosinophil count, as they reduce the number of eosinophils, which in turn may improve the reproducibility. However, a variety of other factors such as atopy, co-morbidities, medications, or infections may also affect the reproducibility of eosinophil counts in the blood, especially in the long term.
There is evidence that patients with elevated eosinophil counts during exacerbations respond more favorably to systemic corticosteroid therapy [71,72] and have lower rates of early treatment failure [73]. Whether increased eosinophil counts during AECOPD are associated with recurrent exacerbations is unclear, as studies investigating the relationship between eosinophil counts or ratios and the frequency of exacerbations have revealed conflicting results [73][74][75]. A recent prospective study that addressed this question in a cohort of patients hospitalized for severe AECOPD has indicated that patients with eosinophilic exacerbations did not have an increased risk of early, moderate or severe relapses [34].
Nonetheless, it appears that the blood eosinophil count in patients hospitalized with severe AECOPD is of clinical relevance. In support of this concept, MacDonald and coworkers recently reported that patients with low eosinophil counts (<50 cells/µL) at the onset of AECOPD were more likely to have pulmonary infection, longer hospital stays and lower 12-month survival than those with elevated eosinophil counts (>150 cells/µL) [76].
Finally, it should be noted that in addition to their role as biomarkers for ICS responsiveness, blood eosinophils may also have relevance in predicting future COPD exacerbations. However, a recent study that examined pooled data from 11 clinical trials found no clinically important association between baseline blood eosinophil count and exacerbation rate, so these results do not support the use of blood eosinophils to predict exacerbation risk [77].
Is It Worth Combining the Measurement of FENO and Blood Eosinophil Count?
Since FENO and blood eosinophils are regulated by different inflammatory pathways, it is reasonable to consider whether simultaneous measurement of the two surrogate markers would improve the chances of identifying patients with eosinophilic airway inflammation.
Nevertheless, there are only a few reports that explore this possibility. Colak and colleagues analyzed data from the Copenhagen General Population Study of 4677 individuals with chronic respiratory symptoms after dividing them into six groups based on previous respiratory disease diagnoses, and found that the levels of the two biomarkers were not predictive of airway disease type, and combining them did not provide any benefit [78]. In our experiments, the combination of FENO and blood eosinophil counts resulted in increased sensitivity but decreased specificity and PPV in predicting airway eosinophilia in stable COPD compared to blood eosinophil count alone [33]. Similarly, in AECOPD, the combination of the two measurements did not result in a significant improvement in diagnostic accuracy compared to FENO alone. However, as mentioned before, the combined measurement may have potential in the diagnosis of ACO by improving the overall sensitivity of the test to above 90% [79].
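The trade-off described above (higher sensitivity but lower specificity and PPV when two markers are combined under an "either positive" rule) follows directly from how these metrics are computed from a 2×2 table. The sketch below, using hypothetical counts rather than the cited data, illustrates the calculation.

```python
# Illustrative: sensitivity, specificity and PPV from a 2x2 table.
# tp/fp/fn/tn are hypothetical counts of patients with/without airway
# eosinophilia (reference standard, e.g., sputum eosinophils) who test
# positive/negative on a surrogate marker.

def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),   # true positives among diseased
        "specificity": tn / (tn + fp),   # true negatives among non-diseased
        "ppv":         tp / (tp + fp),   # diseased among test-positives
    }

# Single marker (hypothetical counts):
print("blood eosinophils alone:", diagnostic_metrics(tp=30, fp=10, fn=15, tn=45))
# An "either marker positive" rule typically converts some false negatives
# into true positives but also adds false positives:
print("FENO or eosinophils:    ", diagnostic_metrics(tp=40, fp=22, fn=5, tn=33))
```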
Conclusions
COPD is a chronic respiratory disease characterized by persistent respiratory symptoms, impaired quality of life, chronic airway inflammation, and, in most cases, a progressive decline in lung function. There is evidence that a minority of patients have an eosinophilic rather than a neutrophilic type of airway inflammation, and these patients have a distinct disease phenotype. FENO and blood eosinophil count are both useful surrogate markers of airway eosinophilia in COPD, but, as summarized in Table 1, each test has advantages and disadvantages. The clinical applicability of FENO and blood eosinophil measurements (Figure 1) depends, at least in part, on the condition of the patient. The blood eosinophil count can help clinicians identify patients with stable COPD who would benefit most from maintenance therapy with ICS, while FENO can be valuable in selecting patients with AECOPD who are more responsive to acute treatment. The literature is equivocal on whether the percentage or the absolute eosinophil count is more predictive of airway eosinophilia. The GOLD document recommends using the absolute eosinophil count with thresholds of 100 and 300 cells/µL; therefore, clinicians have to rely on this parameter. Furthermore, FENO may aid in predicting response to inhaled β2-agonists and/or oral corticosteroids in stable COPD. Both markers may assist the clinician in the differential diagnosis of ACO and COPD.
The concept of treatable traits is a precision medicine approach that has recently been proposed for the management of obstructive airway diseases, including COPD [80,81]. It should be considered as a model of care in which the patient undergoes a multidimensional assessment to identify clinically relevant and treatable problems (traits). As airway eosinophilia is one of the key treatable traits in this model, measurement of both FENO levels and blood eosinophil counts could be useful in the management of COPD patients based on the clinical aspects summarized in Figure 1. Nonetheless, the current GOLD document favors eosinophil count over FENO in clinical decision-making; thus, at present, the measurement of blood eosinophil levels is primarily aimed at guiding the use of ICS-containing therapies (ICS/LABA or ICS/LABA/LAMA) to prevent exacerbations in patients with multiple exacerbations.
In conclusion, further studies are needed to explore the clinical benefit of measuring these biomarkers in the management of patients with COPD. In particular, studies that include sputum analysis to provide reliable information on the local inflammatory cell profile of the airways would have real clinical relevance.
Causes of sudden neonatal mortality disclosed by autopsy and histopathological examination
The neonatal period, or the first 28 days of life, is the most vulnerable time in a child’s life. Neonatal mortality has decreased in recent years. However, this progress varies at the national level, which necessitates actual regional data from different countries to identify local handicaps for life-saving precautions. This study aimed to investigate the causes of neonatal deaths as revealed by autopsy and histopathological examinations. A retrospective cross-sectional study was designed to identify the main causes of neonatal deaths in children who were autopsied at our institution between January 1, 2014, and December 31, 2021. Children who died within the first 28 days after birth (1–28 days of age) were referred to as neonatal cases. The main causes of neonatal death were determined via autopsy and histopathological and toxicological examinations. Furthermore, the causes of death were classified according to the manner of death. During this period, 122 neonatal children were autopsied at our institution. This group comprised 57 girls and 65 boys. Regarding the manner of death, natural causes were the most common (n = 91, 74.5%). Among natural causes, pneumonia (n = 66) was the leading one, representing 54% of all neonatal deaths, followed by perinatal conditions (n = 16, 13.1%). Another leading cause of death was sudden unexpected postnatal collapse (n = 24, 19.6%), which was categorized under the undetermined group with respect to the manner of death. Unintentional (accidental) deaths accounted for 0.8% (n = 1) of total deaths, and intentional deaths accounted for the loss of 6 neonates (4.9%). This study shows that newborn children still die from simple and treatable infectious causes, probably arising from various familial and/or public inadequacies. In addition, sudden and unexpected postnatal collapse remains an important cause of neonatal mortality that has yet to be fully resolved. This study points out valuable inferences for caregivers and competent authorities to take preventive measures to prevent avoidable neonatal deaths.
Introduction
The neonatal period, the first 28 days of life, is the tenderest period of childhood. Children face the highest risk of dying in their first month of life, with an average global rate of 17 deaths per 1000 live births in 2019 [1]. In recent years, neonatal mortality has been decreasing [1,2], but this progress varies at the national level, necessitating actual regional data for different countries. Disclosing the main medical conditions, injury motives, and insults that affect neonates will help build protection strategies nationwide [3]. According to a study published in 2017, Turkey achieved notable improvements, reaching targets for under-5 mortality, neonatal mortality, and the maternal mortality ratio between 2000 and 2016. Reducing the number of neonatal deaths is an important part of the United Nations' Sustainable Development Goals [4]. However, child mortality rates are still high in low- and middle-income countries. Different studies have pointed to infectious causes as the leading cause of neonatal mortality [5]. Several other factors should also be considered, however: injury-related deaths and undetermined cases (those in which the cause of death cannot be established exactly) are equally concerning.
The objective of this retrospective cross-sectional study was to reveal the leading causes of neonatal mortality from our experience and to contribute to the development of intervention strategies in our country and around the world.
I believe that the data documented in this study are significant and concerning. These data reflect the burden borne by the most vulnerable demographic in society.
Material and methods
In this study, a retrospective cross-sectional design was used to identify and classify the main causes of neonatal deaths in children who were autopsied at our institution between January 1, 2014, and December 31, 2021. The authors have no funding or conflicts of interest to disclose.
These autopsied neonates were those whose deaths were unclear, sudden, unexpected, or otherwise problematic according to the public prosecutor's office, the authority legally competent to order autopsies.
All autopsies received by the institute were handled in accordance with the complete medico-legal autopsy procedure. On the other hand, an autopsy started as a medico-legal autopsy by a forensic expert sometimes turned out to be a purely clinical autopsy in cases of sudden death. When ante-mortem efforts had failed to establish the cause of death, widespread sampling of possible foci of infection was added to the procedure.
Data on deaths were obtained from the archives at our institution. Children who died within the first 28 days after birth (1–28 days old) were designated neonatal. Only live births were included in this study; stillbirths were excluded.
The manner of death is classified based on the type of condition that causes death and the circumstances under which it occurs. The categories of the manner of death are: natural, unintentional (accident), intentional (homicide, suicide), and undetermined (or "could not be determined") [6]. The causes of death were classified according to this definition.
All injuries were classified according to the International Statistical Classification of Diseases and Related Health Problems, version 10 (World Health Organization) [7]. In this classification schema, unintentional injuries are identified as road injuries, poisoning, falls, fire, heat and hot substances, drowning, and exposure to mechanical forces. Intentional injuries include self-harm, interpersonal violence, collective violence, and legal intervention. Accidental threats to breathing (suffocation, choking, and strangulation) and complications of medical or surgical care constitute other unintentional injuries.
This study was reviewed and approved by our review board and ethics committee (No. 21589509/2018/971) on December 25, 2018.
The data that support the findings of this study are available from our institution, but restrictions apply to the availability of these data, which were used under license for the current study and so are not publicly available. However, the data are available from the authors upon reasonable request and with permission from our institution.
During the study, care was taken not to violate the principle of confidentiality and to protect the personal or institutional data obtained from the subjects.In accordance with the ethical rules, the identities of the subjects involved in the study were kept confidential.
Analyses were performed using the Statistical Package for the Social Sciences, Version 29.0 (IBM SPSS Statistics for MacOS, Armonk, NY). Nominal variables were reported as n (percentages) and compared using a two-tailed chi-square test. The P value was set at <.05 for statistical significance.
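For readers who want to reproduce this kind of comparison outside SPSS, the sketch below runs a chi-square test of independence on a small contingency table. The column totals match the study's overall counts by manner of death, but the split across the two periods is a hypothetical placeholder, not the study's data.

```python
# Illustrative chi-square test of independence (manner of death x period).
# Column totals match the study's overall counts; the row split is hypothetical.
from scipy.stats import chi2_contingency

table = [
    # natural, undetermined, intentional, unintentional
    [48, 13, 3, 1],   # 2014-2017 (hypothetical split)
    [43, 11, 3, 0],   # 2018-2021 (hypothetical split)
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.3f}")
# p >= .05 would indicate no statistically significant association,
# matching the pattern reported in Tables 2 and 3.
```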
Results
This study covers the 8 years from 2014 to 2021. During this period, a total of 122 neonatal autopsies were performed at the institute; since this is the only institution in Ankara authorized to perform neonatal autopsies, this figure represents all neonatal autopsies performed in the province during that time.
According to data from the Turkish Statistical Institution, 4119 neonatal deaths occurred in Ankara between 2014 and 2021 [8]. Consequently, approximately 3% of neonatal deaths (122/4119) underwent autopsy during this interval.
The study group consisted of 57 girls and 65 boys. Seventy-four of the children came from the city center and 48 from small towns.
Natural deaths were responsible for 74.5% (n = 91) of total deaths. The most common cause of natural death was pneumonia (n = 66), representing 54% of all neonatal deaths. These children had no prior medical conditions other than undiagnosed and untreated pneumonia, which was diagnosed after autopsy via microscopic examination of the lungs by a pathologist.
Other significant natural causes were perinatal conditions, accounting for 13.1% (n = 16) of all neonatal deaths. Documented perinatal conditions included premature birth and respiratory distress syndrome (RDS) (n = 7), umbilical cord entanglement (n = 1), meconium aspiration (n = 6), and perinatal hypoxic-ischemic encephalopathy (n = 2). Some children in this group also had pneumonia; however, because their pneumonia was secondary to the primary medical condition, they were classified under the perinatal conditions group.
Three children in the natural death group had fatal congenital heart diseases; 1 died of myocarditis, 1 of diarrhea, and 1 of complications of metabolic disorders. Twenty-four (19.6%) cases were found dead in their cribs. No medical reason or direct force that could explain their deaths was found. Microscopic examination of the lungs revealed that 9 of these children had aspirated human milk or baby formula. These cases were classified as sudden and unexpected postnatal collapse (the SUPC group, which included suspected accidental suffocation and strangulation in bed and ill-defined causes of death). Considering the manner of death, these cases were categorized as undetermined.
Unintentional (accidental) deaths accounted for 0.8% (n = 1) of total deaths; this child died of severe burns after being trapped in a house fire. Among intentional deaths (n = 6, 4.9%), one newborn was abandoned to die in a trash bin and 5 were murdered (1 by penetrating and 4 by blunt trauma), so that a total of 6 children were victims of abuse.
Causes of death according to the manner are summarized in Table 1.
The results of the chi-square analysis examining the relationship between the number of deaths by year, gender, and manner of death are given in Table 2. The analysis showed no statistically significant relationship between the number of decedents by year and either gender (P = .753) or manner of death (P = .603).
The distribution of the number of deaths over the years and the main cause of death for each case are presented in Table 3. The analysis showed no statistically significant relationship between the number of deaths by year and the main causes of death (P = .332).
Discussion
The neonatal period covers the first 28 days after birth [11,12]. In this study, natural deaths were responsible for 74.5% (n = 91) of total deaths. The most common natural cause was pneumonia (n = 66), representing 54% of all neonatal deaths. These decedents had no other preexisting medical conditions. The etiologies of these pneumonia cases were viral and/or bacterial infections.
Pneumonia is the leading cause of child mortality, causing between 152,000 and 490,000 infants to die before the age of one. This is a serious global health burden for developing countries. Immature lung immunity, nonspecific and usually subtle symptoms, and the rapidly advancing nature of the infection make it difficult for parents to recognize and act on, which makes early recognition an important task for clinicians [13]. Caregivers must be informed by professionals about the warning signs, particularly new couples who become parents for the first time at an early age. Overall, a significant proportion of neonatal deaths due to pneumonia occur in developing countries; the main cause of community-acquired pneumonia is Streptococcus pneumoniae, and the main viral cause is respiratory syncytial virus [14]. According to my study's findings, perinatal conditions (n = 16; premature birth and RDS, umbilical cord entanglement, meconium aspiration syndrome, and perinatal hypoxic-ischemic encephalopathy) were significant factors, responsible for 13.1% of neonatal deaths.
Infections, asphyxia, and prematurity are perinatal-related events that are important contributors to neonatal death, especially in developing regions of the world [15]. As practiced in high-income countries, applying antenatal and perinatal controls and identifying critical cases so they can be referred to specialized pediatric clinics is essential, because the time a newborn loses is priceless and too costly for both the baby and society. Taking simple steps, such as contraception, vaccination of pregnant women, hygienic delivery at the hospital, and training healthcare workers in resuscitation, will reduce preventable deaths in the neonatal period [10]. The second leading cause of death in this group was sudden unexpected postnatal collapse of the newborn, also called sudden unexpected death in early infancy (n = 24), representing 19.6% of all deaths. Considering the manner of death, these cases were categorized as undetermined. The children in this study had no prior medical problems and mostly shared the same story of being found dead in bed. No signs of trauma or toxic substances were observed during the screening tests.
In this study, histopathological examination showed that 9 children had aspirated breast milk or baby formula without any signs of pneumonia. In the SUPC group, evidence of physiological stress, expressed as hemorrhage, was detected in the thymus and/or lungs. Although there was no clear evidence of any medical problem, some decedents had a history of prodromal illness. Bed sharing, heavy lungs, and death during sleep were other features described by some authors, and I also observed them in this study [16]. Approximately one-third of SUPC events occur during the first week of the neonatal period. Mild gliosis of the brainstem, which controls the cardiorespiratory system, has been observed in some cases. These authors suggest that any ischemic insult to these areas, or their immaturity, can make newborns vulnerable to the known risk factors for SUPC [17]. Although no brainstem gliosis was observed on microscopic examination in my study, some decedents were found to have white matter gliosis at autopsy.
A study from the United States revealed low levels of butyrylcholinesterase-specific activity in the blood of decedents classified as sudden unexpected death in infancy. As butyrylcholinesterase is an enzyme of the cholinergic system, the authors suggest that it could be a good predictive measure of autonomic (dys)function in the newborn, which may confer a specific vulnerability to death [18]. This vulnerable phase must be well explained to caregivers to raise awareness of the risks, and close surveillance of newborns should be encouraged.
In total, accidental deaths accounted for 0.8% (n = 1) of all deaths; one child died of severe burns after being trapped in a house fire. In this study, intentional deaths represented 4.9% (n = 6) of all deaths. Intentional harm toward children is, unfortunately, a shared problem for many countries; it mostly remains undiscovered, and the true number of fatal cases is unknown [19]. Neonaticide (killing an infant shortly after birth, usually on the first day of life) and infanticide (the murder of a child aged 1 day to 1 year) are generally committed by the victim's mother [20]. A systematic review of the incidence of neonaticide in Europe found that young maternal age, being unmarried, and primiparity are common features shared by these perpetrators [21]. One study from Turkey on the sociodemographics of mothers who abandoned their newborn babies at the hospital also found that low education level, being unmarried, and/or being unemployed were the most important shared characteristics [22]. Another study from the United States pointed to racial/ethnic disparities as a disadvantage for victims; studies have shown that male and non-Hispanic black infants are more likely to be victims of infanticide and neonatal death [23]. According to data from the Turkish Statistical Institute, the neonatal death rate decreased over the years, from 7.3 in 2014 to 6 in 2021 [24]. Looking at UNICEF's neonatal mortality statistics for Turkey, although the numbers of deaths per year do not exactly overlap with those of the Turkish Statistical Institute, there has similarly been a decrease in the neonatal death rate over the years, from 7 in 2014 to 4 in 2021 [25]. In proportion to this, the number of neonatal cases autopsied at our institution has decreased over the years (Table 4).
Unfortunately, the causes of neonatal deaths are not detailed in the statistics of the Turkish Ministry of Health or the Turkish Statistical Institute, which increases the value of this study [26,27]. In addition, as in many countries, the "underlying cause of death" used in the classification of infant mortality in Turkey is not compatible with the International Statistical Classification of Diseases and Related Health Problems, version 10, established by the World Health Organization; therefore, the results of the Ministry of Health and the Statistical Institute are not fully reliable. Moreover, since the information provided in both sources subsumes the neonatal period within the infant period, records specific to the neonatal period could not be accessed. For this reason, the results of this study could not be compared with national data on the causes of neonatal mortality in Turkey.
There are only a few studies investigating the causes of neonatal mortality in Turkey. In a study of neonatal mortality in Turkey, evaluated together with infant mortality, the most common causes of death were prematurity, congenital anomalies, congenital heart diseases, sepsis and perinatal asphyxia, pneumonia, RDS, and sudden unexpected death in infancy [28]. In a 2018 study of infant mortality conducted in Adana, sepsis was the most common cause of death, followed by prematurity and problems related to prematurity (12.6%), congenital heart disease and its complications (12.6%), respiratory problems (11.9%), congenital anomalies (9.8%), immaturity (<750 g birth weight or 24 gestational weeks) (8.4%), and sudden death (6.0%) [29]. In another study, conducted in Bursa and analyzing the causes of neonatal deaths between 2010 and 2012, 68.2% of infant deaths occurred in the newborn period; the main causes were prematurity (36.3%), congenital malformations and chromosomal diseases (34.3%), perinatal causes (12.9%), and sudden infant death syndrome (6.2%) [30]. In all of the above studies, the neonatal period is reported mixed with infant deaths. Since the present study was a postmortem study, the number of deaths due to pneumonia was much higher than in the hospital-based studies above. Most likely, the pneumonia cases in my study represent cases that somehow went unnoticed by parents and doctors, so that their deaths were eventually judged problematic by the prosecutors.
The rate of sudden unexpected neonatal collapse was high in my study compared with hospital studies, as these deaths were unexplained and were therefore frequently referred for autopsy.
A verbal and social autopsy study in Indonesia found that the most common causes of neonatal death were prematurity and perinatal complications; however, as this was a verbal autopsy, no further diagnosis could be made [31]. Another verbal autopsy study, from the United States, reported that delays in access to health care were closely associated with neonatal death but did not elaborate on the causes of death [32]. As a pure hypothesis, inadequate access to health care may also contribute to the high infection-related mortality in the present study.
In a verbal autopsy study from Kenya, perinatal asphyxia in the early period and infectious causes in the late period were found to be the most common causes of neonatal death [33]. In the present study, since the number of cases fitting the early period was very small (n = 7), the causes were not analyzed separately for early and late periods. Globally, sub-Saharan Africa has the highest rate of neonatal mortality, accounting for 38% of global neonatal deaths [34]. In a verbal autopsy study conducted in Ghana, in this region, asphyxia and prematurity were found to be the most common causes of neonatal death.
In low- and middle-income countries, verbal autopsy is performed instead of full diagnostic autopsy owing to infrastructural deficiencies and religious and cultural reasons, and its reliability is limited [35]. In a neonatal autopsy study conducted in Brazil, postmortem pathological diagnosis added new findings to the neonatologist's diagnosis in 34% of cases; in 6.9% of cases, relatives were referred for genetic counseling; and in 9.6% of cases, findings were reached that would have changed the treatment modality had they been caught pre-mortem [36]. Therefore, the present study, which provides complete diagnostic autopsy results, is a useful contribution to the literature.
This study centers only on uncertain, sudden, unexpected, or problematic neonatal deaths. This may introduce selection bias, as the sample may not be representative of all neonatal deaths; such a focus may skew the results toward causes of death that are more likely to be uncertain or sudden, potentially limiting the generalizability of the findings to the wider population of neonatal deaths. Nevertheless, this feature of the study has revealed topics that are overlooked or underemphasized in the clinic. For instance, the rate of pneumonia in the present study, which is higher than in hospital-based studies, shows colleagues what may have been overlooked. Also, since all SUPC cases were autopsied, the study reminds clinicians of the reality of SUPC, which is not often considered in the clinic.
Conclusions
This study shows that newborn children still die from simple and treatable infectious causes, probably arising from various familial and/or public inadequacies. Health services should be made more accessible and widespread during a vulnerable period such as the newborn period. In addition, colleagues should be more alert to neonatal pneumonia and consider hospitalizing and monitoring patients closely.
Furthermore, SUPC remains an important cause of neonatal mortality that has yet to be fully resolved. Therefore, this delicate period should be properly explained to parents during their doctor visits, and close surveillance of the baby should be encouraged.
I believe this study offers valuable inferences for caregivers and competent authorities seeking to take preventive measures against avoidable neonatal deaths.
In my opinion, larger case series are also needed to clarify further details.
This study did not receive any specific grants from funding agencies in the public, commercial, or not-for-profit sectors.
Table 2
Manner and gender distribution of the deaths by year.
Table 3
Distribution of main causes of the deaths by year.
Table 4
Number of cases and manner of deaths by years.
Characterization of Leptin Levels in Gestating Callipyge Ewes
The callipyge mutation in sheep is a polar overdominant mutation that results in post-natal muscle hypertrophy in the loin and hindquarters of paternal heterozygotes (+/CLPG). Sheep that are homozygous for the callipyge allele (CLPG/CLPG) do not express the muscle hypertrophy phenotype, but serve as carriers for the mutation. Callipyge sheep are characterized by improved feed efficiencies and leaner carcasses. Leptin is a protein hormone secreted from adipose tissue and has been found to affect appetite and serve as an indicator of body fat mass. To date, very little knowledge is available as to the effect of the callipyge mutation on circulating leptin levels. Due to the interaction of leptin with feed intake and energy availability, and the fact that the majority of fetal growth occurs in late gestation, it is important to understand whether the callipyge mutation interacts with leptin production in late gestational ewes. Therefore, our objective was to characterize serum concentrations of leptin in late gestational callipyge ewes vs. non-callipyge ewes. We evaluated genetically verified callipyge (n = 6), homozygous (n = 8) and normal (n = 8) ewes weekly during the last eight wks of gestation through one wk post-partum. Weights were taken and body condition scores were assigned by trained personnel weekly. Blood was collected via jugular venipuncture on each sampling date and subjected to an ovine-specific leptin RIA. Genotype influences on peripheral concentrations of leptin were found to be highly significant (p = 0.0005). Total leptin means were 5.41 ± 0.40 ng/ml for +/CLPG, 8.11 ± 0.70 ng/ml for CLPG/CLPG, and 9.13 ± 0.93 ng/ml for +/+. Sampling date was also significant (p = 0.0098), with all ewes showing a decrease in leptin levels throughout gestation and parturition. Using repeated measures, we were able to detect lower levels of plasma leptin in callipyge ewes, which may be indicative of their lower overall body fat content. These results indicate that the callipyge phenotype decreases the levels of adipose tissue and leptin production in gestating ewes.
INTRODUCTION
Leptin is a 16-kDa protein hormone secreted from white adipose tissue, which influences hypothalamic mechanisms regulating appetite and energy balance (Zhang et al., 1994; Halaas et al., 1995; Vega et al., 2004). In species tested thus far, including sheep, plasma leptin levels are highly influenced by the amount of adipose stored, body condition score, and physiological status (pregnancy/parturition). High levels of plasma leptin are associated with high levels of adiposity and cause a decrease in appetite and an increase in activity, ideally burning excess fat stores (Friedman and Halaas, 1998; Delavaud et al., 2000; Estienne et al., 2000; Buff et al., 2002; Vega et al., 2004b; Heravi Moussavi et al., 2006). In human pregnancy studies, plasma leptin levels have been shown to peak at 22 to 27 weeks of gestation and decrease thereafter through the third trimester (Sattar et al., 1998; Tamura et al., 1998).
Callipyge sheep are characterized by postnatal muscle hypertrophy and decreased adiposity (Cockett et al., 1994; Jackson et al., 1997). Callipyge sheep typically have lower daily feed intakes and higher feed efficiencies, and at slaughter weight they exhibit higher dressing percentages and less subcutaneous fat than non-callipyge sheep of the same weight (Jackson et al., 1997). Research efforts devoted to elucidating the mechanisms that account for the abundant muscle hypertrophy and decreased adipose levels in callipyge sheep have been extensive. There is, however, very little data examining the relationship between callipyge sheep in late gestation and early parturition and circulating levels of leptin. Therefore, we sought to characterize serum concentrations of leptin in callipyge vs. non-callipyge sheep to test our hypothesis that serum leptin levels differ with the expression of the mutation.
Animals
Twenty-four multiparous Rambouillet ewes were selected for this study. All animals were visually and genotypically analyzed to determine callipyge categories. Treatment groups consisted of eight paternal heterozygous callipyge ewes (+/CLPG), eight homozygous ewes (CLPG/CLPG; these do not exhibit the phenotype), and eight normal ewes (+/+). The maternal heterozygous genotype (CLPG/+) was not used in this study due to a lack of availability. Ewes were housed in partially covered dirt-floor pens and maintained on an isocaloric/isonitrogenous ration consisting of 0.45 kg whole corn and 0.23 kg soybean meal per head per day, with ad libitum access to large round bales of Sudan hay. This experimental design was reviewed and approved by the Texas Tech University Animal Care and Use Committee (03013-02).
Data collection
Blood samples, body weights, and body condition scores (BCS) were collected weekly for 8 wk prior to lambing, within 24 h of lambing, and 5 d after lambing. Gestational measurements were adjusted for each ewe according to her actual lambing date. Two callipyge ewes were dropped from the study due to a lack of offspring. One normal ewe and one homozygous ewe experienced lamb death due to dystocia; therefore, data from these ewes were included in only the gestational portion of the analysis. Sampling of ewes for body weights, blood samples, and trained BCS measurements (according to the method of Russel, 1991) was performed at mid-morning, prior to grain supplementation, without restricting the ewes from feed or water. Blood samples were taken via jugular venipuncture using 10 ml Vacutainers. Upon collection, blood samples were stored at 4°C for transportation to the Texas Tech University Meat Science and Muscle Biology Laboratory and then centrifuged at 1,250×g for 20 min within 4 h of collection. Serum was collected, stored in 1.8 ml microcentrifuge tubes, and maintained at -80°C for further analysis. Serum concentrations of leptin were determined in triplicate by the radioimmunoassay procedures described by Delavaud and coworkers (2000).
Statistical analysis
Statistical analyses were conducted using a completely randomized design with repeated measures. Data were analyzed using the General Linear Model procedure of SAS (SAS Inst., Cary, NC), with genotype and week of gestation as variables and BCS, number of lambs born, and body weight as covariables. Means at each week of gestation were separated using a Student's t-test. Animal was designated as the experimental unit. Trendlines were produced across time for each genotype in the study using the least squares fit option in MS Excel (Microsoft Office, 2000).
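As a rough modern analogue of the SAS GLM repeated-measures analysis described above, the sketch below fits a linear mixed model with genotype and week as fixed effects, BCS and lambs born as covariates, and a random intercept per ewe. The data frame, file name, column names, and the use of statsmodels are our own illustrative assumptions; the original analysis was run in SAS.

```python
# Illustrative mixed-model analogue of the repeated-measures GLM above.
# Column names and data file are assumed; the original analysis used SAS.
import pandas as pd
import statsmodels.formula.api as smf

# df must contain one row per ewe per sampling week:
#   ewe_id, genotype ("callipyge"/"homozygous"/"normal"),
#   week (weeks relative to lambing), bcs, lambs_born, leptin (ng/ml)
df = pd.read_csv("ewe_leptin.csv")  # hypothetical file

model = smf.mixedlm(
    "leptin ~ C(genotype) + week + bcs + lambs_born",
    data=df,
    groups=df["ewe_id"],  # random intercept per animal (the experimental unit)
)
result = model.fit()
print(result.summary())  # genotype coefficients test the effect reported below
```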
RESULTS
Genotype significantly affected overall mean serum concentrations of leptin among ewe blood samples across the weekly samplings prior to lambing, the samples within 24 h of lambing, and the samples 5 d after lambing (Table 1; p = 0.0005). Total serum concentrations of leptin among callipyge, homozygous, and normal ewes were 5.41 ± 0.40, 8.11 ± 0.70, and 9.13 ± 0.93 ng/ml of serum, respectively (Table 1). The intra-assay coefficient of variation was 0.11.
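The intra-assay coefficient of variation quoted above is simply the within-replicate standard deviation divided by the mean. The sketch below computes it for one hypothetical triplicate RIA measurement; the values are placeholders, not the study's data.

```python
# Illustrative intra-assay CV for a triplicate RIA measurement.
# Values are hypothetical placeholders (ng/ml).
from statistics import mean, stdev

triplicate = [5.2, 5.6, 4.9]
cv = stdev(triplicate) / mean(triplicate)
print(f"intra-assay CV = {cv:.2f}")  # the study reports 0.11 for the assay
```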
Gestational status significantly affected leptin levels across all genotypes (p = 0.0098). In general, mean leptin levels for all ewes decreased during late gestation and continued to decrease through parturition and early lactation (Figure 1). Body condition score and number of lambs born also significantly affected leptin levels in the ewes (p = 0.004 and p = 0.0364, respectively). Callipyge ewes exhibited the highest BCS and the lowest average number of lambs born in this study (Table 1).
Trendlines were produced for each genotype across gestational status (Figure 1). Although the regression models for each genotype were significantly different (p = 0.0004) and the intercepts were different (p < 0.0001), the slopes of the trendlines were not significantly different (p = 0.1621). Individually, the slopes of the trendlines for the callipyge and homozygous ewes were the most different, at p = 0.0746. The slope for the normal genotype was similar to the other genotypes, with normal versus callipyge at p = 0.4363 and normal versus homozygous at p = 0.2677 (Figure 1).
DISCUSSION
The callipyge mutation is a polar overdominance condition that results in sheep with hypertrophy of the loin and hindquarters, together with lower daily feed intakes, higher feed efficiencies, and, at slaughter, higher dressing percentages and less fat compared to non-callipyge sheep of the same weight (Cockett et al., 1994; Jackson et al., 1997). When evaluating a hormone such as leptin, which is produced by adipocytes, variation would be expected between sheep known to have less fat tissue, as in the case of the callipyge mutation (Delavaud et al., 2000). However, very little research has examined plasma leptin levels in sheep affected by the callipyge mutation, especially in relation to late-gestation and early post-partum ewes, which are utilizing large amounts of fat and energy to support a rapidly growing fetus and, later, to produce milk.
In an attempt to minimize animal variation and elucidate the relationship between the callipyge mutation and circulating leptin levels, the same group of late-gestation ewes was followed over an extended period (8 wk prior to parturition through 5 d post-partum). These time points were chosen to emulate the stages of gestation in which leptin has been shown to peak (late second trimester in humans) and then decrease until parturition (Sattar et al., 1998; Tamura et al., 1998). By measuring the same group of ewes over an extended period, we were able to account for most of the animal variation (associated with individual animal differences) and identify significant genotype effects, leading us to accept the hypothesis that the callipyge mutation affects circulating leptin levels in ewes physiologically stressed by pregnancy and parturition. Callipyge ewes exhibited the lowest mean serum leptin concentrations when compared to normal and homozygous ewes of the same physiological state (Table 1).
Interestingly, the callipyge ewes exhibited the highest mean BCS and the lowest number of lambs born, both factors that have been associated with elevated leptin levels in normal sheep (Delavaud et al., 2000; McFadin et al., 2002). Note that the amount of muscle is taken into account when measuring BCS in sheep (Russel, 1991), which most likely accounts for the callipyge ewes having the highest mean BCS in the study. Nonetheless, we interpret these observations as possible evidence of an influence of the callipyge mutation in late-gestation and early post-partum ewes. Since callipyge sheep generally have fewer fat stores than normal sheep, it is feasible that their higher feed efficiencies result in the partitioning of more nutrients toward muscle accretion rather than fat deposition. It was also noted that callipyge ewes exhibited the least change in serum leptin levels from the beginning of the study through five days post-partum. Perhaps callipyge sheep are resistant, or less susceptible, to changes in body composition related to their physiological state (i.e., pregnancy; Houseknecht et al., 1998).
One possible explanation for the callipyge phenotype in relation to low circulating leptin levels may involve the number of leptin receptors in the hypothalamus. If callipyge sheep possessed more leptin receptors, they would be more sensitive to their plasma leptin, possibly explaining the decrease in appetite and adipose stores (Mercer et al., 1996; Schwartz et al., 1996).
By visually appraising the trendlines (Figure 1), it appears that the callipyge and normal lines are most similar in slope, although the intercept is statistically higher for the normal ewes. These curves appear to be almost linear. The homozygous line appears to have more of a quadratic trend, even though its slope is not statistically different from the other two genotypes (p = 0.16). One explanation may be that the homozygous ewes as a whole had smaller body size and came from a flock unrelated to the callipyge and normal ewes; the callipyge and normal ewes were raised in the same flock and share some genetics distinct from the callipyge mutation.
IMPLICATIONS
The use of an ovine-specific leptin assay to measure circulating leptin levels in blood samples taken over several weeks during gestation and parturition revealed a genetic influence related to the callipyge mutation. This can help us understand the physiological stress response of callipyge, normal, and homozygous sheep using leptin as an indicator of body fat composition. Callipyge sheep are naturally leaner than the other genotypes. However, the callipyge ewes used in this study actually exhibited the highest mean BCS while still possessing the lowest leptin levels over the gestational period. This may indicate that callipyge ewes of equal BCS would exhibit even lower plasma leptin levels compared to normal phenotypes. We were also able to examine the response to the stress of lambing and early lactation in these sheep. All of this contributes to a better understanding of the markedly different phenotype seen in sheep possessing the callipyge mutation.
Table 1. Mean serum concentrations of leptin, body condition score (BCS), ewe body weights, and lamb numbers for callipyge, homozygous, and normal ewes. Means with common superscripts in each column are not significantly different (p > 0.05).
Identifying New COVID-19 Receptor Neuropilin-1 in Severe Alzheimer’s Disease Patients Group Brain Using Genome-Wide Association Study Approach
Recent preclinical studies show that Neuropilin-1 (NRP1), a transmembrane protein with roles in neuronal development, axonal outgrowth, and angiogenesis, also plays a role in the infectivity of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Thus, we hypothesize that NRP1 may be upregulated in Alzheimer’s disease (AD) patients and that a correlation between AD and SARS-CoV-2 NRP1-mediated infectivity may exist, as with angiotensin converting enzyme 2 (ACE2). We used a mouse model that mimics AD and performed high-throughput total RNA-seq with brain tissue and whole blood. For quantification of NRP1 in AD, brain tissues and blood were subjected to Western blotting and real-time quantitative PCR (RT-qPCR) analysis. In silico analysis of NRP1 expression in AD patients was performed on human hippocampus data sets. Many cases of severe COVID-19 symptoms are concentrated in elderly groups with complications such as diabetes, degenerative disease, and brain disorders. Total RNA-seq analysis showed that the Nrp1 gene was commonly overexpressed in the AD model. Similar to ACE2, the NRP1 protein is also strongly expressed in AD brain tissues. Interestingly, in silico analysis revealed that the level of NRP1 expression was distinct by age and AD progression. Given that NRP1 is highly expressed in AD, it is important to understand and predict that NRP1 may be a risk factor for SARS-CoV-2 infection in AD patients. This supports the development of potential therapeutic drugs to reduce SARS-CoV-2 transmission.
INTRODUCTION
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is being evaluated as a third high-risk contagious infection (Hu et al., 2020). People remain highly vulnerable to the ongoing and life-threatening COVID-19 pandemic, as FDA-authorized vaccines or beneficial treatments remain unavailable (Singh et al., 2020). The risk of severe complications that are eventually associated with high mortality is indicated in older people (Carstensen et al., 2020). Moreover, a bidirectional interrelation between neurological complications and COVID-19 has been extensively reported (Verkhratsky et al., 2020).
Age-dependent vulnerability to SARS-CoV-2 has been associated with concomitant symptomatic infections (Wu et al., 2020). Alzheimer's disease (AD) is a highly destructive neurodegenerative disorder that mostly affects the elderly and is characterized by progressive cognitive decline (Masters et al., 2015). Although various hypotheses have been proposed to explain its multifactorial properties (Liu et al., 2019), the exact mechanism and related features of AD remain obscure. An analysis of 627 patients suggests that AD is a risk factor for SARS-CoV-2 infection (Bianchetti et al., 2020).
Angiotensin converting enzyme 2 (ACE2) is required for SARS-CoV-2 infection. Recently, it is reported that the Ace2 gene and protein expression are elevated in AD patients compared with in normal elderly individuals (Ding et al., 2020;Rahman et al., 2020). Consistent with these results, an increase in ACE2 expression results in an increased susceptibility to SARS-CoV-2 infection in elderly patients with AD. Furthermore, a recent study suggests that the transmembrane protein Neuropilin-1 (NRP1) also plays a role in SARS-CoV-2 infection (Daly et al., 2020;Mayi et al., 2021). Biochemical experiments and X-ray crystallography show that NRP1 strongly interacts with a polybasic sequence on the spike protein of SARS-CoV-2, which fits the C-end rule region (CendR) required for NRP1-peptide interaction (Daly et al., 2020;Song et al., 2020). NRP1 depletion with RNAi targeting Nrp1 mRNA inhibits the binding of the SARS-CoV-2 spike protein to NRP1 and, consequently, decreases the rate of viral infection (Daly et al., 2020;Song et al., 2020). In addition, a monoclonal antibody against the b1b2 domain of NRP1 reduces the infectivity of SARS-CoV-2 lentiviral pseudo-particles (Cantuti-Castelvetri et al., 2020). NRP1 is a neuronal receptor associated with the regulation of neurite outgrowth through the binding of vascular endothelial growth factor (VEGF) (Abdullah et al., 2020). When NRP1 is activated by CendR, which is a peptide R/KXXR/K motif contained within C-terminal domains, it enables cells to internalize ligands, such as viruses, containing the motif (Teesalu et al., 2009). Furthermore, NRP1 is expressed in the central nervous system, including the brain olfactory-related regions in which SARS-CoV-2 entry may occur, thereby facilitating COVID-19 infection (Davies et al., 2020).
Thus, we hypothesize that, in addition to ACE2, NRP1 expression might be upregulated in the brains of elderly AD patients. In this study, molecular characterization via highthroughput analysis and biochemical assays reveals that NRP1 is highly expressed in AD, which suggests that NRP1 may be a potential genetic therapy target in AD patients with COVID-19.
Animals
5×FAD transgenic mice were purchased from the Jackson Laboratory. All animal experiments performed in this study were reviewed and approved by the IACUC committee at the Korea Brain Research Institute (IACUC-20-00018).
Total RNA Sequencing and Human in silico Analysis
The data analysis of total RNA-seq from the mouse cortex was performed as previously described. Briefly, brains were extracted from 6-month-old wild-type (WT) and 5×FAD mice, and the cortex was isolated to prepare pure RNA and the total RNA-seq library. RNA-seq libraries were prepared from isolated mRNA using the TruSeq Stranded Total RNA LT Sample Prep Kit (Illumina Sample Preparation Guide). To profile the insert length of the libraries, we used the Agilent 2100 Bioanalyzer, and the constructed libraries were sequenced on the HiSeq 4000 platform (Illumina, United States). The resulting reads were then sorted, and dirty reads were filtered from the raw reads. The RNA-seq data are accessible under Gene Expression Omnibus (GEO) accession number GSE147792.
In silico data analysis was performed using the Affymetrix Human Genome U133 Plus 2.0 Array. The GSE1297 data sets were derived from human hippocampus, and the GSE4226 data sets were derived from human peripheral blood mononuclear cells (PBMCs) of normal individuals and AD patients.
RNA Isolation
Total RNA was isolated from the mouse cortex using TRIzol according to the manufacturer's protocol. First, phenol-based TRIzol (Invitrogen) was added to the cortex tissue tube for homogenization. The homogenate was then separated into three phases with chloroform so that only the RNA-containing aqueous phase was collected, leaving the DNA- and protein-containing phases behind. An equal volume of isopropanol was used to precipitate the RNA. After centrifugation, the supernatant was discarded, and the pellet was washed once with prechilled 75% ethanol. The RNA was dried, eluted in nuclease-free water without organic-compound contamination, and then denatured in a 65 °C heat block for 10 min. The procedure was performed avoiding RNase contamination.
Complementary DNA Synthesis
Isolated total RNA was reverse-transcribed into complementary DNA (cDNA) following the manufacturer's protocol for the High-Capacity cDNA Reverse Transcription Kit (Applied Biosystems). Template RNA (2 µg) was prepared for a single reaction, and the reverse transcription components were premixed. The premixture contained 10× RT buffer, 25× dNTP mix (4 mM), 10× RT random primers, MultiScribe Reverse Transcriptase (50 U), RNase inhibitor, and nuclease-free water to adjust the total reaction volume. Gently mixed template RNA and an equal volume of premixture were placed in the thermal cycler. Reverse transcription was run under the recommended conditions: 25 °C for 10 min, 37 °C for 120 min, and 85 °C for 5 min.
Real-Time Quantitative PCR
Real-time quantitative PCR (RT-qPCR) was performed according to the commercial protocol using SYBR Green PCR Master Mix (Applied Biosystems). The primers employed were Nrp1 forward, 5′-CCTCACATTGGGCGTTATTG-3′, and reverse, 5′-CACTGTAGTTGGCTGAGAAAC-3′; Gapdh forward, 5′-AGGTCGGTGTGAACGGATTT-3′, and reverse, 5′-TGTAGACCATGTAGTTGAGG-3′. Each reaction contained SYBR Green PCR Master Mix, template cDNA, and forward and reverse primers, and was adjusted with nuclease-free water.
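The paper reports relative Nrp1 expression normalized to Gapdh, but the exact quantification method is not stated; the sketch below therefore assumes the standard Livak 2^(−ΔΔCt) method, with hypothetical Ct values, to show how such fold changes are typically derived.

```python
# Illustrative 2^(-ddCt) fold-change calculation (Livak method).
# Assumes Gapdh normalization as in the primers above; Ct values are hypothetical.

def fold_change(ct_target_sample, ct_ref_sample, ct_target_ctrl, ct_ref_ctrl):
    d_ct_sample = ct_target_sample - ct_ref_sample  # normalize sample to Gapdh
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl        # normalize control to Gapdh
    dd_ct = d_ct_sample - d_ct_ctrl
    return 2 ** (-dd_ct)

# Hypothetical Ct values: Nrp1/Gapdh in 5xFAD vs. WT cortex.
fc = fold_change(ct_target_sample=24.1, ct_ref_sample=18.0,
                 ct_target_ctrl=24.9, ct_ref_ctrl=18.3)
print(f"Nrp1 fold change (5xFAD vs. WT) = {fc:.2f}")  # ~1.41 with these values
```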
High-Throughput Analysis of Nrp1 Expression in Alzheimer's Disease
Given that the gene expression of ACE2 is upregulated in the brains of patients with AD and may be associated with the mortality rate from COVID-19 in the elderly (Fu et al., 2020), we hypothesize that NRP1, which codes for a newly recognized SARS-CoV-2 spike receptor, may also be increased in AD patients. To assess Nrp1 gene expression in AD, we first used a murine model that mimics AD and performed total RNA-seq using mouse brain tissue and whole blood. Total RNA-seq was analyzed on the HiSeq 4000 platform (Illumina, United States) (Figure 1A). We quantified Nrp1 gene expression in the brain and blood of AD and WT mice after mapping the sequencing reads (Figure 1B). The track of the Nrp1 gene was displayed with the University of California, Santa Cruz (UCSC) genome browser (Figure 1B). Interestingly, total RNA-seq analysis revealed upregulation of Nrp1 gene expression in the brain of the AD model compared to WT (Figure 1B), and Nrp1 fragments per kilobase per million reads (FPKM) values were increased in the AD model brain as well (Figure 1C). Although Nrp1 gene expression was increased by 319% in AD blood compared with WT blood, the endogenous expression levels of Nrp1 in the blood were significantly lower than those in the brain (Figures 1B,C). Collectively, our total RNA-seq results show that Nrp1 is preferentially expressed in the brain and upregulated in the brains of AD mice.
FIGURE 2 | The expression of NRP1 in mouse AD brain. (A) RT-qPCR analysis showing the Nrp1 mRNA expression levels in the cortex of WT and 5×FAD mice. Nrp1 mRNA expression is significantly increased in 9-month-old 5×FAD cortex compared with that in WT cortex. No significant differences are observed in the early disease stages of 5×FAD mice (3 and 6 months). The data are shown as the mean ± standard error of the mean (SEM) from n = 3 mice per group; statistical differences were assessed using an unpaired t-test. (B) Representative Western blot analyzing the NRP1 protein levels in 5×FAD brains. Endogenous NRP1 is highly expressed in 9-month-old 5×FAD brains compared with that in the WT brain. β-actin was used as a loading control. The arrowhead indicates the NRP1 protein, and the asterisk indicates a non-specific band (n = 5 mice per group). (C) NRP1 Western blot band intensity measured by ImageJ 1.50i software (n = 5 mice per group). Statistical differences were assessed using an unpaired t-test.
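As a reminder of what the FPKM values in Figure 1C represent, the sketch below computes FPKM from fragment counts using the standard definition. The counts and gene length are hypothetical placeholders, not the study's data.

```python
# Illustrative FPKM (fragments per kilobase per million mapped fragments).
# Counts and gene length are hypothetical placeholders.

def fpkm(gene_fragments: int, gene_length_bp: int, total_mapped_fragments: int) -> float:
    return gene_fragments * 1e9 / (gene_length_bp * total_mapped_fragments)

# Hypothetical Nrp1-mapped fragments in WT vs. AD cortex libraries:
print("WT:", round(fpkm(4_000, 5_000, 40_000_000), 2))
print("AD:", round(fpkm(6_500, 5_000, 42_000_000), 2))
```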
Nrp1 Is Upregulated in Alzheimer's Disease Brain
Nrp1 is abundantly expressed in neurons and plays an important role in axon guidance, regeneration, neuronal plasticity, and various human diseases, such as epilepsy and seizures (Kumanogoh and Kikutani, 2013).
We confirmed Nrp1 gene expression in both WT and AD model mouse brains through total RNA-seq (Figure 1). To further analyze Nrp1 expression during AD progression, we measured Nrp1 mRNA levels in the brains of 3- to 9-month-old AD mice. RT-qPCR revealed an approximately 145% increase in Nrp1 mRNA expression in 9-month-old AD brains compared with WT brains (Figure 2A). In addition, NRP1 protein expression was also significantly increased in 9-month-old AD brains compared with WT (Figures 2B,C). Taken together, these findings indicate that Nrp1 gene and NRP1 protein expression levels are significantly increased in the brains of aged AD mice.
Nrp1 Is Highly Expressed in Patients With Severe Alzheimer's Disease
Having found increased Nrp1 gene expression in the brains of AD mice, we next performed Nrp1 gene expression profiling of brains and PBMCs from human patients with different stages of AD (Supplementary Table 1). To identify the fold change of Nrp1 gene expression in AD patients, we performed in silico analysis using the GSE1297 and GSE4296 microarray data sets. Patients with severe AD showed significantly upregulated Nrp1 gene expression (179%) compared with the control group (individuals without AD), whereas incipient and moderate AD patients did not show increases in brain Nrp1 gene expression (Figure 3A). Interestingly, we did not find differences in PBMC Nrp1 gene expression between any of the groups (Figure 3B). These data correlate with results from the AD murine model. Together, the results demonstrate that Nrp1 expression is significantly elevated in the brains of late-stage AD patients.

FIGURE 3 | In silico analysis of Nrp1 gene expression in human hippocampus and PBMCs from AD patients. (A) Nrp1 expression is significantly increased in the human hippocampus of severe AD patients compared with that in the control group (179%). No statistical difference is observed when the control group is compared to incipient and moderate AD patients. Normal control group n = 6, incipient group n = 7, moderate group n = 8, and severe group n = 6. Statistical differences were assessed using a post hoc test after one-way ANOVA. (B) Nrp1 expression in PBMCs from AD patients is not statistically different from that in the control group. Normal elderly control, female n = 7 and male n = 7; AD patient group, female n = 7 and male n = 7. Statistical differences were assessed using unpaired t-test. (C) Schematic model of NRP1- and ACE2-mediated SARS-CoV-2 infection in AD. NRP1 and ACE2 mediate SARS-CoV-2 binding to the cell membrane and, consequently, infection. Because these two receptors are highly expressed in AD patients, these individuals may be more sensitive to SARS-CoV-2 infection.
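The group comparison described above (one-way ANOVA followed by a post hoc test across control, incipient, moderate, and severe groups) can be reproduced with standard tools. The sketch below uses SciPy with hypothetical normalized expression values, not the actual GSE1297/GSE4296 data.

```python
# Sketch of the statistical comparison described above: one-way ANOVA across
# the four groups, followed by a post hoc test. Expression values are
# hypothetical placeholders, not the actual GSE1297/GSE4296 measurements.
from scipy import stats

control   = [1.00, 0.92, 1.05, 0.98, 1.10, 0.95]              # n = 6
incipient = [1.02, 1.10, 0.97, 1.15, 1.05, 0.99, 1.08]        # n = 7
moderate  = [1.12, 1.05, 1.20, 0.98, 1.15, 1.10, 1.02, 1.18]  # n = 8
severe    = [1.70, 1.85, 1.62, 1.95, 1.78, 1.84]              # n = 6

f_stat, p_value = stats.f_oneway(control, incipient, moderate, severe)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.2g}")

# Pairwise post hoc comparisons (Tukey HSD; requires a recent SciPy, >= 1.11).
print(stats.tukey_hsd(control, incipient, moderate, severe))
```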
DISCUSSION
Since the beginning of the COVID-19 pandemic, there have been significant efforts to identify unique SARS-CoV-2-associated proteins that could serve as targets for novel vaccines or therapeutic agents. Despite notable studies suggesting the possibility of developing other COVID-19-targeted drugs, the first-generation drugs have mostly focused on the viral spike protein receptor ACE2 (Yin et al., 2020). As high-throughput genomic studies begin to define the abnormal expression of individual genes in particular diseases, it may become possible to rationally determine disease-specific gene expression and, thus, establish biomarkers for risk prediction in older people with complications, such as AD. Recently, we showed an increase in ACE2 expression in an elderly group with AD; therefore, our in silico analysis accurately predicts a high risk for SARS-CoV-2 infection in elderly patients with AD. In addition, our research scheme may be useful for predicting the risk of AD in patients with SARS-CoV-2 infection.
Our findings have implications for the prevention and treatment of SARS-CoV-2 infection in elderly patients with AD. First, both Ace2 and Nrp1 are preferentially expressed in the brain, and their expression level may determine the sensitivity to SARS-CoV-2 infection (Figure 3C). Interestingly, it was recently suggested that differences in cytokines, such as IL-1β and TNF-α, are less pronounced in peripheral blood in SARS-CoV-2 infection (Totura and Baric, 2012; Tincati et al., 2020). Second, in addition to Ace2, Nrp1 expression was also upregulated in patients with severe AD. Although predictive immune biomarkers have been suggested for the clinical treatment of COVID-19 (Fouladseresht et al., 2020), our high-throughput analysis-based approach would probably provide an accurate prediction of SARS-CoV-2 risk in elderly AD patients. Notably, Ace2 gene expression gradually increased with the severity of AD symptoms (from incipient to severe stage), whereas elevated Nrp1 gene expression was only present in the severe AD patient group (Figure 3A). This result indicates that ACE2 may be a more fundamental gene for SARS-CoV-2 infection compared with NRP1.
Recently, the spread of SARS-CoV-2 infection has accelerated worldwide. Efforts on the clinical treatment of SARS-CoV-2 infection are concentrated on the development of vaccines and drugs, including gene therapy (Chugh et al., 2020). To our knowledge, this is the first study examining NRP1 expression in AD patients and reporting its higher expression in these individuals. Moreover, it reveals the importance of determining SARS-CoV-2 spike protein receptor gene expression. Our gene profiling could potentially be used to predict the risk for SARS-CoV-2 infection in elderly AD patients.
DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/Supplementary Material.
ETHICS STATEMENT
All animal experiments performed in this study were reviewed and approved by the IACUC Committee at Korea Brain Research Institute (IACUC-20-00018). Written informed consent was obtained from the owners for the participation of their animals in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.
HITHERTO UNKNOWN MORPHS OF CAVARIELLA INDICA MAITY AND CHAKRABARTI (HOMOPTERA: APHIDIDAE) WITH NOTES ON ITS BIOLOGY
Cavariella indica Maity and Chakrabarti (in Maity et al., 1982) was described from apterous viviparous female and apterous oviparous female morphs infesting the weeping willow, Salix babylonica, in the Mussoorie hill tracts of Uttar Pradesh, India. This plant is commonly grown as an ornamental in North India, both in the plains and in the hills up to an altitude of about 2700 m (Watt, 1972). During a 3-year study (1982-1984) at Joshimath (c. 1900 m), a locality of the Garhwal range of the north west Himalaya, routine observations enabled us to collect its hitherto unknown alate viviparous females and alate males, which are described in this paper. Field observations on some garden willows were made with a view to knowing the seasonal activities, morph composition and natural enemies of this aphid, the results of which are presented here under Morphology and Biology.
Cavariella indica
The hitherto unknown alate viviparous females and alate males are described below.

Alate viviparous female: Body 1.84-2.04 mm long and 0.67-0.83 mm wide. Head brown, almost smooth, with flat frons; dorsum with 6 pairs of hairs including 1 pair on frontal sinus, with acuminate apices; longest hair on vertex 12-14 µm long and 0.42-0.50 times the basal diameter of antennal segment III. Antennae brown, 0.54-0.62 times the body; segment I with 5 hairs, segment II slightly scabrous ventrally, segment III with 19-29 round, distinctly protuberant secondary rhinaria distributed over the length except for the basal 0.10 portion; longest hair on segment III 7-12 µm long and 0.25-0.42 times the basal diameter of the segment; processus terminalis 1.31-1.64 times the base of segment VI and 0.44-0.47 times the antennal segment III. Ultimate rostral segment 0.86-0.91 times the second joint of hind tarsus and without secondary hairs. Abdominal dorsum sparsely spinulose; tergites 1, 6, 7 and 8 with separate brown spinopleural bands, tergites 2-5 with a fused spinopleural brown patch, marginal patch separately developed on tergites 2-7; dorsal hairs short, 4-6 per segment on anterior tergites, with acuminate apices, longest one on anterior tergites 9-12 µm long and 0.33-0.42 times the basal diameter of segment III; tergite 7 with 2-4 hairs, longest one 9-14 µm long and 0.31-0.50 times the mentioned diameter. Siphunculi brown, narrow at basal 0.40 portion and the rest distinctly clavate, poorly imbricated, 0.15-0.16 times the body, 1.85-2.14 times as long as cauda, 9.50-13.0 times its basal width, 4.75-6.17 times its maximum width and 9.25-13.0 times its apical width. Supracaudal process on 8th tergite 0.22-0.26 times the cauda and 1.0-1.67 times its basal width. Cauda with 5-6 hairs. Venter uniformly spinulose. Legs brown; femora and tibiae sparsely spinulose on distal part; tarsi imbricated. Other characters as in apterous viviparous female.
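The description above rests on ratio-based morphometrics (e.g., siphunculus length relative to body, cauda, and basal width). A small sketch of that arithmetic follows; the measurements are hypothetical examples chosen to fall within the reported ranges, not data from the original slides.

```python
# Sketch of the ratio-based morphometrics used in aphid descriptions such as
# the one above. Measurements (in micrometres) are hypothetical examples
# chosen to fall within the reported ranges, not original slide data.

measurements_um = {
    "body_length": 1940.0,            # reported body length 1.84-2.04 mm
    "siphunculus_length": 300.0,
    "siphunculus_basal_width": 26.0,
    "cauda_length": 150.0,
}

def ratio(numerator: str, denominator: str, m: dict = measurements_um) -> float:
    return m[numerator] / m[denominator]

print(f"siphunculus/body        = {ratio('siphunculus_length', 'body_length'):.2f}")             # ~0.15 (reported 0.15-0.16)
print(f"siphunculus/cauda       = {ratio('siphunculus_length', 'cauda_length'):.2f}")            # ~2.00 (reported 1.85-2.14)
print(f"siphunculus/basal width = {ratio('siphunculus_length', 'siphunculus_basal_width'):.1f}") # ~11.5 (reported 9.50-13.0)
```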
BIOLOGY

(i) MATERIALS AND METHODS
Two plants of weeping willow (S. babylonica) in a garden at Joshimath were selected for biological observations. Ten sample leaves collected at random were observed fortnightly during spring to winter of 1983 and 1984. The aphid samples were collected in 70% alcohol and taken to the laboratory for sorting of morphs. As and when necessary, aphids were processed for microscopical studies.
During the full tenure of the study, meteorological information was collected using a maximum and minimum thermometer, a dry and wet bulb hygrometer and a rain gauge.
(ii) OBSERVATIONS

(a) Seasonal activities: With the sprouting of flowering buds and subsequently leaf buds towards the end of January in S. babylonica, the overwintered eggs of C. indica start hatching into fundatrices, which we, however, could not collect. In contrast to S. tetrasperma, which is the primary host of C. aegopodii (Scopoli), bud sprouting in S. babylonica occurs much earlier. Thus, the eggs of C. indica overwinter only for a short period, i.e., approximately 45 days.
Fundatrices after maturity give rise to apterous fundatrigeniae which, either singly or in a group, infest an emerging leaf, usually at the base of its dorsal surface. Later on, the general tendency of this aphid is to infest the dorsal side of the apicalmost leaves of the long branches. When apical leaves grow older, the aphids migrate to the immediately adjacent tender leaves because of their greater succulence. The apterous fundatrigeniae develop within 10-11 days. The number of young produced by a fundatrigenia ranges from 25 to 32, deposited over a period of 12-14 days at a daily rate of 0-4 nymphs. Alate viviparae first appear in small numbers in the 3rd generation. Alate production here is related to the spreading of the colony rather than to crowding, since the population never tends to reach a high level. The occurrence of few alate morphs was also observed by Rabasse and Brunel (1977) for C. aegopodii.
By the end of October, with the lowering of temperature, some alate viviparous females lay pinkish nymphs which develop into apterous oviparae, while other alate viviparae lay greenish ones giving rise to alate males (Plate 1). The number of alate males is lower than that of oviparae, which may be because one male can mate with more than one ovipara. Mating continues for about 3 minutes. After about 8-9 days, the fertilised ovipara lays 3-4 pinkish, elongate-oval eggs, measuring about 0.7-0.8 mm in length and 0.4 mm in maximum width. Egg laying continues for 1-2 days and the ovipara dies 6-7 days after egg laying. Males live for about 14 days. Eggs are left at this stage on the buds for overwintering.
(b) Population pattern and composition: The building up of the population of C. indica on S. babylonica is initiated by the hatching of overwintered eggs into fundatrices but becomes established by the progeny of the latter during March in 1983 (Fig. 1) and February in 1984 (Fig. 2), depending on the arrival of spring. In both years, the study could be started only from the middle of February. It is found that the aphid population attains a peak during April in both years, followed by a slow decline, and maintains a low level throughout the summer and monsoon months. Again, with the development of sexuals, the population shows another low peak in November and subsequently declines in December. It should be mentioned here that as the infestation gradually spreads towards the tender apical leaves, it never becomes so high as to cause malformations. From the available morphs (Figs. 1 and 2) a substantial change in the composition of alatoid and apteroid morphs is observed during different phases of aphid population growth. It is interesting to note that the nymphs almost always form the bulk of the population. The alate morphs are always few (7.06-19.28% in 1983 and 6.56-20.00% in 1984), with two phases, one during spring-summer and another during autumn, with an off-period during the monsoon. The incidence of apterous morphs in 1984 maintained a more or less stable condition.

(c) Natural enemies: In early spring, Adalia tetraspilota (Hope), Coccinella septempunctata L. and Harmonia (Leis) dimidiata (F.) are found to predate on this aphid, but from summer onwards Oenopia sauzeti Muls., Menochilus sexmaculatus F. and Platynaspis sp. seem to be the abundant coccinellid predators. Surprisingly, none of the above species are found to breed on this aphid. The only syrphid maggot collected is Metasyrphus confrater (Wied.) (Ghosh et al., 1986). However, its infestation could never reach levels which can damage the weeping willow. High temperature and rainfall can affect its population build-up in summer and the monsoon, when we obtained a low level of population. Production of alate morphs is always low and completely absent in the monsoon months. In aphids, alate production depends on population density (Hille Ris Lambers, 1966) and unsuitable condition of host plants through continuous exploitation of the plant sap by the aphids (Way, 1973). But in C. indica, summer production of alates seems to be necessary for dispersion to all available uninfested tender leaves, whereas winter alates account for the production of sexuals.
Predators appear to have some impact on the population decline in summer, as most of the coccinellids are noted to be quite active in spring and summer after leaving their hibernation quarters.

SUMMARY

The present paper provides a taxonomic description of the hitherto unknown alate viviparous female and alate male morphs of Cavariella indica Maity and Chakrabarti. Biological activities, morph composition and natural enemies of this aphid in the Garhwal range of the north west Himalaya have also been discussed.
|
PAN-811 Blocks Chemotherapy Drug-Induced In Vitro Neurotoxicity, While Not Affecting Suppression of Cancer Cell Growth
Chemotherapy often results in cognitive impairment, and no neuroprotective drug is now available. This study aimed to understand underlying neurotoxicological mechanisms of anticancer drugs and to evaluate neuroprotective effects of PAN-811. Primary neurons in different concentrations of antioxidants (AOs) were insulted for 3 days with methotrexate (MTX), 5-fluorouracil (5-FU), or cisplatin (CDDP) in the absence or presence of PAN-811·Cl·H2O. The effect of PAN-811 on the anticancer activity of tested drugs was also examined using mouse and human cancer cells (BNLT3 and H460) to assess any negative interference. Cell membrane integrity, survival, and death and intramitochondrial reactive oxygen species (ROS) were measured. All tested anticancer drugs elicited neurotoxicity only under low levels of AO and elicited a ROS increase. These results suggested that ROS mediates neurotoxicity of tested anticancer drugs. PAN-811 dose-dependently suppressed increased ROS and blocked the neurotoxicity when neurons were insulted with a tested anticancer drug. PAN-811 did not interfere with anticancer activity of anticancer drugs against BNLT3 cells. PAN-811 did not inhibit MTX-induced death of H460 cells but, interestingly, demonstrated a synergistic effect with 5-FU or CDDP in reducing cancer cell viability. Thus, PAN-811 can be a potent drug candidate for chemotherapy-induced cognitive impairment.
Introduction
One of the most common complications of chemotherapeutic drugs is toxicity to the central nervous system (CNS), namely, chemotherapy-induced cognitive impairment or chemobrain. This toxicity can present in many ways, including encephalopathy syndromes and confusional states, seizure activity, headache, cerebrovascular complications and stroke, visual and hearing loss, cerebellar dysfunction, and spinal cord damage with myelopathy [1]. Mild to moderate effects of chemotherapy on cognitive performance occur in 15-50% of the survivors after treatment [2,3]. The cognitive problems can last for many years after the completion of chemotherapy in a subset of cancer survivors. Up to 70% of patients with cancer report that these cognitive difficulties persist well beyond the duration of treatment [4][5][6]. Chemobrain can seriously affect quality of life and life itself in cancer patients.
Among possible candidate mechanisms, oxidative stress (OS) may play a key role in cognitive disorders caused by broad types of anticancer drugs, such as antimetabolites, mitotic inhibitors, topoisomerase inhibitors, and paclitaxel [7]. These chemotherapeutic agents are not known to rely on oxidative mechanisms for their anticancer effects. Among the antimetabolite drugs, methotrexate (MTX) and 5-fluorouracil (5-FU), widely used chemotherapeutic agents, are most likely to cause CNS toxicity [1]. Although there are as yet no reports of 5-FU increasing CNS OS, it has been observed to induce apoptosis in rat cardiocytes through intracellular OS [8], to increase OS in the plasma of liver cancer patients [9], and to decrease glutathione (GSH) in bone marrow cells [10]. MTX can cross the blood-brain barrier as well [11] and result in an increase of OS in cerebral spinal fluid and executive dysfunction in MTX-treated patients of pediatric acute lymphoblastic leukemia [12,13]. It is well known that ROS, such as H2O2, can result in neuronal cell death [14,15]. Cisplatin (CDDP) is an alkylating agent. Its cytotoxic effect is thought to be mediated primarily by the generation of nuclear DNA adducts, which, if not repaired, cause cell death as a consequence of DNA replication and transcription blockage. However, oxidative damage has been observed in vivo following exposure to CDDP in several tissues including nervous tissue, suggesting a role for OS in the pathogenesis of CDDP-induced dose-limiting toxicities [16][17][18]. Cotreatment with antioxidants (AOs) suppresses the toxic effects of CDDP on several organs [19,20].
Currently, there are no proven treatments for chemotherapy-induced cognitive impairment. Some efforts have been focused on correcting cognitive deficits rather than blocking the neurotoxic pathway of chemotherapeutic drugs [21]. Since ROS mediates neurotoxicity in a number of neurodegenerative disorders, one strategy in disease control has been focused on the development of antioxidants as preventive and therapeutic molecules. These include vitamin C, vitamin E, glutathione, coenzyme Q (CoQ), carotenoids, melatonin, and green tea extract [22,23]. Despite the minimal positive effects of these efforts, antioxidative therapy could still be a promising strategy for the treatment of neurotoxicity. Several preclinical studies have shown that AO treatment prevents chemotherapy-induced OS and cognitive deficits when administered prior to and during chemotherapy [24,25].
Our previous research has demonstrated that PAN-811 (known as 3-aminopyridine-2-carboxaldehyde thiosemicarbazone or Triapine), a bioavailable small molecule (MW 195) currently in phase II clinical trials for the treatment of patients with cancer, can efficiently block neurodegeneration. The major underlying mechanisms for the neuroprotection by PAN-811 are blockade of both the excitatory pathway and OS [22]. Hence we hypothesized that PAN-811 could protect neurons from anticancer drugs such as MTX, 5-FU, and CDDP. Since PAN-811 is an anticancer drug targeting ribonucleotide reductase, which is distinct from the intracellular targets of MTX, 5-FU, and CDDP, coadministration of PAN-811 with any of these may also have a synergistic effect on suppression of cancer cell growth.
Neuronal Cell Culture.
Mixed cortical and striatal neurons from embryonic day 17 male Sprague-Dawley rats (tissue obtained from NIH) were seeded into poly-D-lysine-coated 96-well plates at a density of 50,000 cells/well and initially cultured at 37 °C, 5% CO2, in neurobasal medium (NB) with B27 supplement (Invitrogen) containing the full strength of AOs to obtain highly enriched (95%) neurons [26]. Since AOs, including vitamin E, vitamin E acetate, superoxide dismutase (SOD), catalase (CAT), and GSH, are additives to the culture medium, reduction of the AO concentration in the culture medium provides an approach to determine the level of OS involvement in a neurotoxic process. In our study, the culture medium was replaced at a 50% ratio with NB plus B27 minus AOs twice, at days 7 and 9, to set AO concentrations to 50% and 25%, respectively. At 16 days in vitro (d.i.v.), a fraction of the culture medium was harvested for lactate dehydrogenase (LDH) assay, and then the AO concentration was reduced to 12.5% or 17.5% and the cultures were maintained for a further 5 hours prior to ending the experiment.
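The AO schedule above follows directly from the dilution arithmetic of partial medium replacement: each 50% exchange with AO-free medium halves the AO concentration (100% to 50% to 25% to 12.5%), and a roughly 30% exchange from 25% would yield the 17.5% condition. A short sketch of this arithmetic; the replacement fractions for the final step are inferred, not stated in the methods.

```python
# Dilution arithmetic behind the antioxidant (AO) concentration schedule
# described above. The 50% replacement ratio is stated for days 7 and 9;
# the fractions used for the final step are inferred assumptions.

def replace_medium(current_pct: float, replaced_fraction: float,
                   replacement_pct: float = 0.0) -> float:
    """AO concentration after replacing a fraction of the medium with AO-free medium."""
    return current_pct * (1.0 - replaced_fraction) + replacement_pct * replaced_fraction

ao = 100.0                      # full-strength B27 antioxidants at seeding
ao = replace_medium(ao, 0.5)    # day 7:  100% -> 50%
ao = replace_medium(ao, 0.5)    # day 9:   50% -> 25%
print(replace_medium(ao, 0.5))  # 16 d.i.v., 50% exchange: 25% -> 12.5%
print(replace_medium(ao, 0.3))  # 16 d.i.v., 30% exchange: 25% -> 17.5%
```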
Cancer Cell Culture.
The mouse liver cancer cell line BNLT3 (gift of Dr. Jack Wands, Brown University) and the human lung cancer cell line H460 (ATCC) were seeded into 96-well plates at a density of 4,000 cells/well and cultured at 37 °C, 5% CO2, in DMEM (11965, Gibco) supplemented with 10% fetal bovine serum, 20 mM HEPES, 1 mM sodium pyruvate, and 24 ng/mL gentamycin (all reagents from Gibco).
Quantitative Assays and Morphological Assessment.
Cell membrane integrity and mitochondrial function of either neurons or cancer cells were measured with LDH and 3-(4,5-dimethylthiazol-2-yl)-5-(3-carboxymethoxyphenyl)-2-(4-sulfophenyl)-2H-tetrazolium (MTS) analyses, respectively. The latter has been used to quantify cell survival. For the LDH assay, a mixture of a 35 µL aliquot of culture supernatant and 17.5 µL of Mixed Substrate, Enzyme and Dye Solutions (Sigma) was incubated at room temperature (RT) for 30 minutes. For the MTS assay, 10 µL of MTS reagent (Promega) was added to a culture well containing neurons in 50 µL of medium. The preparations were incubated at 37 °C for 2 hours. The preparations for both assays were then spectrophotometrically measured at 490 nm using a 96-well plate reader (Model 550, Bio-Rad). Neuronal cell death was morphologically determined based on the integrity of the cell soma and continuity of neuronal processes. The change in number of cancer cells was judged directly by cell density. Cells were photographed under an inverted phase contrast microscope (IX 70, Olympus) using a 10x or 20x objective.
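Percent changes such as "reduced MTS reading by 85%" (see Results) are derived from background-corrected absorbance at 490 nm relative to the untreated control. A minimal sketch with hypothetical OD values:

```python
# Sketch of how percent reductions in MTS (or LDH) readings are computed from
# A490 plate-reader values. The OD values below are hypothetical.

def pct_reduction(treated_od: float, control_od: float, blank_od: float = 0.05) -> float:
    """Percent reduction of background-corrected absorbance relative to control."""
    treated = treated_od - blank_od
    control = control_od - blank_od
    return 100.0 * (1.0 - treated / control)

# Hypothetical example reproducing an 85% reduction, as reported for 100 uM MTX.
print(f"{pct_reduction(treated_od=0.17, control_od=0.85):.0f}% reduction in MTS reading")
```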
ROS Examination.
Neurons were incubated in 15 µM dihydrorhodamine 123 (DHR123, Molecular Probes) for 30 min at 37 °C to determine intramitochondrial ROS levels. Fluorescence was photographed using a fluorescent microscope and quantified by excitation at 485 nm and emission at 520 nm using a 96-well plate reader (Model 550, Bio-Rad).
PAN-811 Shows No Antagonistic Effect on MTX-, 5-FU-, or CDDP-Induced Cytotoxicity in BNLT3 Cells.
To understand whether PAN-811 could interfere with the anticancer efficacy of the tested anticancer drugs, the mouse liver cancer cell line BNLT3 was cotreated with each anticancer drug at the concentrations used for elicitation of neurotoxicity and 10 µM PAN-811, the highest concentration used for neuronal protection in these experiments. A 3-day insult with 100 µM MTX severely reduced the cancer cell number (Figure 4(a)). In the culture treated with 10 µM PAN-811 alone or cotreated with 100 µM MTX and 10 µM PAN-811, cell density was also much lower than that in the no-insult control. Quantitatively, MTX at 100 µM reduced the MTS reading by 85% (Figure 4(d)), while PAN-811 at 10 µM reduced the MTS reading to the same level as MTX. A cotreatment with both did not cause any further reduction in MTS reading when compared with MTX alone.
Similarly, 5-FU at 25 µM significantly reduced the cell density of the cancer cells, and a cotreatment with both 25 µM 5-FU and 10 µM PAN-811 significantly decreased the cell number as well (Figure 4(b)). Quantitatively, 5-FU at 25 µM reduced the MTS reading by 84%, which was less efficient than the 10 µM PAN-811 group (Figure 4(e)). A cotreatment with both caused a further reduction in MTS reading when compared with 5-FU alone. No synergistic effect between 5-FU and PAN-811 could be detected.
An insult with 3.5 µM CDDP also caused a decrease in the cell density (Figure 4(c)). A cotreatment with CDDP and 10 µM PAN-811 did not cause any further reduction in MTS reading when compared with CDDP alone, despite PAN-811 alone showing a much lower reading than CDDP alone (p < 0.01).
In general, PAN-811 did not show any inhibition of the effect of MTX, 5-FU, or CDDP on BNLT3 cells, nor did it demonstrate any synergistic effect with any tested anticancer drug on BNLT3 cell growth.
Discussion
The fate of neurons under an OS condition is dependent on the balance between the production of ROS and the strength of AO defense systems in vivo. Enzymatic AOs, such as SOD and CAT, and nonenzymatic AOs, exemplified by vitamin E and GSH, are both involved in these defenses [23]. Loss of this balance, under conditions such as chemotherapy, can elicit cytotoxicity and organ toxicity in experimental animals [24,36] and in humans [9,12,13]. Administration of anticancer drugs is accompanied not only by an increase in ROS level, but also by a decrease in antioxidative enzymes [37,38]. In comparison with the in vivo studies, it is rare to find an in vitro study that examines the direct effects of anticancer drugs on neurons in an enriched neuronal culture system. The presence of AOs in the culture medium may sufficiently block the effect of an anticancer drug; such a system is therefore not suitable for examining ROS-mediated neurotoxicity and may not reflect the real conditions under chemotherapy in animals and humans. To mimic in vivo conditions under chemotherapy, we reduced AO concentrations in a twofold serial dilution to a final AO concentration of 12.5%. It was observed that neurotoxicity of MTX, 5-FU, or CDDP occurred only when neurons were bathed in low-AO-containing medium.
Cell membranes seemed to be more vulnerable to these anticancer drugs under these conditions. When the AO content was reduced to 25%, MTX, 5-FU, and CDDP all resulted in robust LDH release, but neither notable morphological changes nor MTS reading differences were detected in the anticancer drug-insulted groups. Only when the AO content was further reduced to 12.5% was morphological cell death observed in the 5-FU- or CDDP-insulted groups, with corresponding 33% and 66% reductions in MTS readings, respectively. These data, together with the observation that coadministration of MTX and 5-FU resulted in a significant intramitochondrial ROS increase, indicate a key role of OS in mediation of the in vitro neurotoxicity. Our study demonstrated that under low AO conditions MTX insult only resulted in membrane leakage but did not show significant detrimental effects on the cell viability of neurons in our neuron-enriched culture. This is consistent with previous findings that excitatory neurotoxicity marks MTX-mediated cell death, which only occurs in the presence of glial cells and is prevented by the N-methyl-D-aspartate receptor antagonists MK-801 and memantine [39].
PAN-811 suppressed the neurotoxicity of all tested anticancer drugs in the present study. Under the 12.5% AO condition, PAN-811 dose-dependently blocked MTX-, 5-FU-, or CDDP-induced membrane leakage. In addition, PAN-811 at 10 µM fully inhibited 5-FU-induced MTS reduction and elevated the MTS reading by 48% for CDDP-insulted neurons under the 12.5% AO condition. Furthermore, neurons treated with PAN-811 appeared healthy even when they were insulted with MTX, 5-FU, or CDDP. Our previous studies have demonstrated that, besides inhibiting excitatory neurotoxicity, PAN-811 can protect neurons from cell death under different OS-involved conditions, such as hypoxia [22] and hydrogen peroxide insult [14,15]. In a cell-free and metal-free system, PAN-811 demonstrated an activity in directly scavenging the stable radical diphenylpicrylhydrazyl (DPPH) [22]. Taken together, the neuroprotection provided by PAN-811 in the anticancer drug-insulted condition is most likely due to its activity in inhibition of intracellular ROS accumulation.
In this study, the blockage of oxidative damage by PAN-811 was shown not only by its neuroprotective effect but also by its inhibitory role in anticancer drug-induced membrane leakage. Our results showed that each of MTX, 5-FU, and CDDP can induce membrane leakage in H460 cancer cells. Theoretically, ROS can be produced in the plasma membrane and other cell compartments [40]. Free radicals can pass freely through cellular and nuclear membranes and oxidize biomacromolecules, including lipids. Lipid peroxidation caused by ROS leads to membrane leakage [41]. Efficient inhibition of membrane leakage of H460 cells by PAN-811 indicates its role in suppression of the ROS signal not only in neurons but also in other cell types.
Our results demonstrated that PAN-811 did not suppress the anticancer efficacy of the anticancer drugs MTX, 5-FU, and CDDP despite suppressing anticancer drug-induced membrane leakage. This indicates that the anticancer activity of the tested drugs does not rely on the intramitochondrial ROS accumulation they induce, which provides a basis for using PAN-811 as a neuroprotectant in chemotherapy. In addition, PAN-811 manifested a synergistic effect with 5-FU or CDDP on suppression of cancer cell growth. Both MTX and 5-FU are antimetabolites. MTX inhibits DNA synthesis by competitively binding to dihydrofolate reductase, an enzyme that converts dihydrofolate into tetrahydrofolate [42]. 5-FU acts predominantly as a thymidylate synthase inhibitor and suppresses synthesis of the pyrimidine thymidine, a nucleoside required for DNA replication [43]. CDDP is an alkylating agent; it binds to and causes cross-linking of DNA, which ultimately triggers apoptosis [44]. PAN-811 is an anticancer drug in its own right, with a different intracellular target from MTX, 5-FU, and CDDP: it divalently chelates the ferrous ions of ribonucleotide reductase and blocks its bioactivity in conversion of ribonucleotides to deoxynucleotides, thereby inhibiting DNA synthesis [45]. The synergistic effect of cotreatment with PAN-811 and 5-FU or CDDP may be due to affecting more than one intracellular target, and may provide an opportunity to reduce the dose of 5-FU or CDDP. In this way, the neurotoxicity of 5-FU or CDDP could be further reduced while retaining equal strength of anticancer efficacy. PAN-811 can be a potential neuroprotective drug for chemotherapy-induced cognitive impairment due to its in vitro inhibition of MTX-, 5-FU-, or CDDP-induced neurotoxicity. A pharmacodynamic study of the effect of PAN-811 on cognitive functions will be carried out in a chemobrain animal model in the near future.
Conflict of Interests
Zhi-Gang Jiang, Steven A. Fuller, and Hossein A. Ghanbari are employees of and own stock in Panacea Pharmaceuticals, Inc.
|
Interpretation in Ásbyrgi: Communicating with National Park Visitors in Iceland
Iceland has experienced rapid increases in tourism in recent years. This growth earns economic applause, but can come at considerable environmental cost. As Iceland's unique environment is a drawcard for many tourists, careful management of destinations to ensure a sustainable environment is critical. The Icelandic Government is aware of the need for effective destination management and planning to ensure a sustainable future for tourism development, and the need to couple this with visitor compliance. It is a development that cannot be divorced from the need for environmental sustainability, and responsibility for this lies with all tourism stakeholders. One management tool to assist with such responsibility and compliance in tourism is interpretation: creating and delivering messages to visitors that enhance not only their satisfaction with an experience but also their understanding of it. This paper is based on an evaluation of visitors' experiences and managers' perceptions, as is necessary to ensure visitor satisfaction, while determining how best to maintain a sustainable environment. By observing and interviewing visitors, guides, rangers, and managers at Ásbyrgi in the northernmost part of Vatnajökull National Park, Iceland, we were able to discover what sort of information park visitors want to receive, what park managers want to convey, and the preferred way to deliver that information. Overall, most visitors and guides were satisfied with the interpretation in Ásbyrgi and preferred information provided verbally by guides or rangers over other types, such as on signs or in electronic format. Visitors want information about the unique geology and cultural history, as well as directional instructions for hiking trails. Managers want to create an accessible space in which visitors comply with instructions about safety and environmental sustainability. These findings can assist tourism management in Ásbyrgi, and other nature-based destinations, particularly in terms of sustainability of the natural environment.
Introduction
The tourism industry in Iceland is of significant economic importance to the country, and the country's natural environment is a strong drawcard for tourists [1][2][3]. For the first time ever, in 2017 tourism in Iceland was responsible for higher foreign exchange earnings (42%) than exports of marine products (16%) [4]. The annual number of international visitors steadily increased from 1.8 million in 2016 to 2.2 million in 2017 and 2.4 million in 2018. This growth came to a sudden halt when the COVID-19 pandemic struck the world. In 2020, the total number of visitor arrivals in Iceland was just under half a million, a 75.8% decrease from 2019, when numbers were around 2 million. However, while international visitation decreased during the COVID-19 pandemic, domestic travel increased [5,6].
The growth of Iceland's tourism sector has led to some negative social [7] and environmental [8] sustainability impacts, which the country is trying to balance against new economic activities that are more positively viewed [9]. As the Organisation for Economic Cooperation and Development (OECD) identified: "the central challenge for Iceland is
Environmental Sustainability and Tourism in Iceland
Research on the impact of tourism on the environment in Iceland has mostly focused on impacts in protected areas which, considering the fragility of Iceland's vegetation and soils and the importance of intact natural sites to the tourism sector, is clearly an important line of research. These studies show that the interaction between this fragility and visitation in those areas is a major cause of concern [14,15]. The Environment Agency of Iceland monitors the condition of protected areas across the country. Their conclusion in 2018, that five areas were in great danger of degradation and 15 locations were threatened due to the impacts of tourism, highlights this fragility [16,17]. Saviolidis et al.'s review of a national indicator set to evaluate the environmental sustainability of Iceland's tourism sector emphasised the importance of also conducting assessments at a local level [18]. This is important because some issues are likely to be more pronounced in areas that are both fragile and highly popular with visitors, as is the case with our study area, Ásbyrgi.
Iceland's Road Map for Tourism 2015-2020 emphasised seven core issues to address for the future sustainable development of tourism [19]. Effective interpretation is a vital component of at least two of these: ensuring positive visitor experiences and promoting nature conservation. The latest policy framework for Icelandic tourism, valid until 2030, further stresses the importance of sustainable tourism development [20]. Effective visitor management at popular tourist sites, delivering quality experiences, and ensuring visitor safety are some of the highlighted measures towards Iceland's goal of being a leading country in sustainable tourism development [20].
Effective Interpretation
Weiler et al. state that "Tourism in national parks is essentially about providing memorable nature-based experiences for visitors", and needs to also offer opportunities to educate and guide appropriate visitor behaviour [21] (p. 122). Without tourism assisting to protect the nature that visitors come to experience, the experience risks being lost or degraded, like the environment, as visitation numbers increase. That memorable experience, education and guidance comes through interpretation: often conveyed via guides, books, shows, brochures, and signs, with online opportunities such as websites and apps a more recent addition to that list.
Common to most definitions of interpretation are concepts of education and meaning: "Interpretation is an educational activity that aims to reveal meanings about our cultural and natural resources" [22] (p. 17). In the context of tourism, "Interpretation broadly refers to educational activities used in places like zoos, museums, heritage sites and national parks, to tell visitors about the significance or meaning of what they are experiencing" [23] (p. 231). Interpretation is about communicating information and is used as a tool to both enhance visitor experience and manage their behaviour [24]. Roberts describes the goals of interpretation as to (1) ensure visitor satisfaction, (2) increase visitor knowledge, (3) achieve attitude change, and, consequently, (4) achieve behavioural change [25].
To be effective, interpretation should simultaneously "stimulate interest, promote learning, guide visitors in appropriate behaviour for sustainable tourism and encourage enjoyment and satisfaction" [23] (p. 231). As such, it is a balanced combination of experience, education and persuasion that can lead to visitor engagement with conservation issues [23]. Effective interpretation should help the visitor to understand why the environment they are experiencing is important, thus resulting in better-informed visitors. When delivering conservation messages, particularly ones that encourage desired behaviour, even if the visitor does not agree with the message being conveyed, it should at least encourage them to better understand the situation.
Using Interpretation as a Management Tool to Enhance Both Visitor Satisfaction and Environmental Sustainability in Iceland
Many visitors to natural areas are eager to support and follow behavioural guidelines and some studies have shown that visitors like, or even expect, to receive information during their nature-based experience [26]. Thus, providing information about ecology and defining appropriate visitor behaviour could fulfil visitor expectations and thereby increase their satisfaction with the experience [27,28]. Support for, and compliance with, guidelines, however, also depends on their design: wording, message content and the manner of conveying information [23,29,30].
Icelandic tourism authorities have launched various projects in recent years to enhance internal destination management and better accommodate the increasing number of visitors. Most notable of these are the National Infrastructure Plan and the Tourist Site Protection Fund, which both aim to improve infrastructure at popular tourist sites [31]. Through these initiatives, paths have been improved, viewing platforms built, and further signs erected. Most recently, a new project called Sites of Merit was launched [32]. This project emphasises a holistic approach to destination management with the aim of protecting natural and cultural resources, and of offering high-quality visitor experiences in agreement with residents and other stakeholders in the region. However, no management projects yet focus directly on interpretation in Iceland's tourism industry.
Use of interpretation for visitors to Iceland's natural areas has also not yet been the subject of much investigation. Consequently, little is known about visitor satisfaction with the available interpretation or their desires for, and receptiveness towards, alternative types of information transfer. A study of signs at a seal watching site revealed that tourists are open to receiving messages that are ontological in nature, providing information about animal behaviour and characteristics that encourages positive human behaviour [33]. However, projects like this are rare. The few that exist provide us with some local data with which to commence, but much greater knowledge is needed about the wants of tourists and their satisfaction with the interpretation currently available in Iceland.
Globally, investigation of interpretation in the context of tourism has been vast, and has been covered by many authors in a wide range of contexts over a lengthy period of time. Tilden [34] established a firm foundation with six principles of interpretation but many others, including Mills [35], had written about interpretation in natural settings previously, and recognition of the importance of effective interpretation burgeoned into many more fields from the 1960s. In the tourism context, for example, work by Weiler et al. demonstrates that interpretation is positively related to desired tourism outcomes [36]. Their study provides useful insight on the effects of cultural background, and of applying specific interpretive principles to elicit specific tourism outcomes, on visitor impact management. Similarly, use of interpretation as a visitor management tool in environmentally sensitive destinations has been investigated for several decades [26,37,38] and the success of interpretation in assisting environmental sustainability has been demonstrated in numerous empirical investigations [39,40]. This comprehensive body of literature provides a solid foundation upon which Iceland can build its own environmentally and culturally relevant strategy.
Aim, Contribution and Structure
The aim of this research was to discover how to enhance sustainability of the natural environment in Ásbyrgi, as an example of a protected area in Iceland, through onsite communication (interpretation) with visitors. This involved determining the information visitors want to receive, and the information managers want to convey, and preferred ways to deliver that information. To address this aim, we observed visitor behaviour in the park and interviewed visitors and guides, as well as park rangers and managers.
Results from the study provide information for managers in Ásbyrgi about how to provide interpretation that includes the messages visitors want to receive, delivered in the ways they most want to receive them, but the research contribution extends beyond this single context. We hope it starts a dialogue about effective interpretation in Iceland, which to date has been largely absent from the tourism literature. Having visitors be receptive to interpretive messages is more likely to have positive outcomes in terms of compliance with desired behaviour, and thus contribute to environmental protection in destinations.
The remainder of the paper describes the study location before exploring the data collection process, through observations and interviews. Presentation of the results follows the three parts of the research aim, commencing with a visitor profile before discussing their satisfaction, the type of information to convey, and how best to convey it.
Materials and Methods
This study focuses on one nature-based tourism destination to begin to answer some critical questions about using interpretation as a management tool in Iceland. A mixed method approach was used to collect both qualitative and quantitative data in two stages during August 2018, the year Iceland recorded its highest number of international visitors. In stage one, observational fieldwork was conducted at Ásbyrgi. This enabled the researchers to determine what interpretive material was already available, and in what form, at the site. It also provided data on how visitors use the site, and the material, by observing their behaviour (for example, time spent reading signs). In stage two, interviews were conducted with a range of stakeholders to understand their perceptions of, and satisfaction with, the existing interpretation and what other, if any, content and forms they would prefer.
Study Location
Ásbyrgi is located northeast of the township of Húsavík, in the northern section of Iceland's Vatnajökull National Park. Vatnajökull National Park (VNP), which was established in June 2008, is approximately 13,500 km² in size, covers about 13% of the surface area of Iceland, and is the largest national park in Europe [41] (p. 3). The park balances two key objectives of conserving natural and cultural features and enabling visitors to experience and enjoy the area [41]. Within VNP, Ásbyrgi is situated in the northern part of Jökulsárgljúfur Canyon, where the key attraction is a horseshoe-shaped glacial canyon, 3.5 km long and over one km wide, with sheer cliff faces up to 100 m high. The canyon was formed by two glacial bursts from the northern part of the Vatnajökull ice cap that resulted in catastrophic floods, one between eight and ten thousand years ago and a second approximately three thousand years ago. According to Icelandic cultural history, the canyon is shaped like a horseshoe because Sleipnir, the Norse god Odin's eight-legged horse, placed one of his hooves on the ground here as Odin rode past [42].
At the innermost, southern end of the canyon lies Botnstjörn, a small pond abutting the cliff face and surrounded by vegetation. The area is covered in woodland consisting mainly of birch, willow, and mountain ash, though more recently planted pines also grow in the area. Eyjan ('the island'), a distinctive rock formation, rises 25 m high and 250 m wide from the centre of Ásbyrgi. In addition to the remarkable geological features and hiking trails, bird life also attracts visitors to the area. Arctic fulmar nest on the steep cliffs, with many other birds frequenting the pond (e.g., Barrow's Goldeneye and Tufted Duck), woods, and meadows (e.g., Eurasian Wren and Meadow Pipit) in summer [42,43].
Visitors to Ásbyrgi most commonly arrive by private car or by bus on an organised tour. Entry is free and although there is a visitor centre at the junction with the main road many do not stop there. The visitor centre, which opened in 2007, contains a detailed exhibition on the natural and cultural history of Ásbyrgi and the surrounding area. The centre also has maps and information about hiking trails, the park service and recreational opportunities in the region, and an information desk attended by staff able to answer visitor questions. The Ásbyrgi camping ground contains 350 sites with access to toilets, showers and electricity, and many marked hiking trails that range in length from a few minutes of walking to several hours are available in the area. The most popular of these is a short, forested, trail to the pond (Botnstjörn) and viewing platforms from the parking lot. Just outside the park is a small shop where groceries and fuel can be purchased (see Figure 1).
Visitation to Ásbyrgi shows a growth in numbers roughly parallel with visitation to the country as a whole. In 2012 there were 600,000 international arrivals in Iceland and 50,000 visitors in Ásbyrgi. In 2018 there were 2.2 million in Iceland and 114,000 in Ásbyrgi. The parallel stops in 2020, however, when arrivals decreased in Iceland by 75% from the previous year yet only decreased in Ásbyrgi by 30% (from 116,000 to 82,000) [5,6,44]. This appears to show that although international visitation diminished dramatically, Icelandic nationals continued to visit Ásbyrgi, probably in greater numbers than before. This aligns with similar consequences of the COVID-19 pandemic recorded in other countries. For example, Fredman and Margaryan [45] cite increasing visitation to parks in Norway and Sweden, and McGivney [46] notes that bans on international travel stimulated interest in domestic tourism, including visiting natural areas, in the USA.
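The contrast drawn above between the national and local declines is a simple percent-change computation on the visitor counts quoted in the text; a quick check (the 2020 Iceland figure of 484,000 is a hypothetical stand-in for "just under half a million"):

```python
# Quick check of the percent changes quoted above, using the visitor counts
# given in the text (Iceland 2019 -> 2020, and Ásbyrgi 116,000 -> 82,000).

def pct_change(before: float, after: float) -> float:
    return 100.0 * (after - before) / before

print(f"Iceland: {pct_change(2_000_000, 484_000):+.1f}%")  # about -75.8%
print(f"Ásbyrgi: {pct_change(116_000, 82_000):+.1f}%")     # about -29.3%, quoted as 30%
```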
Thus, although the current pandemic decreased the overall visitor number in Ásbyrgi, the attraction of the area remains and, consequently, impacts on the natural environment still require careful management to ensure sustainability. This is being considered at both national and local levels. In 2015, Icelandic tourism authorities launched the development of Destination Management Plans (DMPs). Iceland is divided into seven different tourist regions, and each region has published a destination management plan accompanied by a three-year action plan with prioritized projects. The DMP for North Iceland prioritises infrastructure development and the marketing of a tourist route called the Diamond Circle. Ásbyrgi is one of the key destinations along that route [47].
Observations
Visitor behaviour was observed at three sites in Ásbyrgi during both stages of the research. The sites included the main trail entrance, the pond, and the visitor centre. Interpretive signage about the natural and cultural features of the area exists in all these locations. The main trail entrance is adjacent to the main car park and is marked by three large signs (Figure 2). The pond (Botnstjörn) includes two wooden viewing platforms and a collection of small signs (Figure 3). Small signs are also present throughout the park. The visitor centre includes a large range of interpretive material and displays (Figure 4).
The time spent by visitors and guides at each of the three observation sites was recorded using a stopwatch. Timing commenced when they arrived at the location and stopped when they left. Group composition was also recorded, in conjunction with activities the group or individual engaged in (for example, taking photographs). Where possible, gender, age and nationality were estimated. In total, 124 groups, ranging in size from one to 30 individuals, were included in the observational data.
Interviews
Based on the initial observational data collected in stage one, the best approach to data collection in stage two was determined. During the second stage, 120 visitors were interviewed about their experience in the park. In addition, tour guides (n = 7), rangers (n = 4) and park managers (n = 2) (total n = 13) were interviewed about their opinions on the most effective ways to communicate with park visitors and what should be prioritised for communication.
Park visitors were provided with an information sheet about the project and invited to participate in a short interview about their experience in Ásbyrgi. They were informed that the interview would take 3-5 min to complete during which they would be asked a series of questions by a researcher. The visitor-interview framework was designed to ensure a logical flow of questions and to keep the length of each interview under five minutes.
Visitors were asked questions based on three broad subject areas corresponding to our stated objectives. To determine their opinions on the current information available to them about the site they were visiting, they were asked where they searched for information before journeying to the site (e.g., websites, friends, guidebooks), what they expected during their experience, and if those expectations were met. They were also asked what type of information they would like to know about the site and the available experiences. To understand the best options for how information could be presented, visitors were prompted to state their preference for a range of options including hard-copy signs at the destination, an app available on an electronic device, a real-life guide, or hard-copy brochures. They were also asked to describe an example of interpretation that they had experienced in another location and considered best practice.
Of the 120 visitors interviewed, ten had visited the park on a bus tour and were either interviewed on the bus on the return journey or when they disembarked from the bus in Akureyri. Nine visitors were interviewed at the camp site in Ásbyrgi and two at the visitor centre. The remainder (n = 99) were interviewed as they exited the main trail and returned to the car park. In terms of response rates, 19 declined to take part in the interviews: ten refusals were based on the potential respondent lacking confidence to reply in either of the two languages offered (English and Icelandic) and nine were based on the potential respondent being unwilling to spend the time required.
Semistructured interviews were conducted with six park employees: two who held management positions and four rangers. These interviews varied in length from 20 to 45 min. The employee-interview framework was designed to ensure a logical flow of questions organized into six sections corresponding to our stated objectives; however, they were also invited to elaborate on points they considered especially important for the park's management.
Tour guide interviews were shorter, with most lasting less than ten minutes. Seven guides were asked a series of questions similar to those asked of visitors, about both their own opinions of the existing and possible future interpretation and about what they thought their customers, as visitors to Ásbyrgi, wanted.
When the data was collected, the diurnal temperature in the park ranged from 6–9 °C. It rained for at least part of every day and there were periods of very strong wind. When the rain and wind eased, small biting midges were abundant. These conditions are likely to have influenced comfort levels of interviewees and thus possibly their responses.
Combined, the results from the data collected reveal who visits Ásbyrgi and what activities they engage in whilst there, as well as the perspectives of a range of stakeholders on current and potential future interpretation strategies. This multimethod and multistakeholder approach was considered the most valuable for determining the best way forward for the park management's dual goal of satisfying visitors whilst protecting the natural environment.
Results and Discussion
This section is structured to follow the three aims of the study. It first presents findings on the visitors and their satisfaction with the existing interpretation in Ásbyrgi (Section 3.1), before discussing what type of further information should be conveyed, from the perspectives of both the park users (visitors and guides) and employees (managers and rangers) (Section 3.2). How those messages could best be presented is reviewed in the final section (Section 3.3).
Visitor Profile and Satisfaction with Interpretation
This section first provides a profile of the visitors who were observed and interviewed, and establishes how satisfied visitors and guides were with the information available in Ásbyrgi at the time of the study. It then presents results revealing what further information these stakeholders would like to know about the destination.
Visitor Profile
Of the 120 visitors interviewed, approximately one quarter were from either Austria or Germany and just over half were from other countries in Europe, totalling 75% from European countries. Visitors from North America made up 18% of the sample, with just 7% from the rest of the world. This 7% comprised two visitors from Asia, four from the Middle East and four from Oceania. This is reflective of national statistics, as most visitors to Iceland come from Europe [4].
Very few children were observed during the study, despite it being school holidays in Iceland. Most visitors were aged between 20 and 60, with younger people more likely to be travelling by car and older people by tour bus.
A popular reason people gave for visiting Ásbyrgi was, as expected, the unique natural features of the area-particularly the geology-and access to hiking trails: "Essentially for the nature and hiking areas" (male, mid-30s, Canadian). However, most stated that they visited because they were following an itinerary suggested by a travel agent, or because they were on a bus tour that included a stop in Ásbyrgi. Many had learnt about the location from a guidebook and/or online source. Wanting to visit somewhere that was different, or a location less well known than the more commonly visited sites in Iceland, was also a popular reason for being in Ásbyrgi.
Satisfaction with Current Information
The visitors (n = 120) we interviewed were overwhelmingly positive about the existing interpretation in Ásbyrgi, with 86% saying they were satisfied with it (e.g., "information here is very good", couple, mid-50s, German) and just 8% indicating they were not. The remaining 6% were undecided or chose not to respond.
The guides on tour buses (n = 7) also liked the interpretation in Ásbyrgi (100%), and their perception was that the visitors they transported were also satisfied. They noted that the information was minimal; however, they thought that was appropriate for visitors on organised tours. Tour guides provided their customers with extra information before arriving at the park and also while at the site.
Observations of visitors and guides at the three sites (n = 124) revealed that the time spent viewing signs could depend on many factors, but the three most important were the presence of other visitors, the weather conditions and the time the visitor had available. At the main trail entrance, 72 groups were observed over a period of 218 min. Eleven of the groups spent no time looking at the signs, and three groups each spent more than three minutes. Most groups (n = 22) spent between one and two minutes (Figure 5). At the pond, 20 groups were observed over a period of 96 min. Two of the groups spent less than two minutes on the viewing platform, and two groups spent more than ten minutes. The most popular times to spend there were between two and four minutes and between six and eight minutes (both n = 5) (Figure 6). Observations of groups at the visitor centre (n = 22) took place over 138 min and revealed that visitors generally spent more time in this location than they did at either of the other observational sites in the park. If visitors perceived crowding at the signs and they had to wait to view them, then they were less likely to spend time reading the signs. If the weather made standing still uncomfortable (rain in Ásbyrgi during August is common), then they were likely to spend less time reading the signs, even if there were no other people present. Visitors who arrived by private car and/or were camping nearby usually had more time to spend than those who were on a tour bus; thus, they were likely to devote some of that time to more thoroughly reading the signs. For example, the longest time spent at the trail entrance was six minutes and fifty seconds, by a group of two, a male and a female, both approximately thirty years of age, as they were leaving the park. In comparison with other times observed, this was abnormally long and thus an outlier in the data. In contrast, a group of eight who entered the park with a guide who translated the signs for them into another language were only there for two minutes and forty seconds.
Visitors spent more time looking at signs at the pond than at the main trail entrance. This could be because there were seats on the pond platform and people felt more comfortable stopping there. It may also be because at the entrance they were keen to start the walk without delay, whereas at the pond they were already part way through the walk.
These results suggest that managing the arrival of crowds so that congestion is minimized, for example, by staggering the arrival of large bus groups, may encourage reading of signs. Signs being under shelter may also encourage visitors to spend longer reading the information on them. This is supported by the observation that visitors spent most time engaging with the displays in the visitor centre, where they were comfortable and dry even in adverse weather conditions.
The Type of Information to Convey
Responses to the question about what interpretation visitors wanted were mixed. Although most respondents were satisfied with the current information on the signs in Ásbyrgi and at the visitor centre, some said they would like more. Popular topics for further information were related to the geology; for example, "about the earth" (Family, The Netherlands), and the cultural history, or "about the hidden people" (male, early 20s, Germany) who are said to live in the face of the cliff. A fairly common theme was the perceived need for more signs relating to practical guidance of where to go, rather than information about the area's features, on all hiking trails in Ásbyrgi.
Guides concurred with this finding, stating that more information about the length and difficulty of hiking trails would be beneficial. In the opinion of most guides, parts of the park are lacking signs, particularly good signs in English. They noted that the signs by the pond are small and specific to birds that may not be present at the same time as the visitor. Some guides suggested that general signs about Ásbyrgi be erected on the platforms by the pond, especially the platform with the least existing signage.
Park managers and rangers were also asked for their views on what content should be interpreted for visitors. Responses across these six employee respondents were broadly in agreement with each other, and also with what guides and visitors wanted. The geology and nature of the area, along with cultural heritage and mythology, were in the foreground, followed by safety. In terms of desired compliance from visitors, the staff in Ásbyrgi face issues similar to those in National Parks all over the world. They want visitors to: "camp in campsites, use the toilets, don't drive off-road" (Manager 1).
Managers, rangers, and guides expressed concern that visitors do not always behave appropriately. Signs requesting visitors to not use drones and to stay on designated paths, for example, were being ignored. Drones disturb wildlife in Ásbyrgi, which is an important nesting site for migratory birds. Visitors straying from paths can also disturb wildlife and impact greatly on vegetation, which tends to be slow-growing in Iceland's climate and thus highly susceptible to adverse long-term effects from trampling [48]. Staff were keen to find ways to overcome this and thus ensure maintenance of a sustainable environment for the flora and fauna, and for the visitors.
Whilst managerial staff responses were largely concerned with strategic planning, such as developing ranger programs and safety rules and information, rangers were more able to share their experience from the field and their communication with visitors. They elaborated on how the majority of visitors followed park rules, and expressed a desire to provide visitors with positive messages about behaviour and cleanliness. Both managerial staff and rangers expressed their commitment to the nature in Ásbyrgi and their eagerness to teach guests to respect and experience it; as one ranger described it, visitors should "be in the present" (Ranger 1) while there.
How the Information Should Be Presented
Responses about what the visitors wanted from interpretation in Ásbyrgi were mixed. Although most were satisfied with the amount of signage and the content of the signs, some said they wanted more, and a few said they would be happy with less. Most were receptive to face-to-face interpretation, while few were receptive to obtaining information through increased handheld technology.
The majority of visitors (66%) shared the opinion that the park should not contain more signs (unless as directions on long trails). Reasons given for this often related to a perception of signs being intrusive on the natural environment and a distraction from the beauty of it, with many respondents sharing the perception that signs were inappropriate for interpretation in a natural area: "avoid unnatural intervention . . . no more signs" (couple, mid-50s, Germany).
Over a quarter of the visitors (29%) said the park would benefit from more signs. As stated above, much of this was related to the desire for directional markers on tracks rather than signs containing information specifically about the history and context of the location. Most visitors thought the existing signs were valuable (e.g., "I like signs", male, early 30s, Italy), though there were a few for whom the signs held no interest (e.g., "didn't read the signs", female, early 20s, Germany).
When inquiring about other preferred ways to receive information in Ásbyrgi, we prompted visitors to think about electronic methods of communication, such as a downloadable app. While some were receptive to the idea (e.g., "the way to go", male, mid-50s, USA), most were not. When asked if they would use an app on their phone, 35% said yes and 43% said no. The number who were undecided in response to this question was high (22%). Limited access to data in this remote location was a key reason that interpretation available on a handheld electronic device lacked appeal. The desire to appreciate the nature without the interference of technology was a further key factor for many.
Again, findings from the interviews with guides matched those of the visitors. Most guides were not keen on the use of apps or other electronic equipment in Ásbyrgi, though several suggested QR codes on signs as an option for visitors who prefer information delivered in an electronic format.
Asked about how information should be presented, both managers and rangers spoke positively about the continued and expanded use of signs. However, they expressed caution about using signs that might negatively affect the visitor experience, such as very large signs or signs overloaded with text that would likely be ignored. This stakeholder group considered vocal messages, through ranger-led information sessions or roving interpretation, an effective way to communicate with visitors. However, they also stressed the importance of rangers (and other visitors) respecting those visitors who do not want this type of interference and just want to "enjoy the sounds of nature" (Ranger 2) as part of their experience. Funding limitations for providing face-to-face interpretation were also a consideration.
None of the staff were enthusiastic about using smartphone apps or similar technology for interpretation of the nature and history in Ásbyrgi. However, some emphasised that certain information needed to be collated and communicated to visitors on a national level. For this purpose, apps or other smartphone solutions were considered a practical option.
From the interviews, it was obvious that managers and rangers in Ásbyrgi are cognisant that "Tourism in national parks is essentially about providing memorable nature-based experiences for visitors" [21] and want to provide those experiences without disturbing the plant and animal life in the park. When visitor numbers were small this was relatively easy to achieve. As visitor numbers have grown, achieving this goal has become a greater challenge. The managers are keen to implement interpretation that stimulates visitor interest and promotes learning. They want to guide visitors in appropriate behaviour that will assist with sustaining the natural environment in Ásbyrgi. This follows recognition in the Northeast Iceland Strategic Tourism Plan (2009-2014) that "The development of appropriate infrastructure and signage for safe and secure visitation to properly manage the thousands of visitors that visit these sites annually is needed" [49] (p. 19).
Conclusions
Overall, most visitors and guides were satisfied with the existing interpretation at Ásbyrgi and thought the current amount of signage containing general information about the area was sufficient. Suggestions for more signage were most often related to directional signs on the hiking trails. However, as tourism pressure increases, a small number of signs that are difficult to read under adverse weather and crowding conditions might not be the best way to proceed.
As visitors spent more time at the pond than at the signs at the main trail entrance, the pond may be a better location for increased interpretation in the future. As visitors were highly appreciative of information communicated verbally by guides and rangers, this interpretation option is worthy of further exploration. If visitors receive information in ways they prefer, then they are likely to have an enhanced understanding of both the park's features and of appropriate behaviour to protect the environment.
Some visitors and guides were receptive to the idea of an app or other enhanced technology for obtaining information about Ásbyrgi, but most were not. Lack of access to data and not wanting technology to disturb their experience of the natural environment were the key reasons cited, with a common theme in comments being that visitors wanted to look at the nature, not at their phones. It would be useful to explore whether this sentiment is common in other nature-based tourism settings or is linked to the uniqueness of Iceland, its sense of remoteness for most visitors [50], and their desire for this natural experience. An interactive app on a handheld device is perhaps more appropriate for an urban setting, and for settings in which access to data is cheaper and more readily available.
The Ásbyrgi visitor centre is underutilized. Many visitors pass it on their way into the park and do not stop there. A required stop at the centre as visitors enter the park would enable managers to provide them with information before they commence their in situ experience. A downloadable app could be offered, using the centre's Wi-Fi, so visitors can access it later while they are in the park. Encouraging visitors to engage with interpretive displays in the centre to learn more, and hopefully understand more, about the need to care for the fragile environment before experiencing it could also assist with sustainability outcomes.
Moving forward, if visitation returns to pre-COVID-19 numbers and continues to grow, then pressure on the natural environment will remain of concern, regardless of the efficacy of interpretation. Managers could consider limiting numbers in Ásbyrgi at any one time. This strategy matches the 2015 Road Map for Icelandic Tourism which states that restricting access to certain sites may be necessary to protect nature [19]. Requiring visitors to stop first at the visitor centre could also be used as a tool to stagger visitor entry.
From this study we learnt that park visitors, and those charged with providing the visitor experience whilst also protecting the environment, want similar things: a safe and accessible space in which nature can be experienced, enjoyed, and appreciated. Interpretation is welcomed in this space but should not distract from its natural beauty. One option to meet this objective would be to not increase signage in Ásbyrgi (other than trail markers for safety) but to make information available prior to entry, ensuring visitors access the visitor centre and can find there the information they want to receive and the messages park staff want to convey. A second option is to increase ranger presence in Ásbyrgi, not just to enforce compliance with appropriate behaviour but also to inform and demonstrate the importance of these behaviours for long-term sustainability in a face-to-face context. This research in Ásbyrgi contributes to knowledge about what visitors do in national parks in Iceland and their desired experiences. The collected data was used to assess satisfaction with the current information provided for visitors, which aims to promote environmental sustainability in Ásbyrgi through appropriate visitor behaviour. To date, research on interpretation for visitors to natural sites in Iceland has been limited. This work is positioned to begin to address this gap. The results can assist park managers with plans for future delivery of this information at this site, but they are also applicable to other nature-based tourism sites in this country and further afield.
Limitations and Further Research
Framing of messages is important, as demonstrated in the seal-tourism study by Marschall et al. [33]. This study did not analyse the text on the signs. It noted the general topic of the text so that the researchers were familiar with the content visitors were exposed to (e.g., about the geology of the park), but did not focus on how that text was written. This would be a useful follow-up study, to ensure messages are conveyed in such a way as to obtain maximum value for the visitor but also for the educational wishes of the managers.
Interviews with visitors in this study were only offered in English and Icelandic, and 10 of the 139 people approached to be interviewed declined based on their lack of proficiency in those languages. Having a greater capacity to collect data in more languages would be useful in further studies.
The study took place during the month of August, which is toward the end of the peak summer tourist season in Iceland. There are likely to be more visitors in July and fewer in October, for example. Expanding observations to other months, when the origin of visitors and the experience of crowding may differ, could add to the depth of understanding about perceptions and satisfaction.
Enzyme Enhanced Protein Recovery from Green Biomass Pulp
Globally, animal feed protein is a key factor for production of meat for human consumption. In many parts of the world, protein for animal feed is not available in sufficient amounts, and demand is met only through import of feed protein. Such protein deficits can be minimized through optimized use of local protein resources, for example by upgrading green plant biomass. In the present work we consider different strategies for protein recovery from white clover and ryegrass screw press pulps, using aqueous extraction as well as carbohydrase- and protease-enhanced extraction. Protein recovery in these studies was determined as the yield of solubilized protein relative to the total protein in a screw press pulp. Aqueous extraction at pH 8.0 resulted in approx. 40 % protein recovery, while protease application (Savinase 16.0L, Novozymes) roughly doubled the protein yield. Application of plant cell wall degrading enzymes (Cellic CTec2 and Cellic HTec2, Novozymes) alone did not provide detectable protein recovery, while subsequent protease treatment resulted in approx. 95 % protein yield. Amino acid analysis demonstrated that RuBisCO peptides were the major component of the white clover and ryegrass pulp proteolyzates generated by the Savinase 16.0L protease.
Introduction
Biomass conversion and biorefinery technologies, which create value from biomass feedstocks, have so far focused primarily on upgrading the lignocellulosic components of the biomass. In such processes the plant protein has remained underexploited. Protein for animal feed is a key factor for production of meat for human consumption. In several areas of the world, such as Europe, protein for animal feed is imported while the local source of plant protein remains underexploited. In this work we investigate an optimized process for recovering plant protein from the green biomass of both monocots (ryegrass) and dicots (white clover). The process is developed with the perspective of being an integrated part of a value cascade for green plant biomass, making use of upgraded proteins for monogastric animal feed, and of the fibers plus residual proteins for cattle feed.
Efficient recovery of plant protein is a key point of green biomass value cascading. Until now, protein extraction from leaves has focused primarily on mechanical disintegration, green juice separation and thermal precipitation of protein from the green juice, while the pulp protein remained underexploited. Upgraded processing of green biomass includes protein recovery not only from the green juice, but also from the pulp. The resulting material (fibers + residual protein) is further used for C5 sugar recovery and production of high quality cattle feed.
Nutritional value (bioaccessibility, amino acid profile and lack of antinutritional factors) of green plant protein concentrate is a crucial parameter for the economic potential of a green plant biorefinery. Imported soybean protein in animal diets can only be replaced with leaf protein if the nutritional value of leaf protein at least matches the quality of soybean protein. This still has to be proven in commercial scale digestibility tests; however, several studies have replaced soy meal to varying degrees with green plant protein concentrates and reported promising results with no or little negative effect [1][2][3]. Perhaps more importantly, these studies also show how process optimisation can increase the quality of the protein concentrate by choosing the right up- and downstream processes [4].
The pivotal work on leaf protein extraction was started by Pirie [5,6], who suggested mechanical disintegration of fresh green biomass, followed by squeezing of the juice, and completed by protein separation from the liquid obtained. As in a typical eukaryotic cell, the proteins of a plant leaf cell are located in the plasma membrane (integral and peripheral proteins) and the cytoplasm. Cytoplasmic proteins constitute the major part of the total protein pool in the plant leaf cell. They are either directly dissolved in the cytoplasm or included in organelles. In comparison with other organelles, chloroplasts accumulate the major part of leaf protein (up to 75 % of total protein) [7]. Ribulose 1,5-bisphosphate carboxylase/oxygenase (RuBisCO, EC 4.1.1.39) is the most abundant enzyme of chloroplasts, catalyzing CO2 fixation in the first step of the Calvin cycle. RuBisCO is composed of eight large and eight small subunits with molecular weights of approx. 55 and 13 kDa, respectively [8,9]. RuBisCO has been reported as one of the most abundant proteins in the biosphere [10,11], and is thus of particular interest for green biomass biorefinery. After disintegration of fresh green leaves and squeezing of the juice, cytoplasm-dissolved proteins are harvested mainly in the juice, while plasma membrane associated proteins and organelle proteins (mainly from chloroplasts) are distributed between juice and solid press cake (pulp) in a proportion depending on plant species and pressing technique (on average, 50 % of total protein remains in the pulp fraction, bound to the cellulosic matrix of the biomass).
Although several physico-chemical methods of protein extraction from leaves have also been suggested (alkaline extraction [12]; aqueous ammonia extraction [13]), mechanical disintegration of biomass followed by juice squeezing currently seems to be the most relevant method for separating protein from the cellulosic matrix. Protein recovery resulting from common pressing of biomass and juice squeezing is approx. 40-50 % [6,14], while enhanced procedures with a higher extent of cell wall disruption provide approx. 75 % protein yield [15]. Even higher protein recovery was achieved for grasses (84 %) after complete mechanical disintegration of the biomass and tissue fractionation [16], but the industrial application of the latter process is still questionable because of its high energy consumption. Obviously, even after severe mechanical disintegration and liquid separation, a certain part of the plant protein still remains in the pulp.
Since the separated protein is of high economic interest in the green biorefinery concept, it makes sense to optimize the total yield of extracted plant protein. It is thus of particular interest to enhance the overall protein yield from fresh green leaves by recovering protein from the pulp in a low-cost and environmentally friendly process.
Enzymes are catalytic molecular machines whose application has already benefited many industrial processes from economic and technological points of view [17], and it seems reasonable to investigate the potential of enzymes for protein recovery from green biomass pulp. At least two different strategies may be suggested for enzymatic protein recovery from leaf pulp: cell wall hydrolysis by carbohydrases [18], and protein hydrolysis by proteases [19].
The hypothesis for this study is that a significant proportion of the protein content of green leaves remains in the pulp fraction after screw pressing, and that such protein can be utilized efficiently as animal feed in two different ways: by remaining in the pulp fraction and being used for dairy cows, or by being made bioaccessible to non-ruminant animals through enzyme hydrolysis and used as a feed ingredient for pigs, chickens, fish etc.
In this work we summarize our findings on protein recovery from ryegrass (Lolium perenne) and white clover (Trifolium repens) screw press pulps using aqueous extraction, as well as carbohydrase- and protease-enhanced extraction.
Green Biomass Pulps
White clover and ryegrass pulps were kindly provided by Morten Ambye-Jensen (Aarhus University, Denmark). Pulp samples were obtained after screw pressing of fresh plants and juice separation. The DM content of the white clover and ryegrass pulps was 32 ± 1 % and 33 ± 1 %, respectively. The crude Kjeldahl protein content of the white clover and ryegrass pulps was 16 ± 1 % and 10 ± 1 % with respect to DM. Pulp samples were stored at -20 °C.
Enzymes and Reagents
Cellic CTec2, Cellic HTec2, and Savinase 16.0L blends (liquid form) were produced by Novozymes (Denmark). All reagents used for buffer preparation and for the Kjeldahl assay were provided by Sigma-Aldrich unless otherwise stated.
Protein Assays
Four different methods for determining protein concentration in white clover and ryegrass samples were tested in this work (UV absorbance, Bradford, bicinchoninic acid, and Kjeldahl protein assays), while only the UV absorbance and Kjeldahl protein assays were chosen for further research. The RuBisCO extinction coefficient (absorbance of 1.7 for a 1 g/L concentration in a 1 cm optical pathway) was used for protein concentration determination by the UV absorbance protein assay (280 nm). The Bradford protein assay was performed according to the original work [20]. The bicinchoninic acid protein assay was performed using the Pierce BCA assay kit (Thermo Scientific), according to the manufacturer's instructions. BSA and bovine γ-globulins were used as standards for the bicinchoninic acid and Bradford assays, respectively.
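As a minimal illustration of the UV assay arithmetic, the sketch below converts an A280 reading into a protein concentration via the Beer-Lambert relation, using the RuBisCO extinction coefficient stated above; the dilution handling and function name are our own assumptions for this sketch.

```python
def protein_conc_uv(a280: float, dilution: float = 1.0,
                    ext_coeff: float = 1.7, path_cm: float = 1.0) -> float:
    """Protein concentration (g/L) from A280 via the Beer-Lambert law.

    ext_coeff is the RuBisCO extinction coefficient used in the paper:
    absorbance of 1.7 for a 1 g/L solution in a 1 cm optical pathway.
    """
    return a280 / (ext_coeff * path_cm) * dilution

# Example: a 5-fold diluted supernatant reading A280 = 0.34 corresponds
# to 0.34 / 1.7 * 5 = 1.0 g/L in the undiluted sample.
print(protein_conc_uv(0.34, dilution=5.0))  # 1.0
```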
The Kjeldahl assay was performed using a BÜCHI speed digester K-425/K-436, scrubber B-414, and distillation unit K-350, according to the manufacturer's instructions. 1 L of digesting reagent contained 134 g K2SO4, 7.3 g CuSO4, and 134 mL H2SO4. 1 L of ammonia trapping solution contained 500 g NaOH and 25 g Na2S2O3·5H2O. Ammonia-containing trapping solutions were titrated using 0.01 N HCl solution and a mixed indicator solution (400 mg methyl red indicator, 200 mg methyl blue indicator in 300 mL 95 % ethanol). Total Kjeldahl nitrogen content was converted into total crude protein content by multiplying with the empirical coefficient of 6.25.
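The Kjeldahl arithmetic implied above (0.01 N HCl titrant, nitrogen-to-protein factor 6.25) can be sketched as follows. The blank-titration correction is standard practice and is an assumption in this sketch, not a detail reported in the text; all example numbers are hypothetical.

```python
N_MOLAR_MASS = 14.007  # g/mol

def kjeldahl_crude_protein(v_hcl_ml: float, v_blank_ml: float,
                           hcl_normality: float, sample_mg: float,
                           factor: float = 6.25) -> float:
    """Crude protein (% of sample mass) from a Kjeldahl titration.

    Each mmol of HCl consumed corresponds to 1 mmol of trapped NH3,
    i.e. 14.007 mg of nitrogen; total nitrogen is converted to crude
    protein with the empirical factor 6.25 used in the paper.
    """
    nitrogen_mg = (v_hcl_ml - v_blank_ml) * hcl_normality * N_MOLAR_MASS
    nitrogen_pct = 100.0 * nitrogen_mg / sample_mg
    return nitrogen_pct * factor

# Example: 4.1 mL of 0.01 N HCl (blank 0.1 mL) for a 200 mg sample:
# N = 4.0 * 0.01 * 14.007 = 0.56 mg -> 0.28 % N -> 1.75 % crude protein.
print(round(kjeldahl_crude_protein(4.1, 0.1, 0.01, 200.0), 2))  # 1.75
```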
Amino acid analysis of pulp proteolyzates was performed at DTU. Data were corrected for the amount of added enzymes (enzyme blank) and presented as amino acid ratios, expressed in mole percent.
Biomass Hydrolysis and Proteolysis
Biomass hydrolysis (Cellic CTec2, Cellic HTec2) and proteolysis (Savinase 16.0L) reactions (reaction volume 20 mL, biomass dry matter concentration 20 mg/mL) were performed in 50 mL plastic tubes under continuous shaking (200 rpm). Sodium azide at a final concentration of 3 mM was used to prevent microbial growth in all samples. Tween 80 at a final concentration of 0.5 % wt was applied to test the effect of detergent on pulp protein recovery. The UV absorbance protein assay was applied for determining protein recovery kinetics (24, 48, 72 h points). The protein concentration at the 72 h point was also measured by Kjeldahl assay. Biomass samples were incubated at 50 °C at pH 5.0 (0.05 M sodium acetate/acetic acid buffer) and at pH 8.0 (0.05 M Na2HPO4/NaH2PO4 buffer) with either carbohydrases (Cellic CTec2 and Cellic HTec2) or proteases (Savinase 16.0L) for 24, 48, 72 h and then centrifuged (13,000 rpm, 10 min). The supernatants thus obtained were used for protein determination. Biomass samples incubated in water for 24, 48, 72 h were used as substrate blanks in the UV absorbance protein assay (to correct for non-protein UV-absorbing plant components). Enzyme blanks were performed using Savinase 16.0L (0.05 M Na2HPO4/NaH2PO4 buffer), Cellic CTec2, and Cellic HTec2 (0.05 M sodium acetate/acetic acid buffer) solutions. Biomass hydrolysis and proteolysis were performed in three replicates. Protein concentrations were corrected for the amount of added enzymes (enzyme blank) and presented as mean ± standard deviation.
Protein yield in this work was determined as the ratio of solubilized protein to the total protein in a screw press pulp, expressed in percent. In the case of enzyme addition (Cellic CTec2, Cellic HTec2, Savinase 16.0L), the protein yield was corrected for the amount of added enzymes (enzyme blank).
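A minimal sketch of this yield calculation, including the enzyme-blank correction, might look as follows; all numbers in the example are hypothetical, not the paper's measurements.

```python
def protein_yield_pct(protein_supernatant_mg: float, enzyme_blank_mg: float,
                      total_pulp_protein_mg: float) -> float:
    """Protein recovery yield (%) as defined in the paper: solubilized
    protein, corrected for added enzyme protein, relative to the total
    Kjeldahl protein in the screw press pulp."""
    recovered = protein_supernatant_mg - enzyme_blank_mg
    return 100.0 * recovered / total_pulp_protein_mg

# Example: 52 mg protein in the supernatant, of which 2 mg stems from the
# added enzyme (enzyme blank), against 64 mg total pulp protein:
print(round(protein_yield_pct(52.0, 2.0, 64.0), 1))  # 78.1
```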
SDS-PAGE
SDS-PAGE (4 % stacking gel and 12 % separating gel) was performed following the Mini-Protean Tetra Cell system instruction manual (Bio-Rad). Protein bands were revealed by staining with PageBlue staining solution (Thermo Scientific). PageRuler Plus prestained proteins (15-250 kDa, Thermo Scientific) were used as molecular weight markers.
Statistical Analysis and Other Computations
For analyzing the statistical difference between two data sets, Student's t test with unequal variances (Welch's t test) was performed using Microsoft Excel 2010 software. For analyzing the statistical difference between three or more data sets, single factor ANOVA was performed using the same software. Statistical significance was estimated at p < 0.05.
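The same analyses can be reproduced outside Excel; the sketch below shows SciPy equivalents (Welch's t test for two groups, one-way ANOVA for three or more) applied to hypothetical triplicate yields, which are assumptions for illustration only.

```python
from scipy import stats

# Hypothetical triplicate protein yields (%) for two treatments.
aqueous   = [42.0, 43.5, 43.6]
proteases = [78.2, 79.4, 79.7]

# Student's t test with unequal variances (Welch's t test), as in the paper.
t, p = stats.ttest_ind(aqueous, proteases, equal_var=False)
print(f"Welch t = {t:.2f}, p = {p:.4f}, significant: {p < 0.05}")

# Single factor ANOVA for three or more groups.
tween = [52.4, 53.1, 53.8]
f, p = stats.f_oneway(aqueous, proteases, tween)
print(f"ANOVA F = {f:.2f}, p = {p:.4f}")
```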
Protein Aqueous Extraction
White clover and ryegrass pulps were incubated at pH 5.0 and pH 8.0 for 72 h. Four different methods were used to quantify the protein concentration in the centrifuged solutions. All applied methods detected protein in the pH 8.0 incubated samples (Table 1), while no protein was found in any of the pH 5.0 incubated samples.
According to the data obtained, the UV absorbance, Pierce bicinchoninic acid, and Bradford assays resulted in statistically equal protein concentrations for white clover pulp, while a lower concentration was obtained by the Kjeldahl assay. At the same time, the UV absorbance, Bradford, and Kjeldahl assays resulted in statistically equal protein concentrations for ryegrass pulp, while a higher protein concentration was obtained by the Pierce bicinchoninic acid assay. Due to the complexity of plant biomass composition, there are many interfering compounds which may increase the analytical signal in all these methods (reducing agents for the bicinchoninic acid assay, aromatic compounds for the UV absorbance and Bradford assays); furthermore, the bicinchoninic acid and Bradford assays are not compatible with detergents. The Kjeldahl analysis results were taken as reference values for the present research, because Kjeldahl analysis is compatible with detergents and much less affected by non-protein compounds than the other methods. Kjeldahl analysis is able to determine organic nitrogen in the form of proteins, oligopeptides, and free amino acids, which was an additional advantage for our study.
SDS-PAGE revealed a single low molecular weight protein band (≤15 kDa) for pulp samples incubated at pH 8.0, while there were no bands for pulp samples incubated at pH 5.0 or in water (Fig. 1). Based on the data obtained, we suggest that protein extraction at pH 8.0 may occur due to the action of plant proteases, and that the observed protein band was formed by the front of the running liquid, containing the resulting peptides. In living cells, proteases are mainly localized in special organelles (lysosomes) and are not freely distributed in the cytoplasm. After mechanical processing of green biomass, lysosomes should be partially destroyed, which results in the liberation of proteases into the cytoplasm. The majority of plant proteases demonstrate alkaline pH optima [21,22], which is why protein recovery in aqueous extraction was observed at pH 8.0 rather than pH 5.0.
In the present research, the UV absorbance protein assay was chosen for investigating protein recovery kinetics, because this assay can easily be carried out for a large number of samples. Although the UV absorbance assay tends to overestimate the real protein concentration, it is still relevant for evaluating the relative progress of protein recovery. The protein concentration after 72 h incubation was measured by Kjeldahl assay and used for calculating the protein recovery yield. Protein recovery yields were calculated with respect to the pulp Kjeldahl crude protein content and expressed in percent. As can be seen from the data obtained (Table 2), 43 and 31 % of pulp protein was recovered by aqueous extraction at pH 8.0 from white clover and ryegrass pulps, respectively. Interestingly, Sari et al. [23] found that approx. 7 % of total protein can be extracted from non-pretreated ryegrass at pH 10 (25 °C, 1 day). The higher protein yield observed for ryegrass pulp in the present work should be due to the mechanical pretreatment of the biomass by screw pressing, as well as the action of plant proteases activated by the appropriate pH.
Detergents are amphipathic molecules which are able to destroy the ordered structure of the lipid bilayer membrane, facilitating solubilization of membrane-associated proteins. Different detergents are widely applied in routine biochemistry practice for solubilizing hydrophobic proteins (e.g. for purification of membrane integral proteins [24]); thus we decided to test the effect of detergent on pulp protein recovery. The Tween 80 detergent was chosen for all present experiments, because it is nontoxic and widely used in the food industry [25]. Tween 80 addition resulted in a statistically significant increase of the protein yield: 53 and 40 % of protein was recovered from white clover and ryegrass pulps, respectively.
Proteases Enhanced Protein Recovery
Proteases are enzymes involved in numerous metabolic pathways concerning protein degradation in cells [26]. Sari et al. [19] demonstrated that commercial Genencor (Danisco) proteases (Protex 40XL, Protex P, Protex 5L, Protex 50FP, and Protex 26L) increase protein recovery from soybean, rapeseed, and microalgae. Thus, the next step of this work was to investigate the potential of proteases for protein recovery from white clover and ryegrass pulps. The commercial protease blend Savinase 16.0L (Bacillus sp. proteases) was chosen for the corresponding investigations, because it was recommended as the most suitable enzyme for releasing peptides from lentil proteins in comparison with three other commercially available protease blends [27]. As can be seen from Table 3, Savinase 16.0L proteases resulted in approximately two times higher protein recovery than was observed for aqueous extraction at pH 8.0. Interestingly, Savinase 16.0L proteases provided similar protein yields from white clover and ryegrass pulps (79 and 76 %, respectively), while protein recovery by aqueous extraction was much higher for white clover pulp (43 %) than for ryegrass pulp (31 %). Such a phenomenon could arise from different proteolytic activities in the pulp samples, which in turn may arise from different extents of mechanical disintegration of the pulps. Ryegrass leaves demonstrate somewhat higher mechanical rigidity than white clover leaves, and thus lysosome disintegration and plant protease liberation in ryegrass pulp may be lower than those in white clover.
Tween 80 addition to Savinase 16.0L proteases did not result in a statistically significant increase of protein yield compared to the corresponding experiments without detergent. SDS-PAGE of Savinase 16.0L treated samples did not reveal any plant protein bands; all bands present corresponded to Savinase 16.0L proteins (data not shown). The latter observation clearly indicated that the molecular weight of the peptides formed by Savinase 16.0L was lower than 15 kDa (<15 kDa peptides could not be detected by 12 % SDS-PAGE).
The minimal enzyme dosage sufficient for the required degree of substrate conversion is an important economic and technological parameter of any enzyme catalyzed industrial process. Thus, the dependence of protein recovery on Savinase 16.0L dosage was investigated (Table 4). As can be seen from Table 4, the proteolysis yield was not increased by protease dosages higher than 5 mg/g. Moreover, a 1.25 mg/g dosage resulted in a statistically equal proteolysis yield compared to higher dosages. As already mentioned in the Introduction, RuBisCO is the most abundant enzyme of chloroplasts. Therefore, it was of particular interest to compare the amino acid composition of the pulp proteolyzates and RuBisCO. As can be seen from the data obtained (Fig. 2), the amino acid composition of the white clover and ryegrass pulp proteolyzates was rather similar to the composition of the corresponding RuBisCOs. The low methionine and cysteine contents obtained for the pulp proteolyzates may arise from experimental loss of these amino acids due to their oxidation during sample acid hydrolysis (6 M HCl). High similarity was found for approx. half of the analyzed amino acids (Glu + Gln, Gly, His, Ile, Phe, Thr, Tyr, Val), while some differences were observed for the others. In conclusion, the amino acid profiles of these proteolyzates suggest that the protein in the pulp fractions is very closely related to (or includes a major fraction of) the RuBisCO-type protein (see Fig. 2), confirming that RuBisCO peptides form the major part of the white clover and ryegrass pulp proteolyzates.
Bearing in mind that many peptides possess biological activity, the white clover and ryegrass RuBisCO sequences were compared with a database of bioactive peptides. Interestingly, many peptides with various biological activities can potentially be produced from RuBisCO by digestion with proteases (a list of bioactive peptides is provided in the Supplementary material). A number of RuBisCO peptides demonstrate beneficial health-related activities (e.g. immunostimulating, antioxidative, and glucose uptake stimulating activities), which may be an additional advantage of pulp proteolyzates for feed application. Nevertheless, further studies are required for detailed characterization of the biological effects of green biomass pulp proteolyzates.
Carbohydrases Enhanced Protein Recovery
All cells are known to have a cell membrane (also referred to as the plasma membrane) surrounding them, which protects and organizes the cell. Plant cells additionally have a cell wall, which provides further protection and mechanical support. The plant cell wall is composed of cellulose, hemicelluloses, and lignin. It is cellulose that provides plant leaves with their essential elasticity in nature and at the same time complicates their mechanical disintegration in a biorefinery [28]. Despite a certain degree of mechanical processing of white clover and ryegrass leaves during screw pressing, partially broken cellulosic cell walls may still create steric hindrances for the diffusion of proteins out of the cells. To eliminate these steric hindrances, carbohydrases may be applied for hydrolysis of the plant cell walls. In this work, the Cellic CTec2 and Cellic HTec2 enzyme blends were chosen for pulp cell wall hydrolysis as a well-known source of efficient cellulases and hemicellulases. White clover and ryegrass pulp samples were hydrolyzed for 72 h (50 °C, pH 5.0, 20 mg/mL biomass concentration, 30 mg/g enzyme dosage for Cellic CTec2 and Cellic HTec2). Although hydrolysis of the cellulosic cell walls into monomers was almost quantitative (based on glucose yield), no plant protein was detected in the supernatants by the Bradford and Kjeldahl protein assays after centrifugation of the samples. In order to exclude any error in protein determination, SDS-PAGE of the hydrolyzates was performed. No plant protein bands were identified in the gel; all bands present corresponded to Cellic CTec2 and Cellic HTec2 proteins (data not shown). Tween 80 addition at 0.5 % wt concentration did not affect plant protein recovery by carbohydrases.
The results obtained indicate that the steric hindrance of the cellulosic cell wall is not the only factor limiting pulp protein recovery. The major part of the pulp protein is located in chloroplasts, which are not affected by carbohydrases, because the chloroplast membrane consists of 50-60 % protein and 40 % lipids [7]. We also suggest that some part of the protein may aggregate into insoluble clusters. The chloroplast membrane, as well as hypothetical protein clusters, can be hydrolyzed by proteases. Thus, it was of interest to investigate whether the proteolysis yield can be increased by preliminary cell wall hydrolysis. A Cellic CTec2 and Cellic HTec2 mixture was applied for modest and exhaustive hydrolysis of white clover and ryegrass pulps. The samples thus obtained were treated with Savinase 16.0L proteases.
According to the data obtained (Table 5), modest hydrolysis of the cell walls did not result in a statistically significant increase of subsequent protein recovery by proteases, while exhaustive hydrolysis enhanced subsequent protein recovery by proteases approximately 1.2 times in comparison with unassisted protease action (Table 3). The latter observation indicates that the cellulosic cell walls in screw pressed pulp, and even in pulp modestly hydrolyzed by carbohydrases, create certain steric hindrances for the diffusion of proteases into the plant cells.
Conclusions
Currently, imported soybean protein is used in Europe as a major part of the protein diet in animal production. At the same time, a local, highly productive source of plant protein remains underexploited. Leaf protein is a valued product for animal feed production, which is potentially able to substitute high cost soybean protein. Substitution of soybean protein with leaf protein would minimize the feed protein deficit and support economic sustainability. In the present work we considered different strategies for protein recovery from white clover and ryegrass screw press pulps. Approximately 40 % of the total pulp protein was recovered by aqueous extraction at pH 8.0, while approx. 80 % of the protein was recovered by protease treatment (Savinase 16.0L). Further studies are required for detailed characterization of the biological effects of green biomass pulp proteolyzates.
Asymmetries Between Direct and Indirect Scalar Implicatures in Second Language Acquisition
A direct scalar implicature (DSI) arises when a sentence with a weaker term like sometimes implies the negation of the stronger alternative always (e.g., John sometimes (∼ not always) drinks coffee). A reverse implicature, often referred to as indirect scalar implicature (ISI), arises when the stronger term is under negation and implicates the weaker alternative (e.g., John doesn't always (∼ sometimes) drink coffee). Recent research suggests that English-speaking adults and children behave differently in interpreting these two types of SI (Cremers and Chemla, 2014; Bill et al., 2016). However, little attention has been paid to how these two types of SI are processed in a non-native, or second language (L2). By using a covered box paradigm, this study examines how these two types of SI are computed and suspended in a second language by measuring the visible vs. covered picture selection percentage as well as the response times (RTs) taken for the selection. Data collected from 26 native speakers of English and 24 L1-Chinese L2-English learners showed that unlike native speakers, L2 speakers showed asymmetries in their generation and suspension of DSI and ISI. That is, L2 speakers computed DSI more often than ISI, but they suspended ISI more frequently than DSI. Furthermore, our RT data suggested that L2 speakers suspended ISI not only more frequently but also significantly faster than DSI. Regarding the asymmetrical behavior among L2 speakers, we consider the number of alternative meanings involved in DSI vs. ISI suspension and different routes to the suspension of SI.
INTRODUCTION
Many linguistic forms are interpreted both semantically and pragmatically, which generates more than one meaning from the same form. This forces the hearer to consider all the alternative meanings and choose the meaning that is most appropriate in a given context. Alternative meanings are argued to be accessed and computed separately from the semantic meaning (Rooth, 1985, 1992, 2016). For example, the utterance in (1a) has the semantics of (1b) but implicates the proposition in (1c). Similarly, (2a) can be interpreted semantically as in (2b) and also pragmatically as in (2c).
(1) a. Bob sometimes went to school (DSI).
b. Bob went to school at least once and possibly all the time (always).
c. ∼ Bob didn't always go to school.
(2) a. Bob did not always go to school (ISI).
b. Bob failed to go to school at least once and possibly never went to school.
c. ∼ Bob sometimes went to school.
(The symbol "∼" in this paper is used to indicate implied meaning.)
The linguistic phenomenon that involves a set of alternatives ordered by informational strength (e.g., <never, rarely, sometimes, often, always>, <some, most, all>) is called scalar implicature (SI). Generating an implicature from a weaker term like sometimes by negating the stronger alternative always, as in (1a) and (1c), is often referred to as direct scalar implicature (DSI). An implicature derived from the stronger term under negation by considering the weaker alternative, as in (2a) and (2c), is called indirect scalar implicature (ISI).
An account of why and how we make inferences like (1c) and (2c) beyond what was said in (1a) and (2a) comes from the philosopher Grice's (1975) theory of inferential communication. According to the theory, we conduct our communication based on rational expectations and principles to meet the goals of communication. He called these principles and expectations 'maxims'. One of the maxims, the Quantity Maxim, states that interlocutors are cooperative by making their contribution as informative as is required but no more informative than is required. On this account, saying Bob sometimes went to school when in fact he always went to school is true but underinformative, thus violating the Quantity Maxim. This prompts the hearer to make the inference that the stronger term always does not hold, since otherwise the speaker, following the Quantity Maxim, would have said Bob always went to school.
Drawing on Grice's theory of inferential communication, Levinson (2000) proposes a Default Inference account of scalar implicatures to explain how scalar inference arises in real-time communication. According to Levinson (2000), scalar implicatures are default inferences that are generated automatically and are canceled only when the context calls for it. Scalar terms such as sometimes are stored in our memory in association with alternative terms like always, often, and rarely, due to habitual generation of the implicatures for sometimes (i.e., 'not always') in everyday communication (Gazdar, 1979; Levinson, 1983, 2000). Since scalar implicatures are made by default, they require little cognitive effort from a processing point of view. Some recent psycholinguistic studies on adult native speakers have provided evidence for the Default account (Grodner et al., 2010; Lewis and Phillips, 2011).
Arguing against the default view is a context-driven view such as the Relevance Theory supported by Sperber and Wilson (1986/1995) and Carston (2004). Within this approach, utterances are enriched with inferences only if they are relevant to reaching the speaker's intended meaning in a given context. From the point of view of the Relevance Theory, the implicated meaning of sometimes (∼ not always) in (1c) or the implied meaning of not always (∼ sometimes) in (2c) are not derived automatically by default, but rather are generated effortfully by canceling the initial literal meaning. In short, the context-driven approach argues that mental effort is required to derive contextual effects to generate scalar implicatures. As a matter of fact, a growing number of recent psycholinguistic studies on native speakers indicate that scalar implicature involves an extra cognitive process, evidenced by slower response times in sentence judgment tasks (Bott and Noveck, 2004), longer reading times in self-paced reading tasks (Breheny et al., 2006; Bergen and Grodner, 2012), and delayed eye fixations in a visual world eye-tracking task (Huang and Snedeker, 2009, 2011). For example, Bott and Noveck (2004) examined the generation of SI in adult native speakers of French by measuring response times (RTs) in a sentence-verification task containing underinformative (i.e., pragmatically infelicitous) sentences like (3a). Such underinformative sentences are false with a scalar inference (some but not all, as in (3b)) and true without the inference (some and possibly all, as in (3c)). Therefore, if participants compute SI (some but not all), they would answer 'False' to the statement in (3a), because all elephants are mammals. If participants answer 'True', it means that they suspend the SI inference and interpret some as some and possibly all, as in (3c).
(3) a. Some elephants are mammals.
b. ∼ Not all elephants are mammals.
c. Possibly all elephants are mammals.
Additionally, to investigate the speed of responses, participants in Experiment 1 were asked to judge such sentences under two different instructions. Under the 'Logical' condition, participants were instructed to interpret some as some and possibly all, whereas under the 'Pragmatic' condition, participants were instructed to interpret some as some but not all.
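The truth-conditional logic of this design can be made explicit with a small sketch: a 'logical' reading of some is true whenever the relevant proportion is non-zero, while a 'pragmatic' (SI-enriched) reading additionally requires that the proportion be below one. The function and its arguments are illustrative assumptions, not part of Bott and Noveck's materials.

```python
def judge_some_statement(true_fraction: float, reading: str) -> bool:
    """Truth value of 'Some X are Y', given the actual proportion of X
    that are Y (true_fraction in [0, 1]).

    reading = "logical":   some = 'some and possibly all'
    reading = "pragmatic": some = 'some but not all' (with SI)
    """
    some = true_fraction > 0.0
    if reading == "logical":
        return some
    return some and true_fraction < 1.0

# 'Some elephants are mammals': in fact all elephants are mammals (1.0).
print(judge_some_statement(1.0, "logical"))    # True  -> SI suspended
print(judge_some_statement(1.0, "pragmatic"))  # False -> SI computed
```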
The results supported the Relevance Theory account. That is, when participants were asked to judge pragmatically, they spent more time evaluating the underinformative sentences than when they were under the Logical condition. This further indicated that maintaining the SI inference was not effortless in processing and that SI computation required extra cognitive effort, as evidenced by longer RTs. This finding was also confirmed by subsequent studies using various methodologies (Degen and Tanenhaus, 2011; Bott et al., 2012).
By employing event-related potential (ERP) techniques, a large number of studies have investigated the integration of semantic interpretation and pragmatic inference in sentence processing. Noveck and Posada (2003) reported a smaller N400 effect in underinformative sentences than in both semantically and pragmatically acceptable sentences. However, Nieuwland et al. (2010, Experiment 1) reported a similar N400 pattern in reading underinformative sentences only among participants with low pragmatic ability. Using a picture-sentence verification methodology, Politzer-Ahles et al. (2013) tested Mandarin Chinese speakers' interpretation of the Chinese scalar item you de 'some of' in underinformative sentences.
The ERP results showed a sustained negativity effect when the pragmatic interpretation of scalar items was inconsistent with the context, indicating that suspending the pragmatic meaning and activating the semantic meaning required extra cognitive effort. More importantly, the authors found a qualitatively different ERP pattern for Chinese scalar items in semantically infelicitous sentences compared to pragmatically infelicitous sentences, indicating that the reanalysis process of canceling or suspending the pragmatic interpretation is distinct from the process of accessing the semantic meaning.
It has been suggested that canceling SI may require additional cognitive effort (Bill et al., 2015). Being an inference rather than linguistically encoded meaning, scalar implicatures can be explicitly canceled without logical contradiction. For example, in (4), the inference not always of the DSI item sometimes is explicitly canceled in Speaker B's utterance. Similarly, the inference sometimes of the ISI item not always in (5) can be explicitly canceled.

There are two routes to the no-inference interpretation. The first route is the following: under the assumption that a literal meaning without SI is the default, as proposed by the Relevance Theory, a no-inference reading can be reached simply by not generating SI. This way of computing a no-inference interpretation is argued to be cognitively less demanding than generating SI, since no-inference is the default interpretation. This is why young children, unlike adults, often prefer literal, no-inference interpretations for scalar items (Smith, 1980; Chierchia et al., 2001; Noveck, 2001; Papafragou and Musolino, 2003). The second route to the no-inference interpretation is to cancel SI after it has first been generated. Whether one's no-inference interpretation is computed through the first route (i.e., not generating SI at all) or through the second route (i.e., canceling SI) can be teased apart by measuring and comparing response times. We will return to this issue in the methodology section. In this paper, the term SI suspension is used generally to refer to the no-inference reading achieved either by not generating SI (in young children's case) or by canceling SI via re-calculation.
Traditionally, DSIs and ISIs have been considered the same type of inference; thus, it was assumed that they involve the same mechanisms and similar processing effort. However, recent studies have shown that adults and children behave differently on DSIs and ISIs. One proposal, made by Spector (2007) and Chierchia et al. (2012), is that ISIs are obligatory implicatures while DSIs are non-obligatory. According to this proposal, generating DSIs should require more effort than generating ISIs, but suspending obligatory ISIs should be harder than suspending non-obligatory DSIs. That is, since ISIs are obligatory, interpretations with ISIs (not always ∼ 'sometimes') should be easier to process than interpretations without ISIs (not always ∼ never). These two approaches make different predictions about how DSIs and ISIs are generated and processed. To test whether DSI and ISI are the same kind of inference, Cremers and Chemla (2014) examined the generation of ISI and compared it with DSI using a sentence verification task. In their second experiment, participants were asked to judge whether sentences with an ISI inference like (6) were true or false against a cover story.
(6) Not all of the [land animals] were fortified.
All sentences were expected to be true under the logical reading (not all and possibly none) by suspending the inference, but false under the pragmatic reading (not all but some). In addition, participants received explicit instruction on how to interpret these sentences. Half of the participants were assigned to the No-SI group (equivalent to the Logical condition in Bott and Noveck, 2004) and the other half to the SI group (equivalent to the Pragmatic condition). The findings suggested that ISI computation was cognitively demanding and indicated a general uniformity in the mechanism that gives rise to both DSI and ISI: scalar implicatures are associated with a delay regardless of the type of SI.
While DSI and ISI seem to be generated in a similar way, their suspension appears to proceed through different mechanisms or to require varying degrees of cognitive effort, as shown in Bill et al. (2016). Instead of using a truth-value judgment paradigm such as a sentence verification task, Bill et al. (2016) employed the covered box method developed by Huang et al. (2013). The covered box paradigm differs from the truth-value judgment methodology in that it explicitly offers the non-dominant no-inference interpretation, which encourages participants to consider both inference and no-inference interpretations of the test sentence. That is, while the truth-value judgment paradigm is good for examining inference computation, the covered box method is well suited to investigating inference suspension.
Using the covered box method, Bill et al. (2016) examined and compared three types of inference: presupposition, DSI, and ISI. However, we limit our attention here to Bill et al.'s comparison of DSI and ISI, since discussion of presuppositions falls outside the scope of our paper. In Bill et al. (2016), participants were given a test sentence with a visible picture and a black covered box. They were asked to choose the visible picture if it matched the test sentence and to choose the covered box if it did not. Example trials of the DSI and ISI conditions are provided in Figures 1 and 2, respectively (both adapted from Bill et al., 2016). The selection of the covered box in each condition indicates the generation of SI, whereas the selection of the visible picture suggests the suspension of SI. For instance, in Figure 1, the visible picture depicts a no-inference reading of some lions, i.e., some and possibly all lions; thus selecting the visible picture indicates suspension of DSI. If participants compute DSI (some but not all), they would reject the visible no-inference reading and select the covered box. In Figure 2, the visible picture shows a no-inference reading of not all, i.e., none of the rabbits. Selecting the visible picture indicates ISI suspension, and choosing the covered box suggests ISI computation.
There were three groups of English-speaking participants: adults, 4-5 year olds, and 7 year olds. Results showed that adults generated DSI significantly more often than ISI, whereas 4-5 year olds and 7 year olds computed ISI significantly more frequently than DSI. Adults were more likely to suspend the inference in ISI than in DSI (a low percentage of covered box selections in ISI vs. a high percentage in DSI), while children were more likely to suspend the inference in DSI than in ISI (the opposite pattern to adults).
In sum, there is a general uniformity of processing behavior between DSI and ISI computation such that DSI and ISI are computed at similar rates. However, there are asymmetries between DSI and ISI suspension. English-speaking adults are more likely to suspend ISI than DSI whereas children are more likely to suspend DSI than ISI.
Understanding how DSIs and ISIs are computed and suspended is important not only for linguistic and psycholinguistic theory but also for L2 acquisition theory. Previous research on SI in L2 acquisition has shown that SI computation is not a problem for L2 speakers; in fact, L2 speakers tend to generate SIs more than native speakers do (Lieberman, 2009; Slabakova, 2010; Miller et al., 2016; Snape and Hosoi, 2018). Slabakova (2010) hypothesizes that L2 speakers compute SI more than native speakers because SI cancelation may present challenges to L2 speakers. This issue, however, has not been tested empirically. The present study aims to test whether differences between native speakers and L2 speakers lie in SI suspension rather than SI computation, using the covered box paradigm (the logic of this method will be discussed in the next section). Moreover, while there is an increasing number of L2 studies on DSI, little research has been done on ISI in L2 acquisition. To fill this gap, this study examines and compares the computation and suspension of DSI vs. ISI by focusing on scalar items like <sometimes, always>. Thus, the findings of this study would advance our understanding of how alternative meanings are considered in the generation or suspension of SI in an L2.
SCALAR IMPLICATURES IN ADULT L2 SPEAKERS
The experimental work on inference computation in adult L2 learners is rather limited. The first study is Slabakova's (2010) study on how L1-Korean L2-English learners process scalar expressions, such as the quantifiers some and all, in their L1 Korean vs. L2 English. The critical experimental item for some is (7), which is logically true but pragmatically infelicitous. If participants reject such sentences, this provides clear evidence that they are able to derive SI and compute the pragmatic reading of some as some but not all. Acceptance of these sentences indicates that participants suspend SI and generate the logical meaning of some as some and possibly all.
(7) Some elephants have trunks. (Slabakova, 2010, p. 2452)

The results showed that Korean learners of English successfully acquired scalar implicatures in their L2. However, differences in response patterns still existed between native speakers and learners. That is, L1-Korean learners of L2-English were more likely than monolingual English or Korean speakers to reject pragmatically infelicitous sentences like (7). One possible explanation proposed by Slabakova (2010) is that it may be easier for native speakers to conjure up situations that make underinformative sentences plausible. For example, if one can think of a situation where some elephants' trunks were cut off due to accidents, the sentence in (7) is felicitous. Another possibility is a differential ability in SI suspension. That is, if one cancels the [not all] implicature, the statement in (7) should be interpreted as 'At least one and possibly all elephants have trunks', which is true. Since SI suspension arguably requires more cognitive effort, it might be more difficult in an L2, under the assumption that fewer cognitive resources are available for L2 processing than for L1 processing (see Green, 1986, 1998 for discussion on inhibitory control/linguistic inhibition in bilingual language performance).

A similar study was carried out on L1-English L2-Spanish learners' interpretation of Spanish quantifiers (Miller et al., 2016) and potential L1 influence in this domain. Unlike Korean, which has only one lexical item roughly equal to the English scalar term some, Spanish has two: algunos and unos. While both words have the pragmatic interpretation some but not all, only unos has the additional logical interpretation some and possibly all. With an inherent partitive feature, algunos cannot be interpreted logically. Imagine a situation in which someone has four dogs. When a postman arrives, three of the four dogs bark at the postman in front of the door. In this situation, using either algunos or unos to mean some but not all is felicitous in Some dogs barked at the postman. If all four dogs barked at the postman, the logical interpretation is desired; thus, it is only felicitous to use unos, as in (8a), and infelicitous to use algunos, as in (8b).
(8) Context: All four dogs bark at the postman.
"Some dogs barked at the postman." (Miller et al., 2016, p. 131) The fact that Spanish and English do not have a one-to-one mapping on some may present further challenges to L2 learners of Spanish. Miller et al. (2016) tested L1-English L2-Spanish learners' acquisition of the two Spanish scalar terms algunos and unos through a truth-value video acceptability judgment task. They discovered that English learners were able to obtain a native-like judgment on the two Spanish scalar terms irrespective of the fact that English has a different scalar implicature system. Specifically, not replying on a 1:1 mapping between English and Spanish scalar terms, English learners were less likely to accept algunos in non-partitive contexts but were equally likely to accept unos despite partitive or non-partitive contexts.
Similar findings were obtained in Snape and Hosoi's (2018) study on L1-Japanese speakers' interpretation of some in L2 English. The Japanese quantifier ikutsuka translates into some in English, as in (9).
(9) Akai maru no naka ni banana ga ikutsuka arimasu ka
    red circle-POSS inside of banana-NOM some to.be Q
    'Are some bananas in the red circle?'

However, unlike English some (or Spanish algunos), Japanese ikutsuka does not have a partitive meaning (not all); that is, it does not implicate the some but not all meaning. Using a picture-based acceptability judgment task, Snape and Hosoi (2018) examined whether intermediate-level L1-Japanese L2-English learners over-accept pragmatically infelicitous sentences due to L1 transfer and whether such L1 influence would disappear as proficiency increases. Conforming to previous studies, Snape and Hosoi (2018) found that L1-Japanese speakers had no difficulty in deriving scalar implicatures despite the mismatches between L2-English some and L1-Japanese ikutsuka 'some'. Moreover, there was no proficiency effect.
Lin (2016), employing a series of real-time psycholinguistic experiments on Chinese learners' acquisition of some, contributed to our knowledge of L2 speakers' processing of scalar implicatures. The first experiment used a truth-value judgment task. After reading a context sentence "John has many dictionaries. Some of the dictionaries are used", participants were asked to judge whether the following target sentences were true or false: "Some and possibly all of the dictionaries are used" or "Some but not all of the dictionaries are used." Results of the first experiment showed that Chinese speakers were faster to compute the pragmatic interpretation of some as some but not all and took more time to reject this interpretation. When suspending SI and generating the logical reading (some and possibly all), Chinese participants spent almost twice as much time as they did responding to the pragmatic interpretation. Additionally, they were more likely to reject the logical interpretation. These findings were in line with previous experimental results showing that adults favor the pragmatic interpretation, on which the SI inference is present.
The second experiment in Lin (2016) was motivated by the fact that, when participants were given unlimited time to respond, they were able to come up with an alternative plausible situation that would fit the sentence at hand. To prevent such additional brainstorming, participants in the second experiment were required to respond within a fixed amount of time. Interestingly, when Chinese speakers were pressed for time, the rejection rate of the logical interpretation some and possibly all increased noticeably. In other words, Chinese participants were more likely to reject the suspension of SI when under time pressure. This revealed that suspending SI (the logical interpretation) required more cognitive capacity; when L2 speakers' processing capacity was artificially constrained (e.g., when they were pressed for time), they preferred the cognitively less demanding reading (the pragmatic reading) obtained by computing SI.
In brief, L2 research on SI has shown that generating the SI inference is not difficult for L2 speakers, and it suggests that suspending the SI inference may be challenging for them.
However, the methodology used in previous L2 studies could not tease apart whether differences between L1 and L2 speakers in their rates of SI interpretation are due to difficulties associated with SI suspension in an L2. Our study aims to examine this issue through an investigation of L2 learners' time course of generating and suspending DSIs and ISIs by employing the covered box paradigm. In this study, we focus on only one type of scalar expression, namely frequency adverbs like <never, sometimes, always>.
Research Questions
In light of prior research on DSI and ISI, the present study addresses the following research questions:

RQ1: Do native and L2 speakers differ in generating DSI or ISI?
RQ2: Do native and L2 speakers differ in suspending DSI or ISI?
Methodology
The method used in this experiment was the covered box paradigm (Huang et al., 2013), as discussed above. This paradigm has been successfully applied to explore implicatures (Huang et al., 2013) and presuppositions (Schwarz, 2014; Romoli and Schwarz, 2015; Zehr et al., 2016), especially regarding the suspension of an inference. The difference from a traditional picture-selecting task is that the covered box paradigm includes a covered box (see the invisible, or hidden, picture on the right in Figure 3). Participants were told that one picture is hidden under the black box. In the current experiment, the instruction was that if the visible picture matches the stimulus, participants should choose the visible picture; if the visible picture does not match the stimulus, the match must be under the black box, and participants should choose the covered box. The advantage of using a covered box is that it is "...useful for testing for the availability of non-dominant interpretations..." (Romoli and Schwarz, 2015, p. 225). In the current study, the non-dominant interpretation, or the suspension of an inference, is the No-inference visible meaning, where the SI inference is absent. By employing the covered-box paradigm, the SI suspension reading can be displayed explicitly through a visible picture, and participants are forced to consider whether the shown picture corresponds to the stimulus. A rejection of the No-inference visible picture (instead choosing the covered box) clearly indicates that the SI suspension, or no-inference, interpretation is not available to the participants. The same rationale also applies to the dominant interpretation (the Inference visible meaning in the present study). The visible picture in Figure 3 displays a suspension, or No-inference, interpretation, which is not compatible with an Inference reading in which the implicature is present (Thomas didn't always go to the hospital).

FIGURE 3 | A test trial of the stimulus Thomas sometimes went to the hospital last week.
Test Design
In this experiment, two factors were manipulated in a 2 × 2 design: SI type and Visible picture. The SI type factor has two levels, the two kinds of SI discussed above: DSI and ISI. The Visible picture factor has two levels, depending on whether the visible picture shows the SI inference (Inference) or does not display the inference (No-inference). These two factors were crossed to create four conditions: (i) DSI with a visible picture depicting the inference, as in (10b); (ii) DSI with a visible picture depicting a no-inference reading, as in (10c); (iii) ISI with a visible picture depicting the inference, as in (11b); and (iv) ISI with a visible picture depicting a no-inference reading, as in (11c).
(10) a. DSI: Thomas sometimes went to school last week.
     b. Inference: ∼ Thomas didn't always go to school last week.
     c. No-inference: Thomas always went to school last week.

(11) a. ISI: Thomas didn't always go to school last week.
     b. Inference: ∼ Thomas sometimes went to school last week.
     c. No-inference: Thomas never went to school last week.
To convert (10b-c) and (11b-c) into visual stimuli for the covered box paradigm, we adapted the 5-day calendar-strip design that has been commonly used to investigate the availability of presupposition interpretations (Schwarz, 2014; Bill et al., 2015; Romoli and Schwarz, 2015; Bacovcin et al., 2016). In this experiment, the calendar strip contains icons of various activities and locations from Monday to Friday 4 . A continuous appearance of an activity or a location means that the action was repeated every day, whereas a mixture of activities or locations indicates that the first action stopped at some point and a new action started 5 . Table 1 displays four sample visible pictures for the four target conditions 6 . The two Inference pictures (12-13) were consistent with an SI interpretation, as in (10b) and (11b). The two No-inference pictures (14-15) illustrated (10c) and (11c), where the icon of the hospital in (14) and the circus in (15) was shown from Monday to Friday, blocking the SI interpretation. Half of the visible pictures of DSI and ISI were in the Inference condition and were predicted to be selected by both native and L2 speakers, given the preference for the inference (pragmatic) interpretation of scalar items in the literature. The other half of the visible pictures were in the No-inference condition, for which different response behavior was predicted depending on suspension or computation of SI. Selecting the No-inference visible picture indicates suspension of the SI inference, whereas rejecting the No-inference visible picture (instead selecting the covered box) suggests the computation of SI.

4 A reviewer commented that the calendar strip does not include Saturday and Sunday, which could leave room for participants to wonder whether the character might do something over the weekend. We provided our participants with the instruction that the calendar shows the character's activities last week from Monday to Friday; Saturday and Sunday were not in the scope of consideration. However, we acknowledge that the 5-day calendar might have triggered some participants to have a partitive meaning.

5 Participants were told that the icon for one day represented that the character went to that place only or did that one activity only. For instance, if a hospital icon appears on Monday, it means that the character only went to the hospital on Monday, nowhere else.
In addition to the target conditions, we also included controls and fillers using the same covered box method. Half of the visible pictures for controls and fillers matched the stimuli; the other half did not, calling for the selection of the covered box. Controls were used to check whether participants understood the task correctly; their sentence stimuli were simple negated and affirmative sentences. For instance, in Table 2, the visible picture for Louis went to the train station on Wednesday and Friday had a train station icon on Wednesday and Friday and thus triggered the visible picture selection. The visible picture for Edward didn't go to the movies on Thursday and Friday had a movie icon on Thursday and Friday, and participants were expected to choose the covered box.
Two types of fillers were included in this experiment. The first type was created using the presupposition trigger stop in both affirmative and negated sentences, e.g., Thomas stopped going to the hospital on Wednesday and Bob didn't stop going to school on Wednesday. The second type of filler used again, e.g., Phoebe went to the gym again on Wednesday during the week.
Procedure and Participants
Twenty-six native English speakers and twenty-four L1-Chinese L2-English learners participated in this study; all were students at a Midwestern university in the United States. After signing consent forms 7 , all participants completed three tasks: a brief background questionnaire, a proficiency test, and a covered-box task. The background questionnaire collected information about gender, age, and years of studying English. The proficiency test was based on the Common European Framework of Reference for Languages (CEFR) and contained 40 items with a maximum score of 40. A summary of participant information is shown in Table 3.
All participants completed the covered-box task on a computer; the program E-prime was used to display stimuli and collect data. Pictures were chosen by clicking on the selected picture with a mouse. A fixation cross was presented for 1000 ms at the center of the screen before the display of every stimulus sentence. Prior to the experimental trials, participants first completed an icon recognition task to make sure that they understood the icons correctly. Second, participants completed six practice items using the covered-box paradigm to familiarize themselves with the task. In the experimental trials, each participant completed a total of 52 items (16 targets, 16 controls, and 20 fillers) in about 15 min.

6 As pointed out by a reviewer, the activities described in test items vary considerably, from going to school to playing guitar. While 'always going to school' entails 'going to school every day', 'always playing guitar' may entail 'playing guitar every day and all day long'; that is, playing guitar every day for 5 minutes would not be described as 'always playing guitar'. We acknowledge that this methodological issue could possibly influence participants' interpretation.

7 Participants were all above 18 years old and gave written informed consent.

Note to Table 1: for ease of exposition, appropriate SI interpretations for each condition are added in parentheses; they did not appear in the actual experiment.
Data Analysis
For the purposes of the analysis, the percentage of covered vs. visible picture selections and response times (RTs) were the two dependent variables in the study. Responses were coded according to whether the visible or the covered picture was selected. RTs were calculated as the time taken to select a picture. The data were trimmed in two steps. First, we planned to remove participants who selected pictures that obviously did not match the test sentences, but no data had to be removed on this criterion. Second, RTs more than 3 standard deviations (SDs) from each participant's mean RT were removed. This trimming of extreme data points resulted in the loss of 2.6% of trials in each analysis for L1-Chinese L2-English learners and 2.4% of trials in each analysis for English speakers.
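As a rough illustration (not the authors' actual code), the trimming step could be implemented in R along the following lines; the data frame `dat` and its column names are hypothetical stand-ins:

```r
# Drop trials more than 3 SDs from each participant's mean RT.
# `dat`, `participant`, and `rt` are hypothetical names.
library(dplyr)

trimmed <- dat %>%
  group_by(participant) %>%
  filter(abs(rt - mean(rt)) <= 3 * sd(rt)) %>%
  ungroup()

# Proportion of trials lost to trimming
1 - nrow(trimmed) / nrow(dat)
```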
The picture selections (visible picture vs. covered box, from which the percentages were computed) were analyzed using a generalized logistic mixed-effects regression model, with the selection as the dependent variable, SI type (2 levels: DSI and ISI) and Group (2 levels: Native and L2) as fixed effects, and participants and items as random factors.
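A model of this form can be fitted with the lme4 package in R. The sketch below is our reconstruction from the description above, with hypothetical variable names (`covered` coded 1 for covered box selections); it is not the authors' code:

```r
# Covered-box choice (1 = covered, 0 = visible) with SI type, group, and
# their interaction as fixed effects; random intercepts for participants and items.
library(lme4)

m_choice <- glmer(covered ~ SItype * Group + (1 | participant) + (1 | item),
                  data = trimmed, family = binomial)
summary(m_choice)
```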
To correct for the skewed distribution of the data, RTs were log-transformed and analyzed using a linear mixed-effects regression model with log-transformed RTs as the dependent variable, SI type (2 levels: DSI and ISI) and Group (2 levels: Native and L2) as fixed effects, and participants and items as random factors 8 .
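The RT model could likewise be fitted as below. The use of lmerTest is our assumption, made because the results section reports Type III F tests with fractional (Satterthwaite-style) denominator degrees of freedom:

```r
# Log-transformed RTs with the same fixed and random structure.
# lmerTest is assumed; it supplies Type III F tests for lmer models.
library(lmerTest)

m_rt <- lmer(log(rt) ~ SItype * Group + (1 | participant) + (1 | item),
             data = trimmed)
anova(m_rt, type = "III")   # Type III tests of fixed effects
```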
Percentage of Picture Selection
To recapitulate the logic of the covered box method: if participants computed SI, they were expected to choose the visible picture when it depicted the inference and to choose the covered box when the visible picture illustrated no inference. Conversely, if participants suspended SI, they were expected to choose the visible picture when it portrayed no inference.
When the visible picture showed an inference (as in (12-13) in Table 1), both groups selected the visible picture 100% of the time in the DSI condition and over 97% of the time in the ISI condition. This indicates that both the native and L2 speaker groups computed DSI and ISI without any difficulty.

8 The mixed-effects generalized logistic model and the linear model first included proficiency as a (continuous) fixed factor. However, the results indicated that proficiency was an insignificant factor in both models; therefore, the simpler models without proficiency were refitted.

9 Tables reporting fixed-effects parameters appear in Appendix A.
The percentage of covered box selections in the No-inference condition of DSI and ISI for both groups is visualized in Figure 4. When the visible picture showed a no-inference reading (as in (14-15) in Table 1), both native and L2 groups behaved similarly, selecting the covered box about 86% of the time in the DSI condition. There was no significant difference between the two groups (z = 0.106, p = 0.916). However, the two groups differed in the ISI condition. While native speakers chose the covered box 86.2% of the time, L2 speakers selected the covered box only 72.2% of the time, as shown in Figure 4. This suggests that in the No-inference condition of ISI, Chinese speakers were more likely to choose the visible picture than English speakers (Chinese: 27.8% vs. English: 13.8%).
Results from a generalized logistic mixed-effects model suggested a main effect of SI type (β = 1.42, SE = 0.53, z = 2.65, p = 0.008) and an interaction between SI type and Group (β = −1.57, SE = 0.74, z = −2.11, p = 0.035). Post-hoc comparisons indicated that the percentage of covered box selections in the No-inference condition of ISI differed significantly between Chinese and English speakers (z = 2.082, p = 0.037), as did Chinese speakers' percentages between DSI and ISI (z = −2.65, p = 0.008).
What stood out from the results was the higher percentage of visible picture selections in the No-inference condition of ISI by Chinese speakers. The visible picture in this condition represented a logical, no-inference interpretation where the SI was suspended. The L2 group was significantly more likely to select the visible picture than the native speaker group in this condition. This could be interpreted in two ways. As noted in the introduction, SI suspension can be achieved through two routes: no computation of SI at all, or cancelation of the SI that was initially computed. First, this could mean that L2 speakers, compared to native speakers, have more difficulty computing ISI. Second, this could mean that L2 speakers are better at canceling ISI. We will return to this issue in the Discussion.

Response Times (RTs)

It has been suggested that a comprehensive RT analysis requires a comparative examination of visible picture and covered box selections, in particular when the two types of SI are compared. The reason is that the prediction is not simply that RTs will be the same or different between DSI and ISI, since we compare two substantially different scalar items, one with negation and one without. Rather, the prediction concerns whether the overall RT patterns categorized by SI computation (choosing the visible picture in the Inference condition and the covered box in the No-inference condition) and SI cancelation (choosing the visible picture in the No-inference condition) are similar or different. Thus, in this study we analyze and compare RTs for selecting the visible picture and RTs for selecting the covered box. Native speakers' and L2 speakers' RTs are summarized in Tables 4, 5, respectively.
As shown in Tables 4, 5, the mean RT for covered box selection in the DSI-Inference condition is 0 for both native and L2 speakers, since no one selected the covered box in this condition. Tables 4, 5 are further visualized in Figures 5, 6, respectively, using the ggplot2 package (Wickham, 2016). Selecting the visible picture in the Inference condition of DSI was fast for both groups (English: 1273 ms vs. Chinese: 1770 ms). In the Inference condition of ISI, selecting the visible picture was faster than selecting the covered box for both groups. These results are not surprising, since the visible pictures in the Inference condition of DSI and ISI were compatible with the reading on which the SI inference is present. More interesting are the RTs in the No-inference condition (in bold in Tables 4, 5), since RTs of visible picture selection represent the time to suspend SI whereas RTs of covered box selection represent the time to compute SI. Both native speakers and L2 speakers were faster in selecting the covered box (computing SI) than the visible picture (suspending SI) in both DSI and ISI 10 .

10 The outlier RTs of both groups are quite long, especially for L2 speakers, whose outlier RTs are twice as long as native speakers'. As a reviewer suggested, this calls for more implicit online measures (e.g., eye tracking, ERPs), since they would provide more insight into the real-time processing of SI and the integration of semantic and pragmatic meanings.
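For illustration, a figure of this kind could be produced with ggplot2 roughly as sketched below; `rt_summary` and its columns are hypothetical stand-ins for the cell means in Tables 4, 5, not the authors' plotting code:

```r
# Bar plot of cell-mean RTs; `rt_summary`, `condition`, `mean_rt`,
# `selection`, `group`, and `SItype` are hypothetical names.
library(ggplot2)

ggplot(rt_summary, aes(x = condition, y = mean_rt, fill = selection)) +
  geom_col(position = position_dodge()) +
  facet_grid(group ~ SItype) +
  labs(x = "Visible picture condition", y = "Mean RT (ms)", fill = "Selection")
```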
To investigate the RTs of computing SI statistically, log-transformed RTs of covered box selection in the No-inference condition were fitted with a linear mixed-effects regression model. Type III tests of fixed effects reported significant main effects of SI type (F(1, 184.50) = 17.142, p < 0.001) and Group (F(1, 44.89) = 10.12, p = 0.002) without a significant interaction between the two factors. RTs of selecting the covered box in the No-inference condition of ISI were significantly longer than those of DSI (β = 0.14, SE = 0.042, t = 3.288, p = 0.001), and RTs of English speakers were significantly shorter than those of Chinese speakers (β = −0.15, SE = 0.053, t = −2.863, p = 0.006). It is not surprising that native speakers were faster than the L2 group. Post-hoc comparisons suggested that it took longer to select the covered box in the No-inference condition of ISI than of DSI for both groups (English: t = −3.44, p < 0.001; Chinese: t = −3.28, p = 0.001). In other words, it took longer to compute ISI than DSI for both groups when the non-dominant (no-inference) meaning was explicitly offered. Another linear mixed-effects regression model was constructed to explore the RTs of suspending SI, i.e., RTs of selecting the visible picture in the No-inference condition of DSI and ISI. Type III tests of fixed effects reported significant main effects of SI type (F(1, 53.474) = 5.22, p = 0.026) and Group (F(1, 23.916) = 14.079, p < 0.001) with a marginally significant interaction between the two factors (F(1, 55.579) = 3.439, p = 0.069). RTs of selecting the visible picture in the No-inference condition of ISI were significantly faster than those of DSI (β = −0.29, SE = 0.09, t = −3.118, p = 0.003), and RTs of English speakers were significantly shorter than those of Chinese speakers (β = −0.489, SE = 0.121, t = −4.031, p = 0.0002). Post-hoc comparisons revealed that Chinese speakers were significantly faster in selecting the visible picture in ISI than in DSI (t = 3.118, p = 0.003), whereas English speakers' RTs did not contrast significantly between ISI and DSI (t = 0.338, p = 0.736). That is, unlike native speakers, who showed no RT differences in suspending DSI vs. ISI, Chinese speakers were significantly faster to suspend ISI than DSI. The next section moves on to the discussion of these findings.
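Post-hoc comparisons of this kind could be obtained, for example, with the emmeans package applied to the fitted models; the variable names are again our hypothetical reconstruction:

```r
# Pairwise comparisons: SI type within each group, and group within each SI type.
library(emmeans)

emmeans(m_rt, pairwise ~ SItype | Group)
emmeans(m_rt, pairwise ~ Group | SItype)
```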
DISCUSSION
This study aimed to investigate the computation and suspension of two types of SI in L2 acquisition. In this section, we discuss the results of the experiment by revisiting the research questions formulated above.
RQ1: Do Native and L2 Speakers Differ in Generating DSI or ISI?
By employing the covered box method, the ability to generate the SI inference was indicated by participants' selection of the visible picture in the Inference condition (where the inference was present in the visible picture) and selection of the covered box in the No-inference condition (where the inference was absent from the visible picture). For DSI, both groups selected the visible picture 100% of the time when it showed the inference and preferred the covered box when it did not (English 86% and Chinese 86.9%). Moreover, RTs of selecting the visible picture in the Inference condition and the covered box in the No-inference condition revealed that the SI inference was rapidly available to both native and L2 speakers (English: visible picture 1273 ms, covered box 1566 ms; Chinese: visible picture 1770 ms, covered box 2569 ms). There was no difference between the two groups in DSI computation. This is in line with findings from previous studies on L2 speakers' DSI computation (Slabakova, 2010; Miller et al., 2016; Snape and Hosoi, 2018).
As for ISI, both groups selected the visible picture over 97% of the time when it showed the inference and preferred the covered box when it did not (English: 86.2% and Chinese: 72.2%). Interestingly, English speakers were more likely than Chinese speakers to select the covered box when the visible picture showed no inference, and this difference was statistically significant (z = 2.082, p = 0.037). This seems to suggest that, compared to native speakers, L2 speakers find it difficult to compute ISI when the alternative (no-inference) meaning is explicitly offered. In terms of response times, and similar to the DSI outcomes, both groups quickly gained access to the SI inference in the Inference condition (English 1810 ms vs. Chinese 2391 ms) and the No-inference condition (English 2260 ms vs. Chinese 3499 ms). In short, while L2 speakers computed DSI at native-like levels, they did not compute ISI as frequently as native speakers.
RQ2: Do Native and L2 Speakers Differ in Suspending DSI or ISI?
The ability to suspend the SI inference was suggested by the selection of the visible picture in the No-inference condition where the visible picture showed a No-inference reading.
For DSI, both groups selected the visible picture in the No-inference condition at a similar rate (English 14% and Chinese 13.1%). However, it took significantly longer for Chinese speakers to select the visible picture in the No-inference condition (Chinese 7180 ms vs. English 2513 ms; t = 4.031, p = 0.0002). Since the visible picture selection percentages were similar in the two groups, the RT difference between them appears to be merely quantitative: L2 speakers are simply slower than native speakers in suspending DSI.
As for ISI, the two groups differed in the selection of the visible picture: Chinese speakers selected it 27.8% of the time, whereas English speakers selected it 13.8% of the time, a significant difference (z = 2.082, p = 0.037). The RT analysis also showed a difference between the two groups in selecting the visible picture (Chinese 5084 ms vs. English 2440 ms; t = 1.989, p = 0.054). Unlike the quantitative RT difference in suspending DSI, the difference between Chinese and English speakers in suspending ISI is qualitative, indicated by the fact that L2 speakers opted for the interpretation lacking ISI more often than native speakers did.
Taken together, the two types of SI inference were rapidly available to both native and L2 speakers, as suggested by the quick acceptance of the visible picture that was compatible with an inference reading. It should also be noted that when the visible picture displayed a No-inference reading, the rejection of the visible picture (and thus selection of the covered box) was rapid as well, for both groups and both types of SI. This further indicates that participants' preference for an inference reading of SI did not depend on the display of the visible picture (Inference vs. No-inference). Generating SI was overall preferred by both native and L2 speakers. The situation in which we observed a significant slowdown for L2 speakers was the selection of the visible picture in the No-inference condition. In this situation, L2 speakers faced the pressure of opposing alternatives: a conflict between the general preference for an inference reading and the visible No-inference reading. What is more interesting is that the pressure of this conflict seemed to be more pronounced for L2 speakers in DSI than in ISI, since RTs of selecting the visible picture were significantly longer in DSI than in ISI (DSI 7180 ms vs. ISI 5084 ms). More importantly, since acceptance of visible pictures in the No-inference condition represents SI suspension, L2 speakers seemed to be able to 'suspend' ISI faster than DSI. Another asymmetrical behavior was that L2 speakers did not compute ISI as frequently as native speakers, in that L2 speakers' percentage of covered box selections in the No-inference condition was lower than native speakers' (Chinese 72.2% vs. English 86.2%; z = 2.082, p = 0.037). According to the design of the covered box method, selecting the visible picture in the No-inference condition indicates the suspension of SI. However, as we mentioned in the introduction, there are two substantially different routes that lead to the same behavior (suspending an implicature), and we discuss the two routes in detail in the following paragraphs. We propose that Chinese speakers did not truly suspend the ISI inference, because they did not generate the inference in the first place, as suggested by their short RTs in selecting the visible No-inference picture of ISI. Instead, they simply selected the interpretation that was visibly offered at hand (the visible No-inference picture).
In sum, comparing DSI and ISI, L2 speakers differ from native speakers in interpreting sentences containing ISI items but not those containing DSI items. While L2 speakers did not compute ISI as frequently as native speakers, they 'suspended' ISI more frequently and faster than native speakers. These asymmetries between DSI and ISI, observed among L2 speakers but not among native speakers, pose two questions. First, why does ISI computation present more challenges to L2 speakers than DSI? Second, why and how do L2 speakers 'suspend' ISI more frequently and faster than DSI?
As for the first question, recall the two approaches to DSI vs. ISI discussed in the introduction: the traditional view, which treats DSI and ISI as the same type of implicature, and the view of ISI as an obligatory implicature (Spector, 2007; Chierchia et al., 2012). According to the traditional view, there should not be any asymmetries between DSI and ISI in their generation and suspension. While our native speaker data seem to support this view, the L2 speaker data clearly suggest that DSI and ISI do not belong to the same group of implicatures. Our L2 data cannot be explained by the ISI-as-obligatory approach either: if DSIs are non-obligatory and ISIs are obligatory implicatures, ISIs should be computed faster and more frequently than DSIs, and DSIs should be suspended more frequently than ISIs. L2 speakers in our study showed the opposite patterns; they computed DSIs more often than ISIs and suspended ISIs more often than DSIs.
To account for our results, we would like to consider differences between DSIs and ISIs in terms of the number of alternative meanings involved. Consider the structural differences between DSI and ISI. Sentences containing a weaker term that triggers DSI, as in (1a), repeated here as (16), are affirmative sentences. ISIs are triggered by negating the stronger term, as in (2a), repeated here as (17); that is, ISIs arise in negative sentences.

(16) Bob sometimes went to school.
(17) Bob didn't always go to school.
Within alternative-based approaches to interpretation, negation is one of the linguistic phenomena for which alternatives are computed in order for the hearer to reach the interpretation, and numerous psycholinguistic studies have provided empirical evidence supporting this claim (Fischler et al., 1983; Hasson and Glucksberg, 2006; Kaup et al., 2007; Lüdtke et al., 2008; Dale and Duran, 2011; Tian, 2014; Tian and Breheny, 2016). For example, understanding the utterance "John didn't buy a car" requires the hearer to first compute the alternative, non-negated meaning "John bought a car" and then negate it. That is, when interpreting (17), the hearer first computes the non-negated meaning "Bob always went to school" and then negates it. The negated sentence "Bob didn't always go to school" has two alternative meanings: inference ('Bob sometimes went to school.') and no-inference ('Bob possibly never went to school.'). In other words, in interpreting (17), three meanings must be computed: the non-negated meaning, the literal meaning of the negated sentence, and the inferred meaning of the negated sentence. The affirmative utterance in (16), on the other hand, evokes only two alternatives: "Bob sometimes went to school" and "Bob possibly always went to school". Under the assumption that the more alternative meanings are involved in understanding an utterance, the more cognitive effort is required, (17), containing an ISI item, should be more difficult to process than (16), containing a DSI item. This could explain why L2 speakers generated ISI less frequently than DSI.
This issue relates to the second question, about SI suspension. As discussed in Bill et al. (2015), speakers go through the following steps in interpreting sentences containing scalar items: (1) accessing the no-inference or literal interpretation; (2) generating SI by default; (3) suspending or canceling SI if needed. We briefly mentioned in the introduction that a no-inference interpretation can be achieved in two ways: (1) not generating SI at all; (2) canceling SI that was previously generated. Given the three steps proposed in Bill et al. (2015), speakers who suspend SI via the first route stop at the first step and therefore rapidly generate the no-inference interpretation. Speakers who suspend SI via the second route, on the other hand, must first derive the SI inference and then suspend it; this recalculation of meaning is cognitively costly and takes longer. The asymmetry in suspending DSI and ISI observed among L2 speakers in the present study is that it took Chinese speakers significantly longer to suspend DSI than ISI (DSI 7180 ms vs. ISI 5084 ms; t = 3.118, p = 0.003). The longer RTs for suspending DSI suggest that Chinese speakers suspended DSI through the second route, i.e., generating SI and then canceling it. In other words, in reading DSI sentences presented with the alternative no-inference meaning in the visible picture, L2 speakers were able to compute the inference and the alternative reading, and then cancel the inference. The shorter RTs for ISI cancelation reflect suspension through the first route, i.e., not generating SI at all. When the no-inference reading was offered through the visible picture in the ISI condition, it was difficult for L2 speakers to compute all the alternative readings relevant to the target sentence; so, rather than computing alternatives, L2 speakers opted for the interpretation that was visibly offered.
Finally, it is important to bear in mind that our study only examined frequency-adverb scalar items; thus, our results may not be generalizable to all DSIs and ISIs. According to Van Tiel et al.'s (2016) proposal on 'scalar diversity', not all DSI items behave the same. For example, Van Tiel et al. (2016) tested 50 participants (20 males and 30 females aged 18-67) on a sentence evaluation task using Mechanical Turk. Participants saw a sentence like John says: She is intelligent and were asked a question like Would you conclude from this that, according to John, she is not brilliant? Results showed that 100% of the participants derived SI for <cheap, free> and <sometimes, always> (i.e., 50 out of 50 participants), while only 6% of the participants (i.e., three out of 50) computed SI for <intelligent, brilliant> (see Van Tiel et al., 2016 and Schaeken, 2017 for a detailed discussion of factors influencing the rate of SI derivation).
Furthermore, we would like to consider experimental task effects and potential individual differences in interpreting the data. Studies on children have shown that children's logical vs. pragmatic responses differ depending on task type, instruction, training, or experimental setting (see Huang and Snedeker, 2009 for discussion of experimental task effects on inference computation in children). The patterns observed in our study may in part be due to extraneous task effects of the covered-box paradigm related to overall cognitive processing. Task effects in pragmatic inference computation suggest that inference processing is closely related to cognitive abilities. In fact, recent studies have also shown that there are individual differences among L2 speakers as well as native speakers in their computation or suspension of inferences, and have identified working memory as a main factor affecting inference computation. Additionally, many studies have split responders into distinct groups, e.g., pragmatic vs. logical responders, or responders with high vs. low pragmatic abilities, since participants do not have the same threshold of informativeness (Noveck and Posada, 2003; Nieuwland et al., 2010; Tavano and Kaiser, 2010). Experimental task effects and individual differences are therefore important issues for future research on pragmatic processing.
CONCLUSION
The main goal of the current study was to examine how L2 speakers compute and suspend two types of SI, DSI and ISI. While native speakers did not compute or suspend DSI and ISI differently, L2 speakers showed asymmetrical behavior: they computed DSI more often and faster than ISI, but suspended ISI more frequently and faster than DSI. The asymmetries in the percentage and time course of suspension between DSI and ISI further revealed that L2 speakers went through different routes to suspend ISI and DSI, depending on the number of alternative meanings involved in the suspension. DSI arises in affirmative sentences, while ISI arises in negated sentences, which evoke the computation of more alternative meanings and re-calculation. It is cognitively more demanding to generate multiple alternative meanings, re-evaluate them, and eventually cancel one of them.
ETHICS STATEMENT
The protocol (#2018-0330) was approved by the ED/SBS IRB (Education and Social/Behavioral Science Institutional Review Board) at the University of Wisconsin-Madison. All subjects gave written informed consent in accordance with the Declaration of Helsinki.
AUTHOR CONTRIBUTIONS
SF and JC contributed to conception and design of the experiment. SF implemented the experiment and oversaw data collection. Both authors handled analysis of the experiments, contributed to manuscript revision and approved the submitted version.
Connecting undergraduate science education with the needs of today’s graduates
Undergraduate science programs are not providing graduates with the knowledge base and skills they need to be successful on today's job market. Curricular changes relevant to today's marketplace, and more opportunities for internships and work experience during students' post-secondary education, would facilitate a smoother transition to the working world and help employers find graduates who possess both the hard and soft skills needed in the workplace. In this article, we discuss these issues and offer solutions that would generate more marketplace-ready undergraduates.
Main text
The premium for an undergraduate degree is high: compared to high school graduates, college graduates in Science, Technology, Engineering and Mathematics (STEM) fields earn on average $1.5 million more over their lifetime (Austin, 2014). This effect remains even after controlling for family background and other variables that could differentiate the population of students who pursue a college education from those who do not. Thus, attending college and studying a STEM field is still worth the cost (Daly & Bengali, 2014) despite ever-increasing tuition rates, the growing burden of student debt (Ernst, 2014), and the bad job market students encounter upon graduation (Weissman, 2014). Nevertheless, successfully obtaining an education certainly does not guarantee success in today's job market (Bersin, 2014).
Undergraduate education is badly in need of reform. Receiving an education is not the same as receiving job training, and too many students graduate with heavy debt and are ill-equipped to thrive in today's job market (Carpenter, 2014). The US Census Bureau has documented that many students cannot find jobs after graduation, and many of those who do find themselves employed in work that does not fully match their education and training. Students would be better served by an education that is integrated with the job market they will encounter post-graduation, and one that provides not only technical skills but also the soft skills most in demand by employers, such as communication and interpersonal skills; decision-making skills; time and project management skills; problem-solving skills; and the ability to learn new skills quickly (the Association of American Colleges and Universities, 2010, 2013; Tugend, 2013; White, 2013). In other words, science training at the undergraduate level should move beyond rote memorization of facts and personal character building (persistence, perseverance, or motivation); it needs to become specific and relevant to jobs.
Most departments still use an old curriculum to teach traditional chemistry, biochemistry, biology, and molecular biology. Most students receive the same general curriculum no matter what they want as a career: find a job in industry, go to graduate school to do research, go to medical school to become a practicing physician, etc. As a consequence of undergraduate institutions doing a poor job of preparing students to be competitive for meaningful jobs upon graduation, many students pursue additional graduate training simply because they are not aware of other ways in which their undergraduate science degree could be used.
Currently, many agencies central to biochemistry and molecular biology have made curriculum recommendations. For example, the National Research Council (2010) has made some recommendations, but these have not been widely implemented and miss the mark in terms of preparing highly functional, work-ready graduates, because they are too focused on traditional curricula and classroom learning. Although funding agencies such as the National Science Foundation (NSF) push for education and outreach activities under the "broader impacts" criteria for grants, they have not sufficiently emphasized professional development of trainees specifically with respect to today's job market. To reform undergraduate science education, we discuss below our suggestions for updating curricula and integrating work experience into programs.
Curricular changes
At many universities, the current curricular model is outdated and employers frequently complain that graduates do not emerge with the skills they need (Dostis, 2013). Disciplines are largely compartmentalized for historical reasons, yet most creative and innovative work comes from bridging disciplines and using concepts and tools from a variety of fields to solve important problems.
One solution is to build interdisciplinary topics into standard STEM courses in a way that allows students to explore current problems in environmental science, energy fields, and/or public health. For example, green/sustainable chemistry, currently a central theme in all the divisions at the Environmental Protection Agency (EPA), could be incorporated into the traditional biochemistry curriculum. Green chemistry is an interdisciplinary topic and needs to be addressed from a variety of perspectives: chemical synthesis, environmental health, and the biochemistry and molecular biology of mechanisms of action. Evidence suggests that students show great interest in research opportunities in green chemistry and risk assessment, and students themselves are clearly pushing for incorporating current issues in energy, environment, and health into their core science curriculum (Goodman, 2009). These are excellent topics for teaching biochemistry and molecular biology students about how interdisciplinary life science topics interconnect with public health.
Current research and marketplace issues are highly interdisciplinary, and thus, students should be trained in interdisciplinary work. Another example of this is in the collaboration between mathematicians and biologists to understand metabolic systems (e.g., folate metabolism, or insulin signaling) in cells. The function of the network is an emergent property that cannot be understood at the level of individual components. The response of metabolic networks to perturbations cannot be analyzed by verbal arguments; instead, it is necessary to describe the network using a system of differential equations. This allows researchers to study its dynamic behavior with simulations. The simulations will in turn suggest interesting predictions about network function to test experimentally in the lab. The feedback between experiment and theoretical modeling is a powerful approach to complex biological problems and is only possible when interdisciplinary teams work together.
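To make this modeling loop concrete, the sketch below shows the kind of exercise an interdisciplinary course could assign: a deliberately simple, hypothetical two-metabolite network (not a validated folate or insulin model) written as a system of differential equations and simulated to see how its steady state responds to a perturbation. The network structure, feedback term, and parameter values are all invented for illustration.

```python
# Toy metabolic network: influx -> A -> B -> degradation, with the
# product B feeding back on A's production (a simple emergent feedback).
from scipy.integrate import solve_ivp

def network(t, y, v_in, k1, k2):
    a, b = y
    da = v_in / (1.0 + b) - k1 * a   # influx inhibited by B, drained into B
    db = k1 * a - k2 * b             # B made from A, then degraded
    return [da, db]

# Simulate to (approximate) steady state under baseline parameters.
baseline = solve_ivp(network, (0.0, 200.0), [0.0, 0.0], args=(1.0, 0.5, 0.3))

# "Perturbation experiment": halve the influx and watch the network settle.
perturbed = solve_ivp(network, (0.0, 200.0), baseline.y[:, -1],
                      args=(0.5, 0.5, 0.3))

print("baseline steady state :", baseline.y[:, -1])
print("after halving influx  :", perturbed.y[:, -1])
```

Students could then compare such simulated responses with wet-lab measurements, closing the experiment-theory feedback loop described above.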
Interdisciplinary training in teams provides students the opportunity to develop soft skills such as communicating with researchers in different fields-each of which has specialized language and concepts. In addition, coursework in mathematical biology is an opportunity for STEM students to receive adequate training in quantitative skills (mathematics, statistics and data analysis) and computer programming. These skills are not only critical for pursuing a research career, but are also highly transferable skills that are valued by employers in a variety of fields.
Undergraduate programs could also take lessons from innovative graduate school initiatives. A course co-organized by the Society for Cell Biology and the Keck Graduate Institute, and funded by the biotech company EMD Millipore, Inc., provides a "crash course" for 40 selected graduate students and postdoctoral fellows interested in transitioning to careers in the biotechnology industry. The course provides MBA-style training, professional development workshops, and a team-based project. Funding from EMD Millipore is a generous investment in the training of scientists that the company may ultimately recruit. The demand for such programs is extremely high, and there is clearly a need for more programs like this because STEM graduate programs currently fail to prepare their students (or postdoctoral fellows) for jobs outside of academia. Similar programs could be established in the undergraduate setting to fill a similar gap. We are aware of some institutions that are moving in this direction. For example, liberal arts colleges such as Mount Holyoke, which are traditionally not focused on job training, are creating an entrepreneurial track and developing a program focused on environmental sustainability (Weir, 2014). Connecticut College, another liberal arts college, has created a program (Connecticut College's Career Enhancing Life Skills) to help undergraduates identify and develop a career path starting from their first year in college and to establish connections with potential employers throughout their undergraduate career. In addition to helping students, in almost every context, enhancing the communication between potential employers and faculty could help identify the skills that are currently lacking in many of the graduates produced by universities and lead to productive dialogue about curricular changes to remedy this issue.
Work experience
In addition to incorporating curricular changes, departments and institutions should be providing bridges to the workplace, such as internships. These provide critical work experience leading to the development of skills that students cannot get in the classroom, such as firm-specific technical training, but also soft skills such as working collaboratively, facilitating group decision-making, serving customers, and sales/marketing. Internships and work experience also provide critical networking opportunities that may lead to job opportunities (job offers, referrals, recommendations, etc.). Some universities, such as the Ira A. Fulton Schools of Engineering at Arizona State University, have already created partnerships with industry for mentorship and internships. Urban universities could readily incorporate internships into their programs, as they have the advantage of being surrounded by companies that can offer internship experiences; rural universities could create programs with willing companies that could help students with logistics such as transportation and housing. Ultimately, how better for undergraduates to obtain the real-world work experience needed to successfully gain employment after graduation than by working in the real world as part of their education?
Conclusions
In today's competitive job market, students need to emerge from their undergraduate STEM education with relevant technical skills as well as soft skills such as creativity, resourcefulness, intellectual curiosity, respect for others, ability to be self-directed yet able to work effectively as part of a team. Most importantly, they should emerge with a good understanding of the job options they have in a variety of sectors, work experience, and a network of professional contacts that will help them move forward in their careers with confidence, clarity and purpose.
We propose the following recommendations for changes to undergraduate STEM curriculum to better prepare students to thrive in the job market they will have to navigate upon graduation: 1) Universities/departments need to update traditional core curricula to include interdisciplinary topics that highlight connections between the standard curriculum and current, real-world STEM issues. To achieve this, there are three levels of change that institutions could invoke; these levels increase in difficulty and impact both on the institution and on students, but ultimately these changes would add significant value to students' career development.
First, topics such as green chemistry and computational biology could be the focus of at least one lecture per semester in standard chemistry, biology/molecular biology, and/or biochemistry courses. This would be an easy change to the core curricula that would introduce students to topics and skills that directly apply to currently trending marketplace issues.
Second, STEM programs could encourage students to take non-science courses that are directly relevant to the job market. These courses could be taken as part of students' elective coursework. We suggest that STEM programs should encourage students to take courses that would build business acumen (for example, courses on organizational behavior, leadership, entrepreneurship, strategy, and operations management); develop interdisciplinary teamwork skills through the integration of topics covering biochemistry/molecular biology, math, computer programming/coding, and public health; and, lastly, enrich workplace readiness through career development topics including interviewing, resume building and networking. Universities could develop a "Preparing STEM Professionals" certificate program that would give students an incentive to enroll in these types of courses.
A third, stretch solution would be for institutions to create entirely new courses that address the intersection of the standard core curricula with today's most important global topics. Some institutions are taking steps in this direction. For example, the chemistry and biochemistry courses at California State University at Fullerton include such offerings as biotechnology: science, business, and society; environmental pollution and solutions; introduction to computational genomics; advanced computational biochemistry; and internships in chemistry and biochemistry. Other institutions should move in similar directions. As part of these courses, students should be given opportunities to work collaboratively on projects in interdisciplinary teams, as the ability to work as part of a team is highly valued by employers. Training in quantitative data analysis and programming, sorely lacking in too many undergraduate biology/chemistry/biochemistry programs, should also be emphasized.
Ultimately, building the interdisciplinary and "soft" skills employers desire should be the focus of these curricular changes. The curriculum should teach students to think critically and creatively about current and future problems that need solving, skills that will be valued by employers.
There are likely existing programs that are achieving the outcomes we are suggesting. It would be useful for publishers to coordinate a series of articles on this subject to build awareness of the curricular changes that are already being implemented in institutions across the country and to develop guidelines and best practices for universities as they reform and update their STEM curricula to make them work-ready.
2) Universities should provide impactful opportunities and support for internships and work experience. It is through these types of experiences that students will truly gain the most useful work preparedness during their undergraduate career. Students will build real work skills and develop contacts that will be important for future employment. Perhaps the least challenging way to accomplish meaningful internships is for institutions to form formal partnerships with local or regional companies. To further incentivize integrating work experience into undergraduate curricula, we believe that funding agencies, such as NSF and the National Institutes of Health, have a key role to play. In the same way that funding agencies have promoted education and outreach in the "broader impacts" criterion for grants, they should also emphasize the need for clear, actionable career development opportunities (in academic and non-academic settings) for students. For example, in addition to NSF funding Research Experiences for Undergraduates (REUs) which are largely at academic institutions, NSF and NIH could also organize bridging experiences for students to explore research in industry, the world of science policy, and careers in science writing and editing. Funding agencies could develop workforce innovation funding opportunities that could incentivize the creation of unique solutions to creating work experience for undergraduates and these novel programs could serve as models for other institutions. Ultimately, funding agencies could drive a culture of creating practical work experience as part of undergraduate education.
Again, some institutions have found unique ways to successfully incorporate work experience into undergraduate STEM curricula in a way that benefits both the institution and students. Publishers could commission articles from such programs across the country to demonstrate their success, highlight challenges faced in the development of such initiatives, and establish discussions that may lead to the development of guidelines and best practices for undergraduate internship programs.
Given the rising costs of a college education, it is imperative that students emerge with their degrees equipped with skills relevant to the job market. Too many employers complain that they can't find the right talent, and too many graduates are un- or under-employed. Changes in the undergraduate education system, both curricular changes and integrated work experience, could remedy this problem. We encourage institutions and organizations to discuss the successes and challenges they have faced in implementing such changes to the undergraduate education experience.
Author contributions
VC, RHS, and NLV conceived and prepared the manuscript and have approved the final content.
Competing interests
No competing interests were disclosed.
Grant information
The author(s) declared that no grants were involved in supporting this work.

Referee Report

Given the importance of these issues, I would really encourage the authors to look at how the argument might be strengthened, in particular with support from empirical and peer-reviewed sources. The authors are clear in their views in a way that is appropriate for an opinion piece, but the factual claims that are made in service of the overall argument need better supporting sources. I understand that this is an opinion piece and am not suggesting an exhaustive review of the literature, just attention to a few key findings related to the arguments that are made here. For example, some claims are supported with weak evidence from opinion articles from media sources rather than empirical and peer-reviewed sources. E.g., "Too many students graduate with heavy debt and are ill-equipped to thrive in today's job market" (p. 2), citing only Carpenter (2014), an opinion piece from the NYT that is supported with only an online survey from a job search website. Similarly, the claim about the substance of employers' complaints about graduates (p. 2) is supported with a brief news article about a poll commissioned by an online homework help website. For making claims in an academic opinion piece, it would be important to at least examine the full report to ensure that the data are appropriate.
Other claims are made with no support at all, e.g., "Many students pursue additional graduate training simply because they are not aware of other ways in which their undergraduate science degree could be used." (p. 2). A claim that quantitative skills are an example of "highly transferable skills that are valued by employers" is also unsupported. Offhand I don't know of any studies that specifically address quantitative skills as valued by employers, but it could fall within the mismatch that Hernández-March et al. (2009) find in field-specific practical skills that employers desire but perceive that students lack. Sagen, Dallam and Laverty (2000) also find that quantitative training is related to job search success for undergraduates, though they do not examine employers' desires directly.
The paragraph that spans p. 2-3 describes several good examples, such as Mount Holyoke and the Keck Graduate Institute. It is a good illustration of beginning to share examples and best practices, as the authors advocate in their conclusion. It could, again, be stronger if there were connections to some of the published case studies that try to assess claims like these in relation to specific programs, e.g., Junge et al. (2010). Links to published case studies would also be very valuable in supporting the suggestion that publishers offer more venues for sharing best practices, challenges and successes. It would be important to acknowledge the venues that do exist, while also advocating for more.
Overall, I commend the authors on tackling a very important issue and encourage their efforts to push this discussion forward. I think that their argument could be greatly strengthened, however, with better attention to at least a few key pieces of the literature in the area of workplace and employability programs in undergraduate education. They would find good support for their overall aims but also be able to make more nuanced arguments about how the important goal of improving undergraduate science education can be accomplished.
To that end, here are a few pieces that might be of interest to the authors:

Cranmer, S. (2006). Enhancing graduate employability: best intentions and mixed outcomes. Studies in Higher Education, 31(2), 169-184. A study of university departments examining their faculty members' practices for teaching employability skills, with attention to how well their goals are achieved.

Research in Science & Technological Education, 24. A study of various stakeholders (including both employers and faculty) on what skills and competencies they prioritize, with a specific focus on "work-integrated learning".

Active Learning in Higher Education, 6(2), 132-144. A case study of students at one undergraduate institution, examining their perceptions of the skills they have developed during their degree programs and how confident they are in their abilities to transfer those skills to a workplace environment.

Research in Higher Education, 41. A large study looking at the factors that predict employment success of college graduates one month after graduation. Their regression model looked at a wide variety of factors, from internship experiences to personal characteristics.

CBE-Life Sciences Education, 9. A long-term evaluation study of a summer research program that aimed to increase student preparedness for both graduate school and industry.
I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.
Competing Interests: No competing interests were disclosed.
Author Response
Nathan Vanderford, University of Kentucky, USA

Dear Dr. Shanahan, We thank you for reviewing our article and for your comments. You clearly have a detailed understanding of the literature focusing on these issues. We have given your critique a great deal of thought, and we have decided to forgo submitting a revision to our article based on your comments, primarily given the fact that reviews can be independently cited. Ultimately, we feel that a revised version of the article would add no additional value beyond what is already captured in your critique. We therefore encourage readers to refer to your referee report, and authors of subsequent work to reference it.

Referee Report

This is more of an opinion piece than a research article. The idea of incorporating practical applications into STEM education is obviously a good idea, but the obstacles to implementation have not really been addressed, which is crucial. Statements such as "science training at the undergraduate level should move beyond rote memorization of facts" seem rather naive. No one, for a long time, has argued that undergraduate education should be rote memorization.
I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.
Competing Interests: No competing interests were disclosed.
Author Response 16 Dec 2014
Nathan Vanderford, University of Kentucky, USA

Dear Dr. Lurio, We thank you for reviewing our article and for your comments. As you note, the article does contain a few statements that could, arguably, be controversial, and as an opinion article, we feel that we are warranted in expressing our views on the current state of undergraduate education and on how we see ways to improve its future state. We agree with your point that "no one, for a long time, has argued that undergraduate education should be rote memorization," yet too often that is still what we see in the classroom, and it remains a problem. We also agree that there will be challenges/obstacles to implementing our recommendations, and we believe that these may vary widely from institution to institution. We hope that, by specifically mentioning in the article that publishers should help commission articles from programs that are implementing practical applications - such as updated curricula and the integration of work experience - such articles would address associated challenges/obstacles, best practices, and success stories. As such, we believe that future articles will best address your point.

Referee Report

This is an interesting and timely article that presents viable solutions to some of the challenges being faced by higher education. The two main solutions presented - increasing the interdisciplinarity of STEM undergraduate curricula and providing more work experiences for students - are consistent with accepted high impact educational practices. The paper is quite brief, but nonetheless presents several concrete examples, and the authors rightfully encourage educators to share what they are doing in order to develop guidelines and best practices.
In terms of the interdisciplinary curriculum argument, I would encourage the authors to broaden this even further to include disciplines such as sociology, political science, and psychology, as many of today's scientific issues (such as climate change and the challenge of feeding the planet) can be addressed from a multitude of different angles.
In terms of the suggestion of increasing work experience for students, it would be helpful to see more discussion of the different approaches that can be taken in this regard. For example, consideration of co-operative programs, internships, externships, and course-embedded community engaged learning projects.
Overall, this paper achieves the stated objective of describing strategies to connect undergraduate science education with the needs of today's graduates, and should prove informative to educators in STEM fields.
I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.
Competing Interests: No competing interests were disclosed.
Author Response 16 Dec 2014
Nathan Vanderford, University of Kentucky, USA

Dear Dr. Newton, We thank you for reviewing our article and for your comments. We agree that a number of interdisciplinary topics could (and should) be integrated into STEM curricula. We have limited the scope of our article to a detailed discussion of a few example topics and, via our suggestion within the article that others should report on their programs that have novel curricula, we hope to hear a variety of other examples that integrate a wide range of topics/disciplines into STEM curricula. This is also the case regarding your comment on additional methods for integrating work experience into STEM programs; we hope that our specific call for others to report on their programs leads to a number of other articles that share specifics about how institutions are integrating work experience into STEM curricula through a number of different methods, including co-operative programs, inter/externships, etc. As such, we look forward to subsequent articles that can further address your comments.
Thank you again for your time and comments.
Viviane Callier, Richard H. Singiser, Nathan L. Vanderford
Competing Interests: No competing interests were disclosed.
Ludwig ’ s Angina : a study of 50 cases
Objective: To evaluate the clinical outcome, morbidity and mortality of patients diagnosed with Ludwig's angina. Study design: Retrospective study. Setting: Department of Otolaryngology & Head and Neck Surgery, Dhaka Medical College Hospital and Apollo Hospitals Dhaka. Patients and Methods: 50 patients were included in this study (36 males and 14 females), between the ages of 8 and 78 years (mean, 45.5 years), who were treated between January 2007 and December 2008 in the department of Otolaryngology and Head-Neck Surgery, Dhaka Medical College Hospital and Apollo Hospitals Dhaka. Etiology, microbiology, associated systemic diseases, treatment, airway management, duration of hospital stay and outcome were reviewed. Results: The most common age group was the 3rd decade (42%), and 72% of patients were male. Most patients came from poor socio-economic conditions and rural areas of Bangladesh. All patients presented with neck swelling, pain, tenderness and fever. Dental infection was documented as the most common cause (70%) of Ludwig's angina, followed by infection of the tonsils (10%) and submandibular gland (6%). Systemic illnesses included diabetes mellitus (30%) and chronic renal failure (4%). Streptococcus was the commonest organism found on culture of pus. Intravenous antibiotics were started immediately in all patients. 4 patients underwent emergency tracheostomy. 40 patients underwent incision and drainage. Infected tooth/teeth were also removed at the same time. Postoperatively, the airway was secured by endotracheal intubation in 1 case and by tracheotomy in 5 cases. In 88% (44 patients) of the cases, no artificial airway was used. 9 patients were managed in the intensive care unit for 1 to 3 days. All except 1 patient made uneventful recoveries, and they were discharged after 3 to 26 days of hospitalization (mean, 14.1 days). Conclusion: Airway protection, aggressive antibiotic therapy and surgical decompression can significantly alter the mortality rate of Ludwig's angina.
Introduction
Ludwig's angina is an infection of the submandibular region, manifested by swelling of the floor of the mouth and elevation and posterior displacement of the tongue. A brawny edema and cellulitis of the suprahyoid region of the neck develops later 1 . Deep neck abscesses such as Ludwig's angina are less common now than 50 years ago because of the development of effective antibiotics and improved dental care. Deep neck infections in the antibiotic era most commonly result from odontogenic infections 2 . The most common predisposing factors for the development of Ludwig's angina are carious and abscessed teeth, periodontal disease and extractions of the lower molars. Uncommon etiologies include upper respiratory infections, floor-of-mouth trauma, mandibular fractures, peritonsillar abscess and sialadenitis 3 . The second and third mandibular molars have roots which lie at the level of the mylohyoid muscle, either adjacent to or below the submandibular space. Abscesses of these lower molars may perforate the mandible and spread into the submandibular and submental spaces, leading to Ludwig's angina. The causative organisms are generally those of the oral flora, including Streptococcus spp., Staphylococcus aureus, Bacteroides spp., Fusobacterium spp., Actinomyces spp. and Haemophilus influenzae 4 . All age groups may be affected, but young adults have the highest prevalence rates 5 . The disease is unusual in children. The infection begins unilaterally 6 . Swelling of the tissues occurs rapidly and may block the airway or prevent swallowing of saliva. The patency of the airway is the main concern with Ludwig's angina, and patients may require a tracheostomy to prevent airway obstruction. Infections of the submandibular space may spread to the lateral pharyngeal and retropharyngeal spaces. From the retropharyngeal space, the infection can dissect down fascial planes to the mediastinum. Aspiration of infectious particles and septic embolism to the pulmonary vasculature are other possible modes of extension to the chest 7 . A CT scan of the neck may be recommended. Plain radiographs can be used to assess the degree of soft tissue swelling and airway obstruction 8 . Culture of fluid from the tissues may show bacteria. Ludwig's angina can be life threatening. However, it can be cured with proper protection of the airways and appropriate antibiotics.
Patients & Methods
This study included fifty patients with Ludwig's angina who were treated medically and surgically in the department of Otolaryngology and Head-Neck Surgery at Dhaka Medical College Hospital and Apollo Hospitals Dhaka from January 2007 to December 2008. The data for each patient included age, sex, socio-economic status, etiology, microbiology, airway management, presenting symptoms and signs, preoperative investigations (such as complete blood count, blood sugar, and serum creatinine), operation notes, complications of surgery and state at follow-up.
Results
The observations of the study are shown in the following tables. The most common age group was the 3rd decade (42%), followed by the 5th and 4th decades.
Table-II: Sex of patients (n = 50)
Sex      No. of patients    %
Male     36                 72
Female   14                 28
Table-II shows that 72% of patients were male and 28% were female.
Socio-economic status
Most patients were of poor socio-economic condition (70%).
Treatment
CT of the neck was done for 8 patients and X-ray of the neck for 25 patients.
Discussion
Ludwig's angina was first described by the German physician Wilhelm Friedrich von Ludwig in 1836. At that time, the condition was almost always fatal. 9 With the advent of contrast-enhanced CT, which has allowed earlier identification and prompt antibiotic therapy, the mortality rate has significantly declined, from more than 50% to less than 10% of patients. 10,11 In our study, 1 patient (2%) died of mediastinitis.
Our study shows that males are affected more than females and that the disease mainly affects those in poor socio-economic conditions. In our study, 72% of patients were male, and most patients were from a poor socio-economic condition (70%) and from rural areas of Bangladesh (82%). 40 patients (80%) presenting with Ludwig's angina were undernourished.
Ludwig's angina is a rapidly spreading, indurated, bilateral cellulitis that begins in the floor of the mouth and involves both the submandibular and sublingual spaces. 12 It spreads along fascial planes rather than by lymphatics and rarely involves the glandular surfaces. 13 The primary site of infection is odontogenic in 70% to 80% of cases. 14 In our series, 35 patients (70%) presented with dental problems, followed by diabetes mellitus (30%) and tonsillar infections (10%). 3 patients presented with infection of the oral mucosa and 3 patients presented with submandibular adenitis. The second and third molars are most frequently involved, because their roots extend below the level of the mylohyoid muscle, thus crossing both the sublingual and submandibular spaces. 15 The majority of the patients are adults who have no significant comorbidities. In our study, the most common age group was the 3rd decade (42%), followed by the 5th and 4th decades. This condition has also been associated with systemic diseases, such as chronic glomerulonephritis, systemic lupus erythematosus, aplastic anemia, neutropenia, immunodeficiency (eg, HIV infection) and diabetes mellitus. 16 The bacteriology of Ludwig's angina is polymicrobial.
The most common organisms identified include Streptococcus, Staphylococcus and Bacteroides species. Other microorganisms that have been isolated are gram-negative bacteria, such as Klebsiella species, Hemophilus influenzae, Proteus species, and P. aeruginosa. 17 In our study, pus from 32 patients was sent for culture and sensitivity. The most common organism found was Streptococcus (40.62%), followed by Staphylococcus and E. coli.
Pain in the floor of the mouth and anterior neck, dysphagia, odynophagia and respiratory distress are common symptoms. 18 Clinical findings include fever, tachypnea and tachycardia, and patients may also have fetid breath. Stridor, hoarseness, respiratory distress, cyanosis and decreased air movement are harbingers of impending upper airway compromise.
Palpation of the submental and bilateral submaxillary spaces reveals firm, nonpitting induration of the suprahyoid neck bilaterally. Inspection of the malodorous oral cavity is limited because of trismus, but a firm, raised floor of the mouth may be evident. 18 In our study, the most common symptoms were neck swelling, pain, tenderness and fever (in all patients). Other presenting symptoms were dysphagia, dental problems, foul smell, trismus, muffled voice and respiratory distress. More than one symptom was present in all patients.
Most hospital stays were between 1 and 2 weeks. Patients were discharged after 3 to 26 days of hospitalization (mean, 14.1 days).
Diagnosis is based on clinical findings, although contrast-enhanced CT can help determine the extent of the infection, especially in the presence of an abscess. 19 Clinical examination has a low sensitivity (55%) for predicting drainable collections of pus in deep neck infections, but when combined with CT findings, the accuracy is 89%, sensitivity is 95%, and specificity is 80% for identifying a drainable collection. 20 Plain radiographs of the neck may show soft-tissue swelling, the presence of gas and the extent of airway narrowing. In our study, CT of the neck was done for 4 cases and X-ray of the neck for 25 patients. Although CT of the neck is an excellent guide to assess the extension of the disease, most of our patients could not afford it. 9 patients were managed in the intensive care unit for 1 to 3 days.
Because of the risk of rapid airway compromise, all patients with Ludwig's angina should be admitted to the ICU. Again, that was not always possible for us, as we have few ICU facilities. Death is usually the result of hypoxia or asphyxia, not overwhelming sepsis, 12 although mediastinitis was the likely cause in our patient. Airway management is the most important aspect of immediate care. Tracheostomy using local anesthesia has been considered the gold standard of airway management in patients with deep-neck infections. However, cellulitis of the neck with involvement of the tracheostomy site makes the procedure difficult. 20 In our study, 5 patients underwent tracheostomy. A recent study that included 17 patients with Ludwig's angina showed that tracheal intubation with a flexible bronchoscope using topical anesthesia provided good airway management. The authors recommended tracheostomy using local anesthesia when fiberoptic intubation is not possible. 20 Initial antibiotic therapy is targeted at gram-positive organisms and oral cavity anaerobes. Empiric therapy with IV penicillin G, clindamycin or metronidazole is recommended before the culture report is available. 20 Some experts recommend the addition of gentamicin. 21 Antibiotic treatment before hospital admission often results in sterile cultures. IV steroids, given for 48 hours, can decrease edema and cellulitis and thus help maintain the integrity of the airway and enhance antibiotic penetration. We used IV steroids in 6 non-diabetic patients. If an abscess is present, the definitive treatment would be incision and drainage and, if applicable, removal of the abscessed tooth or teeth. We used a combination of Inj. Ceftriaxone and Inj. Metronidazole in all patients. Inj. Gentamicin was used additionally in 5 patients.
Surgical drainage is indicated in the presence of clinical fluctuance or crepitus or radiologic evidence of fluid collection or air in the soft tissues. A relative indication is the lack of clinical improvement within 24 hours of initiation of antibiotic therapy. 21 Removal of infected teeth facilitates the complete drainage of fluid. In our study, 40 patients (80%) underwent incision and drainage. 10 (20%) patients were cured with conservative measures.
Conclusion
Ludwig's angina usually resolves without complications, but the condition can be fatal.Prompt diagnosis, appropriate airway management, aggressive IV antibiotic therapy, incision and drainage and close monitoring promote good outcomes in most patients.
Table-I: Age of patients (n = 50).
Table-V: Common symptoms of patients (n = 50). The most common symptoms were neck swelling, pain, tenderness and fever (in all patients); more than one symptom was present in all patients. 10 patients with diabetes mellitus had an associated dental infection; isolated diabetes mellitus was found in 5 cases.
Multiseptate gallbladder
Abstract Rationale: Multiseptate gallbladder (MSG) is a rare congenital gallbladder anomaly. Between 1963 and June 2021, only 56 cases were reported. There is currently no treatment guideline for pediatric or adult cases of MSG. Patient concerns: A 14-year-old girl visited our out-patient clinic in September 2020 for epigastric pain that had lasted for 6 months. A honeycomb appearance of the gallbladder was noted under ultrasonography. Diagnosis: The patient was diagnosed with MSG. The diagnosis was confirmed through computed tomography and magnetic resonance cholangiopancreatography. Interventions: Cholecystectomy was performed. Outcomes: Epigastric pain showed limited improvement after the surgery. Since she was diagnosed with gastritis at the same time, a proton-pump inhibitor was prescribed. The epigastric pain was eventually resolved. Lessons: MSG cases can undergo cholecystectomy and show good recovery without complications. However, concomitant treatment may be required when other symptoms, such as epigastric pain, are present.
Introduction
The multiseptate gallbladder (MSG) is a rare gallbladder anomaly. Between 1963 and June 2021, only 56 cases were reported in the English literature. These published case reports and case series describe the clinical presentations of MSG, the features of the diagnostic workup, as well as the treatment and prognosis of MSG. Simon and Tandon reported the first case, a 32-year-old woman with upper abdominal and back pain that lasted for 3 weeks, revealing a "honeycomb-like" appearance within the gallbladder under ultrasonography (USG). [1] The first pediatric case was published 3 years later, in which a 15-year-old girl with MSG had recurrent abdominal pain. [2] Congenital anomalies of the gallbladder can be categorized based on their size, shape, position, and number. MSG is a rare congenital anomaly with a distinct shape. Since no malignant cases have been reported to date, MSG is considered a benign disorder. [3] However, patients with MSG can suffer from other biliary anomalies. There have been several postulations regarding the mechanisms that contribute to the formation of MSG. [4][5][6][7] However, the exact etiology remains unclear, and there is no consensus on how MSG should be treated.

Case presentation

A 14-year-old previously healthy Asian female visited the outpatient department with a chief complaint of epigastric cramping pain that had lasted for 6 months. The patient did not have fever or jaundice. On abdominal examination, epigastric tenderness was noted. Results of whole blood count, erythrocyte sedimentation rate, C-reactive protein, and biochemical tests including transaminase, bilirubin, amylase, lactic dehydrogenase, and alkaline phosphatase levels were within normal ranges.

USG showed a gallbladder bridged by multiple thin septa with a honeycomb appearance, which is consistent with the clinical features of an MSG (Fig. 1). The thickness of the gallbladder wall was normal, with no stones in the lumen. Neither pericholecystic fluid nor biliary dilatation was observed. No focal tenderness was observed in the gallbladder. To further examine the structure and rule out relevant anomalies, we arranged computed tomography and magnetic resonance cholangiopancreatography (MRCP). Computed tomography revealed a fine septum over the distal body of the gallbladder, some tiny polypoid hyperintensities along the inner wall of the gallbladder sac, and a fluid-fluid level in the gallbladder. MRCP excluded intra- and extrahepatic biliary or pancreatic anomalies.
In the workup for epigastric pain, we performed an esophagogastroduodenoscopy. The patient was diagnosed with gastritis and gastric ulcers, with no evidence of Helicobacter infection. She was treated with a proton-pump inhibitor. Upon diagnosis of MSG, the patient chose to undergo laparoscopic cholecystectomy, even though MSG can be left untreated and monitored through regular follow-up (Fig. 2A, B). The specimen was sent for pathology study. The histopathologic diagnosis revealed smooth serosa and trabeculated mucosa, with a muscle layer extending into the septa, indicating a multiseptate gallbladder (Fig. 3A, B). The surgery was uneventful, but her abdominal pain persisted after surgery. The epigastric pain eventually subsided as the patient continued to take a proton-pump inhibitor.
Discussion
To the best of our knowledge, this was the first case of MSG we cared for at our hospital. To better understand this rare anomaly, we conducted a literature review using the PubMed medical database with keywords "multiseptate gallbladder." Only English literature was considered. Forty-two articles were included in this review. Data on the 57 cases in these 42 articles are summarized in Table 1.
In this discussion, we defined choledochal cysts and anomalous arrangement of the pancreaticobiliary duct as pre-cancerous anomalies, given the risk of malignant progression. Biliary symptoms were defined as either right upper quadrant pain or epigastric pain, fever, nausea, vomiting, or jaundice. Individuals with "recurrent abdominal pain" and/or "abdominal pain" were sorted into group that did not have biliary symptoms.
Patient demographics
Out of the 57 cases, 19 (33%) were pediatric, with a gender ratio close to 1 (female:male = 9:10). The median age at diagnosis was 10 years (range: 15 days to 16 years). Among the 38 adult cases (66%), the youngest case was 19 years old and the oldest was diagnosed at the age of 70 years. The median age at diagnosis was 35 years (Table 2A). Unlike in pediatric cases, MSG in adults is 2.8 times more prevalent in females than in their male counterparts.
Pathogenesis
There are several postulations to explain the formation of MSG. First, some suggested that MSG results from incomplete cavitation of the solid embryonic gallbladder, because those MSG cases do not have a muscle layer in the septa. [1,5] Second, the "wrinkling theory" states that the gallbladder has a wrinkled appearance and creates invaginations that fuse with the solid intraepithelial structure. [6] Third, the "Phrygian cap theory" postulates that during the solid stage, the gallbladder grows at a faster pace than the structures surrounding it. [7] Wrinkling and kinking therefore take place due to lack of space. The "wrinkling theory" and the "Phrygian cap theory" can be deduced from the presence of muscle fibers within the septa. [6]
Clinical presentation
Among the pediatric cases, 12 of the 19 had biliary symptoms. In the adult population, approximately 71% (n = 27/38 cases) of patients reported biliary symptoms (Table 2B). Three pediatric cases had jaundice as one of their clinical presentations, while none of the adults presented with jaundice at diagnosis. An anomalous pancreaticobiliary ductal union, which relates to choledochal cyst and biliary tract carcinoma, was found in a 46-year-old woman with gastric carcinoma, who further showed no tumor involvement in the MSG. [8] Three adult cases had a hypoplastic gallbladder, and 4 cases were complicated with gallstones. Additionally, 7 of the 57 patients had cholelithiasis. Three of these cases were found in the pediatric population (Table 1). [9][10][11] The mechanism of pain is not well understood, but the consensus is that slow bile flow and increased intraluminal pressure lead to the sensation of pain. This might be supported by the delayed passage of bile observed under biliary manometry and scintigraphy. [12] Normally, MSG is not accompanied by malignancy. However, MSG can be complicated by a choledochal cyst or an anomalous arrangement of the pancreaticobiliary duct, thereby increasing the risk of malignant transformation. [13,14] Therefore, an advanced evaluation of the associated ductal anomalies should be done. MSG can coexist with choledochal cysts in both pediatric (3/19 cases) and adult (2/38 cases) populations (Table 1). [15][16][17][18]
Biliary manometry with scintigraphy was used to show the bile-excreting function of the liver as well. Results of the hepatobiliary iminodiacetic acid scan showed normal gallbladder emptying, while impairment of gallbladder filling and contraction was revealed on biliary manometry with scintigraphy. [3,7,12] Endoscopic retrograde cholangiopancreatography (ERCP) and MRCP can be used to fully visualize the intra- and extrabiliary tracts. However, ERCP cannot fully establish the MSG structure in some cases. [14,16] In contrast to ERCP, Nakazawa et al suggested that MRCP seems to be a superior and more commonly used imaging modality in recent years due to its noninvasive nature, low radiation, and ability to identify biliary and pancreatic pathology simultaneously, which affects treatment decision making. [14] However, adjustments should be made according to hospital resources, weighing the advantages and disadvantages for the patient.
Treatment and prognosis
Excluding 4 cases whose treatment was not described in the articles, about half of the pediatric cases received surgical treatment. Among the 8 children undergoing cholecystectomy, most had biliary symptoms (n = 7/8).
Excision of the extrahepatic biliary tree combined with hepaticojejunostomy, choledochoduodenostomy, or Roux-en-Y anastomosis due to choledochal cyst was done in 3 cases. [15][16][17] In the 3 patients who had biliary symptoms but chose not to undergo surgical treatment, the symptoms were self-limiting over time. [10,23,25] In adult patients with biliary symptoms, 90% of the adult population underwent surgery. Among them, a 53-year-old woman underwent an additional Roux-en-Y procedure due to co-existing choledochal cysts.
In the case we presented, a 14-year-old girl who had biliary symptoms and was diagnosed with MSG along with gastritis underwent cholecystectomy, and her symptoms persisted after the surgery. This suggests that in the presence of other gastrointestinal conditions, the patient should be treated for such symptoms first while MSG can be managed through active monitoring. Cholecystectomy can be considered after other symptoms are resolved or under control.
Conclusion
In summary, MSG is a rare congenital biliary anomaly that can occur in children and adults. Most cases present with biliary symptoms, but some cases can be asymptomatic. For all MSG cases, it is important to rule out associated biliary tract anomalies, especially those with a higher risk of malignant transformation. Imaging is a vital tool to diagnose MSG and to identify associated biliary tract anomalies. MRCP can be considered an imaging modality superior to ERCP, due to its non-invasive nature and high resolution of biliary anatomy.
Based on the 57 cases reviewed, asymptomatic cases can remain asymptomatic, and cases with biliary symptoms can recover without treatment. Therefore, regular follow-up is sufficient for asymptomatic MSG without associated biliary tract anomalies. When symptoms occur, they can either be treated with cholecystectomy or left untreated with regular follow-up.
Accurate detection of Newcastle disease virus using proximity‐dependent DNA aptamer ligation assays
Detecting viral antigens at low concentrations in field samples can be crucial for early veterinary diagnostics. Proximity ligation assays (PLAs) in both solution and solid‐phase formats are widely used for high‐performance protein detection in medical research. However, the affinity reagents used, which are mainly poly‐ and monoclonal antibodies, play an important role in the performance of PLAs. Here, we have established the first homogeneous and solid‐phase proximity‐dependent DNA aptamer ligation assays for rapid and accurate detection of Newcastle disease virus (NDV). NDV is detected by a pair of extended DNA aptamers that, upon binding in proximity to proteins on the envelope of the virus, are joined by enzymatic ligation to form a unique amplicon that can be sensitively detected using real‐time PCR. The sensitivity, specificity, and reproducibility of the assays were validated using 40 farm samples. The results demonstrated that the developed homogeneous and solid‐phase PLAs, which use NDV‐selective DNA aptamers, are more sensitive than the sandwich enzymatic‐linked aptamer assay (ELAA), and have a comparable sensitivity to real‐time reverse transcription PCR (rRT‐PCR) as the gold standard detection method. In addition, the solid‐phase PLA was shown to have a greater dynamic range with improved lower limit of detection, upper‐ and lower limit of quantification, and minimal detectable dose as compared with those of ELAA and rRT‐PCR. The specificity of PLA is shown to be concordant with rRT‐PCR.
Introduction
The World Organization for Animal Health has defined Newcastle disease (ND) as an infection of poultry with virulent strains of ND virus (NDV). This virus presents a perpetual threat to poultry, causing a high death rate during a very short time. The genome of NDV is linear, nonsegmented, single-stranded RNA, encoding six proteins [1]. The hemagglutinin/neuraminidase (HN) and fusion (F) proteins are inserted in the envelope and represent the most important factors that determine the virulence and the infection cycle of the virus [2,3]. HN is a multifunctional molecule that promotes the attachment of the virus to its sialic acid-containing receptors, and hydrolyzes the sialic acid molecules from progeny viral particles to prevent viral self-aggregation through the neuraminidase (NA) activity [4,5]. In addition, HN is responsible for the membrane fusion through its interaction with the F protein, thereby facilitating the entry of viral RNA into the host cell [6,7].

Abbreviations: Apt, aptamer; ELAA, enzyme-linked aptamer assay; LLOQ, lower limit of quantification; LOD, limit of detection; MDD, minimal detectable dose; NDV, Newcastle disease virus; PLA, proximity ligation assay; rRT-PCR, real-time reverse transcription PCR; SD, standard deviation; SELEX, systematic evolution of ligands by exponential enrichment; ULOD, upper limit of quantification.

The hemagglutination (HA) and hemagglutination inhibition (HI) tests are usually used to detect and identify NDV. These assays use red blood cells (RBCs) as indicators for the binding of the antigen to its antibody [8]. Besides, virus isolation in embryonated chicken eggs is a laborious and time-consuming diagnostic method, which may take up to 2 weeks. Additionally, the ELISA technique, which allows detection of highly abundant and specific viral proteins, is used for disease surveillance and/or monitoring.
Currently, NDV detection is conducted primarily at specialized diagnostic or reference laboratories, employing nucleic acid-based methods including cDNA amplification and sequencing [9]. Compared to virus isolation, the nucleic acid-based methods allow more rapid diagnosis. However, specific handling and transport of potentially infectious materials are required, and the integrity of the unstable RNA genome of the virus must be preserved to avoid false negative results. Application of proximity ligation assays (PLAs) as a proteomic method for microbial detection [10] holds great promise because of their specificity and superior detection sensitivity, as compared to PCR-based diagnostic techniques, besides avoiding the need for genome extraction of targeted infectious agents. Hence, PLA-based methods may provide suitable means for powerful veterinary diagnostic settings.
In previous publications, PLA was established using a pair of DNA aptamers as affinity probes to detect the target proteins [11]. The DNA aptamers were extended with additional DNA oligonucleotide sequences to form PLA probes. Upon binding of the DNA aptamers in proximity of the target protein, the two extended DNA sequences would hybridize to a common connector DNA oligonucleotide, allowing the ends to be joined covalently by enzymatic ligation. The ligation products were then amplified and quantified using real-time PCR, while unreacted probes remain unamplified and undetectable. In this manner, conventional detection of protein molecules or infectious agents with limited sensitivity by other methods such as ELISA may be replaced by a highly sensitive and specific detection of reporter DNA molecules, using real-time PCR [12]. To date, few examples of PLA-based tests for the detection of viral pathogens have been reported. For instance, in an early study, porcine parvovirus (PPV) was detected with high sensitivity and specificity [13], and later, PLA was used to detect foot-and-mouth disease virus [14].
There is a great need for further improvement of NDV detection by establishing a sensitive method for rapid identification of the virus and immediate preventive measures to curtail its spread. In this study, we therefore developed an assay for antigenic detection using aptamer-based PLA. The assay was based on aptamers targeting the surface proteins of the intact virus. The assay was then validated in farm samples to detect NDV.
Aptamer-assisted proximity ligation assay concept
The concept of the aptamer-based PLA is illustrated in Fig. 1. The PLA probes were constructed by connecting the two biotinylated aptamers against NDV [15] to two streptavidin-conjugated DNA oligonucleotides via biotin-streptavidin interaction. The DNA oligonucleotides have previously been designed and empirically evaluated for use in PLA [16]. In the presence of NDV in the sample, the two aptamer-based PLA probes bind to their target epitopes on the surface of the viral particle, allowing hybridization of a DNA connector oligonucleotide and a subsequent enzymatic DNA ligation of the free ends of the extended aptamers (Fig. 1). The newly formed DNA molecule is then used as a template for signal amplification using real-time quantitative PCR. In the absence of viral particles, however, the PLA probes will not be in close proximity and no amplifiable DNA template is formed [11]. The number of PCR products is directly proportional to the number of ligated DNA amplicons, reflecting the concentration of NDV in the sample.
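Because the readout is a real-time PCR signal, converting measured Ct values into virus amounts requires a calibration step. The sketch below is a hedged illustration of one common approach, a log-linear standard curve fitted to a dilution series; the calibration points, curve parameters, and Ct values are hypothetical and are not the study's own data.

```python
# Hedged sketch: relative NDV quantification from qPCR Ct values via a
# log-linear standard curve. All calibration numbers are hypothetical.
import numpy as np

# Assumed dilution series: known log10(EID50/mL) vs. measured Ct.
log_conc = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ct       = np.array([33.1, 29.8, 26.4, 23.1, 19.7])

# Fit Ct = slope * log10(concentration) + intercept.
slope, intercept = np.polyfit(log_conc, ct, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0   # ~1.0 means ~100% gain per cycle

def estimate_titer(sample_ct: float) -> float:
    """Invert the standard curve for an unknown sample."""
    return 10 ** ((sample_ct - intercept) / slope)

print(f"amplification efficiency ~ {efficiency:.2f}")
print(f"sample at Ct 25.0 ~ {estimate_titer(25.0):.1f} EID50/mL")
```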
Performance of aptamer-assisted proximity ligation assay
The binding affinity, specificity, and compatibility of the selected aptamers were described in our previous work [15]. To determine the limit of detection (LOD) of the sandwich enzyme-linked aptamer assay (ELAA) test, the LaSota vaccine strain titrating 10^6 EID50/mL was used, demonstrating that, with such aptamers as affinity reagents, the sandwich ELAA was able to detect as little as 1.2 EID50/mL, as compared to the gold standard for NDV detection, real-time reverse transcription PCR (rRT-PCR), which provides a LOD of 0.6 EID50/mL.
We used the selected aptamers to establish PLA-based tests for the detection of NDV in farm samples. Both homogeneous and solid-phase PLAs were utilized for highly sensitive and specific detection of NDV. The LOD of the homogeneous PLA was 0.58 EID50/mL, while that of the solid-phase PLA was slightly lower at 0.4 EID50/mL. The reproducibility and sensitivity of the PLA-based tests were compared to those of the sandwich ELAA and rRT-PCR using data from triplicate reactions. The results revealed that the solid-phase PLA, with a LOD of 0.4 EID50/mL, is three times more sensitive than the sandwich ELAA and one and a half times more sensitive than rRT-PCR and the homogeneous PLA (Fig. 2 and Table 1).
The results showed that the homogeneous and solid-phase PLAs were more sensitive than the sandwich ELAA and rRT-PCR. Furthermore, the solid-phase PLA demonstrated a greater dynamic range, with better values for the LOD, the lower limit of quantification (LLOQ), the upper limit of quantification (ULOD), and the minimal detectable dose (MDD), as compared to the other tests.
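The paper derives these figures of merit from the curve intersections shown in Fig. 2, which are not reproduced here. As an illustration only, the sketch below applies one widely used convention, treating a sample as detectable when its Ct lies at least three standard deviations below the mean Ct of blank (no-virus) replicates, and maps that cutoff through an assumed standard curve; every number in it is invented, and this is not the authors' exact procedure.

```python
# Illustrative LOD estimate from blank replicates (mean - 3*SD in Ct space,
# since a lower Ct means more ligation product). Hypothetical numbers only.
import numpy as np

blank_ct = np.array([34.9, 35.3, 35.0, 35.4, 35.1])   # no-virus controls
cutoff_ct = blank_ct.mean() - 3.0 * blank_ct.std(ddof=1)

# Assumed standard-curve parameters (see the earlier calibration sketch).
slope, intercept = -3.35, 33.1
lod = 10 ** ((cutoff_ct - intercept) / slope)

print(f"cutoff Ct = {cutoff_ct:.2f}, estimated LOD ~ {lod:.2f} EID50/mL")
```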
Detection of NDV in farm samples
To test the applicability of the PLA tests with the selected aptamers as affinity binders for farm samples, 40 nasal/cloacal swabs were collected and analyzed using the sandwich ELAA and the homogeneous and solid-phase PLAs. The sensitivity and specificity of the developed methods were compared to those obtained by the rRT-PCR test as the gold standard. The results are summarized in Table 2. The sensitivity was calculated as sensitivity = TP/(TP + FN), whereas the specificity was calculated as specificity = TN/(TN + FP), where TP = true positive; FP = false positive; TN = true negative; FN = false negative.
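These definitions translate directly into code. The short helper below simply transcribes the two formulas; the example counts mirror the fully concordant outcome reported below (26 positives and 14 negatives among the 40 swabs) rather than any independent analysis.

```python
# Direct transcription of the sensitivity/specificity formulas above.
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> tuple[float, float]:
    sensitivity = tp / (tp + fn)   # true positives among all true cases
    specificity = tn / (tn + fp)   # true negatives among all non-cases
    return sensitivity, specificity

# Example: 26/40 swabs positive, 100% concordant with rRT-PCR (TP=26, TN=14).
sens, spec = diagnostic_metrics(tp=26, fp=0, tn=14, fn=0)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")
```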
The sandwich ELAA and the PLA tests developed in this study successfully detected 26 NDV-positive samples out of the 40 tested swabs. The results were statistically significant (P < 0.01). The results obtained with the sandwich ELAA and the PLA tests were 100% concordant with rRT-PCR. A detailed analysis of the results relative to the diagnosis of NDV in farm samples is presented in Tables S1 and S2.
The distribution frequencies of the Ct values obtained for the 40 farm samples tested by the PLAs and rRT-PCR are displayed in Fig. 3.
Discussion
Sensitive techniques for detecting viral surface proteins or nucleic acids are critical for early and accurate NDV diagnosis. Antibody-based techniques, such as ELISA and the hemagglutination assay, have proven to be important tools for the detection of viral proteins. However, these methods require significant amounts of starting material and are vulnerable to nonspecific background signals, which may result in lower assay sensitivity. Moreover, the production of antibodies is time-consuming, and the accuracy of such antibody-based assays depends on the quality of the antibody batches used. Nucleic acid-based methods such as PCR assays are highly sensitive, easier to establish, and usually more efficient than antibody-based methods. However, DNA detection does not demonstrate viable infectious pathogens, an RNA result does not provide information on protein functionality, and such tests usually require sample preparation for viral genome isolation and cDNA synthesis. Immuno-PCR [17] and real-time immuno-qPCR [18] represent the first attempts to combine the strengths of both approaches while addressing the weaknesses associated with conventional immunoassays. Nonetheless, the sensitivity and specificity of immuno-PCR-based assays remain limited and depend on antibody specificity. As with conventional ELISA, these methods are prone to high background signals [19], as cross-reactive signals from single antibodies binding to off-target proteins may be amplified [20]. Thus, it is critical to develop a rapid, sensitive, accurate, and widely available diagnostic test for the detection of NDV. PLA is reported to be an exquisitely sensitive technique that relies on real-time PCR, with a demonstrated lower background than other immuno-PCR methods [21]. PLA utilizes a pair of DNA-tagged aptamers or antibodies to detect proteins, protein modifications, or protein-protein interactions. This method may be conducted in both homogeneous and solid-phase formats, allowing accurate visualization of single protein molecules or complexes with high precision. These properties underline the potential of PLA in research, diagnosis, pharmacology, and a wide variety of applications that require high specificity, sensitivity, and accurate protein expression assessments, such as cancer biomarkers [22,23], prions [24], exosomes [25,26], and personalized medicine [24]. Furthermore, PLA has been reported to detect a range of viral disease agents, such as foot-and-mouth disease virus [14]. However, to date, no study has been reported on the detection of NDV using aptamer technology. Thus, we present the first homogeneous- and solid-phase PLA, with DNA aptamers as affinity reagents, for the detection of NDV in farm samples. There are numerous advantages to using DNA aptamers in PLA as high-affinity binders for sensitive target recognition, such as an easy chemical production and modification process and reagent stability. In addition, synthetic DNA aptamers are inexpensive compared with antibody production, with no batch-to-batch variations. Because of their ability to bind tightly and specifically to their targets, aptamers are used in a variety of diagnostic and therapeutic assays [27].

Table 1. Comparison of the LOD, the LLOQ, the ULOQ, the MDD, and the dynamic range between the sandwich ELAA, rRT-PCR, and the solid-phase and homogeneous PLAs for the detection of NDV. The values presented in the table are the intersection of the calculated values as described in Fig. 2.
Importantly, aptamers may prove useful for the detection of a wide variety of viruses, ranging from influenza, dengue, Ebola, and SARS to NDV and SARS-CoV-2 [28]. The emergence of NDV and the increased rates of morbidity and mortality associated with it worldwide make the establishment of accurate laboratory assays critical for poultry management, epidemic surveillance, and NDV control. However, current diagnostic tests for NDV, including ELISA and rRT-PCR, may be time-consuming and suffer from low sensitivity. By adopting DNA aptamers selected using Systematic Evolution of Ligands by EXponential enrichment (SELEX) technology combined with high-throughput sequencing, we have explored the possibility of NDV detection using PLA in homogeneous and solid-phase formats. In this report, we demonstrate that the PLA-based tests present an analytical sensitivity similar, if not superior, to that of rRT-PCR; thus, this new methodology may be a rapid and reliable tool for the diagnosis of NDV and presents an alternative to ELISA-based tests. The homogeneous PLA allowed the detection of as few as 0.58 EID₅₀·mL⁻¹, which is similar to the analytical sensitivity of the rRT-PCR assay and superior to that of the sandwich ELAA. Furthermore, the solid-phase PLA, with an LOD of 0.4 EID₅₀·mL⁻¹, is one and a half times more sensitive than the homogeneous PLA and rRT-PCR, and three times more sensitive than the sandwich ELAA. The diagnostic sensitivity, specificity, and robustness of the PLAs depend heavily on the suitability of the binding aptamers used in such assays. Both Apt_NDV01 and Apt_NDV03 were selected using the LaSota vaccine as a target through a SELEX process combined with high-throughput sequencing. The affinity, specificity, and compatibility of these two selected aptamers used in the sandwich assays were evaluated in our previous study [15]. The binding analysis revealed that both aptamers recognize NDV in field clinical samples with high specificity and nanomolar affinities [15]. The developed PLA-based tests for the detection of NDV were further evaluated on the farm samples previously analyzed by rRT-PCR and the sandwich ELAA. When these samples were analyzed by the PLA, 100% concordance with the previously developed sandwich ELAA was demonstrated.
Compared with rRT-PCR, PLA exhibits several advantages. It is much simpler to perform and demonstrates higher diagnostic sensitivity with very low background noise. In addition, PLA does not require any sample preparation step, as only a dilution of the original sample is performed before it is added to a mix of the proximity probes.
In summary, we report for the first time the use of aptamers in homogeneous- and solid-phase PLAs to detect NDV, overcoming the laboriousness or lack of sensitivity of current assays for the detection of viral genomes and/or proteins. With an analytical sensitivity similar to or greater than that of rRT-PCR, the PLAs are sensitive, rapid, reproducible, and reliable tools for the diagnosis of NDV in farm samples and could be an alternative to rRT-PCR, with clear potential for multiplex applications for the detection of various avian viruses.
Farm samples
The study was performed on 40 poultry samples, including tracheal (ET) and cloacal (EC) swabs, as well as internal organs consisting of allantois (A), kidneys (K), lung (L), liver (Li), and trachea (T), collected from chickens showing clinical signs suggestive of NDV. The samples were collected from eight farms, located in the provinces of Bizerte, Nabeul, Ben Arous, Sidi Bouzid, Beja, Ariana, Sfax, and Jendouba, and sent rapidly to the diagnostic laboratory at the Institute Pasteur of Tunis for analysis. Field samples are sent by public or private veterinarians as part of their routine monitoring of poultry farms, or when reporting a suspicious disease, especially NDV and avian influenza virus (AIV) infections, which are under continuous surveillance in Tunisia.
Aptamers
Aptamer production and validation were previously described by Marnissi et al. [15]. Briefly, selection was performed using three rounds of systematic evolution of ligands by exponential enrichment (SELEX). The highly enriched ssDNA pool was then sequenced by quantitative high-throughput sequencing, and the results were analyzed using the FASTAptamer Toolkit. The reads were sorted by copy number and then clustered into families according to their sequence homology. High-frequency aptamer sequences were tested for affinity and specificity, and further validated in a sandwich enzyme-linked aptamer assay (ELAA) for the rapid detection of NDV in farm samples.

Table 3. List of DNA oligonucleotides.
Viral nucleic acid extraction
Viral RNA was extracted from the LaSota vaccine strain and from NDV present in the collected farm samples using TRIzol reagent (Invitrogen, Carlsbad, CA, USA), according to the manufacturer's instructions. The final extracted pellets were suspended in 20 µL of RNase-free water and stored at −80°C.
Sandwich ELAA
Sandwich ELAA was performed in streptavidin-coated 96-well microtiter plates (Thermo Fisher Scientific) as follows: 100 µL of 10 nM biotin-Apt_NDV03 was denatured by heating at 95°C for 5 min and cooled on ice for 10 min. The aptamers were added to the microtiter plate and incubated for 1 h at room temperature (RT). The wells were then washed three times with 200 µL of washing buffer. Typically, 1 µL of each sample aliquot was diluted in 100 µL of 1× PBS, added to each well, and incubated for 1 h at RT. The wells were washed three times, and 100 µL of 10 nM digoxigenin-labeled Apt_NDV01 was added. After 1 h of incubation at RT and three washes, an anti-digoxigenin antibody (1:2000) was added to each well and left to react for 30 min. The wells were washed three times, and a solution of OPD was added. Finally, the reaction was stopped with 2 N H₂SO₄ and the absorbance was recorded at 492 nm.
One-step real-time reverse transcription PCR
One-step rRT-PCR was conducted in a total volume of 15 µL using the AgPath-ID™ One-Step RT-PCR Kit (Applied Biosystems, Artenay, France) by mixing 7.5 µL of 2× RT-PCR buffer, 0.6 µL of 25× RT-PCR enzyme mix, 2 µL of RNA template, 0.3 µM of each forward and reverse primer specific to the matrix (M) gene, 0.2 µM of TaqMan™ probe (Thermo Fisher, Artenay, France), and RNase-free water to reach the final volume of 15 µL. The mixture was incubated at 45°C for 10 min, followed by 95°C for 10 min and 45 cycles of 95°C for 15 s and 60°C for 45 s. A negative control, in which the sample was replaced by water, was included in each PCR run.
PLA probe preparation and immobilization of aptamers on microparticles
The PLA probes were prepared using a slightly modified version of the PLA protocol reported by Oliveira et al. [29]. Two PLA probes were prepared by mixing 10 µL of 100 nM SCL1 DNA oligonucleotide with 10 µL of 100 nM biotinylated Apt_NDV01, and 10 µL of 100 nM SCL2 DNA oligonucleotide with 10 µL of 100 nM biotinylated Apt_NDV03.
Following a 10-s spin at 21,062 g, the mixtures were incubated at RT for 60 min. Then, 90 µL of probe storage buffer was added to the mixtures, and the incubation was continued for 30 min at 20°C. The probes may be kept at 4°C for up to 6 months. Prior to use, the aptamers were denatured by heating at 95°C for 5 min and then cooled on ice for 10 min.
For the solid-phase PLA, magnetic beads were used as a solid support on which an aptamer was immobilized as the capture affinity binder. For this, 100 µL of streptavidin-coated Dynabeads (10 mg·mL⁻¹) was washed three times with 500 µL of washing buffer. The biotinylated Apt_NDV01 was then diluted to 50 nM in storage buffer, from which 200 µL was denatured by heating at 95°C for 5 min and cooled on ice for 10 min. The aptamer solution was then mixed with the magnetic beads, followed by incubation for 1 h at RT with gentle shaking. The magnetic beads were collected on a magnet for 30 s, and the supernatant was discarded. The beads were then washed twice with 500 µL of PLA buffer and resuspended in 200 µL of storage buffer. The conjugates may be kept at 4°C for up to 6 months.
Sample preparation
An aliquot of 10⁶ EID₅₀·mL⁻¹ of the LaSota vaccine strain, diluted in PLA buffer, was used to prepare a twofold serial dilution. For each PLA reaction, 45 µL of diluted sample was used.

Homogeneous PLA
The PLA probe mixture was prepared by diluting each PLA probe separately in PLA buffer to 1 nM. Equal volumes of each probe were then mixed in PLA buffer to a final concentration of 500 pM per probe and incubated at RT for 5 min.
For each homogeneous PLA reaction, 2 µL of the PLA probe mix was added to 2 µL of each sample diluted with DMSO dilution buffer and incubated for 2 h at RT. For clinical samples, a positive control consisting of 2 µL of diluted LaSota vaccine, as well as a negative control in which the sample was replaced by 2 µL of PBS/0.1% BSA, were included. After the incubation, 2 µL of the mixture was added to 25 µL of ligation/PCR mix (Table 4) and incubated for 5 min at RT. qPCR was then performed with an initial step of 95°C for 2 min, followed by 40 cycles of 95°C for 5 s and 60°C for 30 s. All reactions were carried out in triplicate.
Solid-phase PLA
The previously prepared Apt_NDV01-coated magnetic beads were vortexed to a homogeneous suspension, and 1 µL was pipetted into a 1.5-mL tube. The storage buffer was discarded, and the beads were mixed with 5 µL of PLA buffer. The bead suspension was then mixed with 45 µL of diluted sample in a PCR strip; the mixture was vortexed briefly and incubated for 1 h at RT under rotation. The microparticles were washed three times, 50 µL of PLA probe mix (250 pM of each probe) was added, and the mixture was incubated for 1 h at RT under rotation. The beads were washed twice, 25 µL of ligation and PCR mix (Table 4) was added to the beads, and qPCR was performed as described above. For each run, a positive control containing diluted LaSota vaccine and a negative control consisting of PBS with 0.1% BSA were included. All washing steps were performed using a DynaMag™-Spin Magnet (Thermo Fisher Scientific, USA).
Statistics
Statistical analyses were performed using the one-way ANOVA test in the Simple Interactive Statistical Analysis online tool (http://www.quantitativeskills.com/sisa/index.htm). Results were considered significantly different if P < 0.01. The one-way ANOVA test was also used to calculate the 95% confidence interval (CI) of each mean.
The mean, the standard deviation (SD), and the percentage coefficient of variation (CV%) of the optical densities (OD at 492 nm) for the sandwich ELAA, and of the threshold cycles (Ct) for rRT-PCR and the PLAs, were further analyzed using Excel software. For the rRT-PCR and qPCR readouts, the cutoff point was fixed empirically at 40 Ct [30].
The relative variability (CV%) between the triplicates of a sample was calculated as CV% = (SD_triplicates / mean_triplicates) × 100 and was defined as significantly different if CV% < 20%. STATPLUSPRO version 5.9.8 was used to calculate the LOD [31], the LLOQ, the ULOQ, the MDD, and the dynamic range of each method, as described by Marnissi et al. [15].
Briefly, the LOD was calculated as LOD = background signal + (3 × SD_mean), where the background signal corresponds to the mean value of three negative control samples. The LLOQ is the lowest concentration at which the analyte can be accurately identified and at which certain predefined bias and imprecision targets are achieved; it was determined as LLOQ = LOD + (10 × SD_background). The ULOQ was determined as ULOQ = X − (3 × SD_X), and the MDD was determined as MDD = 2 × SD_background, where SD_background is the standard deviation of the background mean. The assay sensitivity was calculated as sensitivity = TP/(TP + FN), whereas the assay specificity was calculated as specificity = TN/(TN + FP), where TP = true positive, FP = false positive, TN = true negative, and FN = false negative.
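As a worked illustration of these definitions, a minimal sketch follows. It assumes that the background signal is the mean of three negative-control readouts and takes SD_mean and SD_background as the standard deviation of those controls; the numeric values are hypothetical.

```python
import statistics

def assay_limits(negative_controls):
    """Compute LOD, LLOQ and MDD from negative-control readouts,
    following the definitions given above."""
    background = statistics.mean(negative_controls)
    sd = statistics.stdev(negative_controls)
    lod = background + 3 * sd   # limit of detection
    lloq = lod + 10 * sd        # lower limit of quantification
    mdd = 2 * sd                # minimal detectable dose
    return lod, lloq, mdd

# Hypothetical negative-control readouts (e.g., OD at 492 nm):
lod, lloq, mdd = assay_limits([0.051, 0.048, 0.055])
print(f"LOD={lod:.3f}, LLOQ={lloq:.3f}, MDD={mdd:.3f}")
```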
Supporting information
Additional supporting information may be found online in the Supporting Information section at the end of the article.
Table S1. Determination of 95% confidence intervals (CI) and coefficients of variability. The 95% confidence interval (CI) of each mean was calculated using the one-way ANOVA test; a range between the upper and lower values calculated from each sample was thus determined. The relative variability between the triplicates of each sample was defined as significantly different if CV% < 20%, using the same statistical test.
Table S2. Clinical performance of the homogeneous PLA, solid-phase PLA, sandwich ELAA, and rRT-PCR tests. Results of the diagnosis of NDV in tracheal (ET) and cloacal (EC) swabs and internal organs, consisting of allantois (A), kidneys (K), lung (L), liver (Li), and trachea (T), collected from 40 chickens with suspected NDV infection. Results of the homogeneous PLA, solid-phase PLA, sandwich ELAA, and rRT-PCR tests are reported as positive or negative for each sample.
Highly stable carbon nanotube field emitters on small metal tips against electrical arcing
Carbon nanotube (CNT) field emitters that exhibit extremely high stability against high-voltage arcing have been demonstrated. The CNT emitters were fabricated on a sharp copper tip substrate that produces a high electric field. A metal mixture composed of silver, copper, and indium micro- and nanoparticles was used as a binder to attach the CNTs to the substrate. Due to the strong adhesion of the metal mixture, the CNTs were not detached from the substrate even after many intense arcing events. Through electrical conditioning of the as-prepared CNT emitters, vertically standing CNTs of almost the same height were formed on the substrate surface and most of the loosely bound impurities were removed from the substrate. Consequently, no arcing was observed during the normal operation of the CNT emitters, and the emission current remained constant even after intentionally inducing arcing at current densities up to 70 mA/cm².
Background
Carbon nanotubes (CNTs) are widely used as field emission electron emitters for X-ray tubes [1-4], field emission displays [5], and high-resolution electron beam instruments [6,7] because of their excellent electron emission properties, chemical inertness, and high electrical and thermal conductivity [8,9]. In spite of these superior characteristics, practical applications of CNT field emitters to devices requiring high-voltage operation are limited by the unstable electron emission properties of the CNT emitters. The electron beam current emitted from CNT emitters can fluctuate or degrade because the CNTs are damaged by the back bombardment of ions produced from the residual gas [10,11] or are structurally deformed by excessive Joule heating [12,13]. More seriously, the emission current can drop abruptly when CNTs are detached from the substrate [14]. If a very high current (300 nA per single CNT) flows through a CNT, the adhesion between the CNT and the substrate weakens due to resistive heating and the CNT can be peeled off the substrate [14,15]; alternatively, a strong electric field exerts an electrostatic force on the CNTs, leading to their detachment [15,16]. Weak adhesion of CNTs to a substrate thus facilitates their removal.
In addition, if CNT emitters are operated at a high voltage or in a high electric field, electrical arcing (or vacuum breakdown) can occur. Arcing can be initiated by detached CNTs [17], impurities on the CNTs or substrates [18,19], protrusion of CNTs [10], low operating vacuum [10], and a very high electric field [20-23]. Since arcing is accompanied by a very high current flow and can produce a plasma channel near the emitter, the CNTs are seriously damaged or sometimes almost completely removed from the substrate by arcing events [17,20]. Detachment of CNTs from a substrate is an irreversible, catastrophic phenomenon for device operation [14]. In addition to the detachment of CNTs, arcing induces a sudden voltage drop, and thus device operation is stopped. Therefore, for the stable operation of a device using CNT emitters, arcing should be prevented. In particular, CNT emitters on small metal tips (diameter < 1 mm) are necessary for miniature X-ray tubes [1-4] and micro-focus X-ray tubes [6,7]. Small metal tips produce a much higher electric field than flat substrates at the same applied voltage due to their sharp geometry. As a consequence, CNT emitters on small metal tips can suffer from more serious and frequent arcing, and hence the stable operation of such CNT emitters against arcing is a major issue [4,14].
So far, few papers have reported CNT emitters that can withstand arcing, although some methods to reduce arcing events have been reported, including operation of the CNT emitters under ultrahigh vacuum (approximately 10⁻⁹ Pa) [24,25], plasma treatment of the emitters [10,26], and removal of organic impurities by firing [19]. Here, we present an approach to fabricating CNT emitters on small metal tips that exhibit extremely high stability against arcing. Using a metal alloy as a binder, the CNTs can be strongly attached to a metal tip substrate. Due to the strong adhesion, the CNTs emit constant currents even after intense arcing events. In addition, the CNT emitters can be pre-treated with an electrical conditioning process with the help of the strong adhesion, and almost no arcing events are observed during normal operation.
Methods
The fabrication process of the CNT emitter is schematically displayed in Figure 1a. Commercial single-walled CNTs (model: CNT SP95, Carbon Nano-material Technology Co., Ltd., Pohang-si, South Korea) were used for the fabrication of the CNT emitters. The CNTs were purified using a hydrothermal treatment with a mixture of nitric acid and sulfuric acid for better CNT dispersion and complete removal of amorphous carbon [27]. After a CNT solution consisting of 1 wt.% CNT and 99 wt.% 1,2-dichlorobenzene (Sigma-Aldrich, St. Louis, MO, USA) was sonicated at room temperature for 2 h, the CNT solution (3 μl) was mixed with a commercial metal mixture binder (0.025 g; Premabraze 616, Lucas-Milhaupt, Inc., Cudahy, CA, USA). The metal mixture binder is composed of 61.5 wt.% silver, 24 wt.% copper, and 14.5 wt.% indium micro- and nanoparticles. Metal wires of copper, kovar, stainless steel (SUS), tungsten, silver, and titanium with a diameter of 1 mm were used as substrates for the emitters. One end of each metal wire was mechanically polished to obtain a flat surface. Around 0.5 μl of the CNT/metal binder mixture was placed on a metal tip substrate. The CNT/metal binder mixture dried out very quickly, in approximately 5 min, due to the high volatility of dichlorobenzene. Subsequently, an annealing process was carried out under vacuum at approximately 10⁻⁶ Torr at different temperatures. For comparison, a CNT emitter was prepared using silver nanoparticles (NPs; DGH, Advanced Nano Products Co., Ltd., Buyong-myeon, South Korea) under similar conditions.
The morphologies of the fabricated CNT emitters were characterized using a field emission scanning electron microscope (FESEM; Hitachi S-4800, Chiyoda-ku, Japan). The adhesive force of the CNT/metal binder coating on a substrate was measured by a pencil hardness test, as described in American Society for Testing and Materials (ASTM) standard D3363. The field emission properties of the fabricated CNT emitters were characterized in a vacuum chamber, which is schematically shown in Figure 1b. A diode-type configuration with a copper disc (diameter, 30 mm) acting as an anode was employed for the field emission test. A negative high voltage of 0 to −70 kV was applied to the CNT emitter while the Cu anode was grounded. The distance between the CNT emitter and the anode was fixed at 15 mm. In order to protect the high-voltage power supply from high-voltage arcing, a current-limiting resistor (resistance, 10 MΩ) was installed between the power supply and the emitter.
Results and discussion
The role of the metal binder is to attach the CNTs to the substrate. Silver NPs have been widely used as a metal binder due to their good electrical conductivity and good contact with CNTs [3,4,28]. To investigate their performance as a binder, we prepared a CNT emitter on a tungsten metal tip (diameter, 1 mm) using silver NPs (Figure 2a). The annealing temperature used to melt the silver NPs was 750°C. As shown in Figure 2b, the fabricated CNT emitter exhibited very poor stability. The electron current density emitted from the emitter was initially 57.3 mA/cm² at an applied voltage of 35.5 kV; however, the current density was dramatically reduced to 13.6 mA/cm² over a 70-min operation (Figure 2b). Frequent arcing was observed during the test, and the emission current density slowly decreased as the number of arcing events increased. A FESEM image clearly shows that approximately 70% of the CNTs and silver binder attached to the substrate were removed after the test (Figure 2c). These results indicate that silver NPs cannot work as a good binder for a CNT emitter that must withstand high-voltage arcing. To analyze the poor performance of the CNT emitter, the adhesion force between the silver NP binder and the tungsten substrate was characterized with a pencil hardness test. For this characterization, the silver NPs were annealed on a tungsten sheet (10 × 10 mm²) at 750°C. The pencil hardness of the silver film attached to the tungsten sheet was 2B, which is a soft level as determined by ASTM D3363. Such poor adhesion of the silver film might be improved by changing the substrate, and thus we prepared the silver film on other metal sheets such as SUS, titanium, kovar, and copper. However, the pencil hardness of the silver film did not exceed 1B, reflecting that the adhesive force of the silver binder is not very high on metal substrates.
As a candidate for a good binder, we tried a brazing filler material used to join two different metals. The brazing filler material is a metal mixture composed of silver, copper, and indium micro- and nanoparticles, as described in the 'Methods' section. Before using this material as a binder for the CNT emitters, its adhesion behaviour on different substrates was analysed. As shown in Figure 3a,b,c,d, the metal mixture melted at 750°C, but the melted metal mixture aggregated spherically on the tungsten, SUS, titanium, and silver substrates, suggesting poor wettability on these substrates. However, thin films of the metal mixture binder formed uniformly on the kovar and copper substrates (Figure 3e,f, respectively). In addition, pencil hardness tests revealed that the hardness of the metal mixture films on the kovar and copper substrates was 4H. This indicates that the metal mixture films were very strongly attached to the substrate and that the adhesive force to the substrate was remarkably enhanced compared with silver NPs.
Based on this, CNT emitters were fabricated on kovar and copper tips using the metal mixture as a binder. The metal mixture was annealed at 750°C. FESEM images of the CNT emitter prepared on a kovar tip show that the CNTs were uniformly coated on the kovar tip, and vertically aligned CNTs were clearly observed (Figure 4a). The emission current density remained almost constant with time after electrical conditioning, which will be described later (Figure 4b). In addition, even though frequent arcing occurred, the metal binder and the CNTs still adhered to the tip substrate (Figure 4c). Note that the metal binder and the CNTs were seriously detached from the substrate when silver NPs were used as a binder. Therefore, the CNT emitters fabricated using the metal mixture binder exhibited very high stability against arcing.
However, the frequent arcing observed during field emission prevents stable operation of the CNT emitters. As displayed in Figure 5a, approximately 160 arcing events occurred at an emission current density of 40 mA/cm², even after a conditioning process. The reason for such frequent arcing was attributed to non-melted material in the metal mixture binder. Although the metal mixture appeared to have melted to form a film on the tip substrate after annealing at 750°C, a FESEM image reveals that some NPs in the mixture were not completely melted and were exposed at the surface (Figure 5b). Since the non-melted NPs were loosely attached to the binder film, they could easily be detached from the surface by a high electric field [14-16]. When the NPs were detached, arcing could be induced; the arcing continued until all the loosely bound NPs were completely removed from the surface. This is why frequent arcing events were observed at the CNT emitters. To overcome this problem, the annealing temperature was increased to 900°C. A thin and uniform film of the CNT/metal binder mixture formed on a kovar tip substrate, and no NPs were observed on the surface because they were completely melted at 900°C. Unfortunately, however, the surface of the kovar substrate was seriously damaged at this temperature, limiting the practical applications of the CNT emitters (inset of Figure 5c).
However, no damage to the tip substrate was observed when copper was used as the substrate. Figure 6 shows FESEM images of the CNT emitter fabricated on a copper tip. A uniform film of the CNT/metal binder mixture with a thickness of approximately 20 μm was obtained on the copper tip after annealing at 900°C (Figure 6a). The magnified FESEM images of the CNT/metal binder mixture (Figure 6b) show that vertically standing CNTs of different heights (Figure 6c), as well as CNTs lying on their side (Figure 6d), were formed on the surface. One end of the vertically standing CNTs was generally embedded in the binder film, suggesting strong adhesion to the coating. In contrast, agglomerates of amorphous carbon or CNTs (rectangular regions in Figure 6d) that were not bound to the coating material were also observed. These agglomerates were attributed to the incomplete purification process described in the 'Methods' section, and they exert negative effects on the stable operation of the field emitter.
In order to remove the loosely bound carbon agglomerates, the as-prepared CNT emitters were treated with electrical conditioning processes [29]. Electrical conditioning is a process that intentionally induces arcing to remove materials that negatively affect field emission. The conditioning was carried out by increasing the applied electric field at the emitters in steps of 0.033 V/μm (corresponding to 500 V in these experiments) up to 0.83 V/μm (Figure 7a). The electric field at each step was maintained for 5 min, and three runs of the conditioning process were performed for each CNT field emitter. It should be noted that the electric field (abscissa) shown in Figure 7a was calculated by dividing the applied voltage by the emitter-anode distance; the actual electric fields are much higher than the abscissa values. This is because small metal tips (diameter, 1 mm) were used as the substrates of the CNT emitters in our experiments, and such small tips produce a higher electric field than a flat substrate at the same applied voltage [30]. While the electric field was increasing, many arcing events occurred because loosely bound materials on the surface were removed by the strong electric field [14-16]. After three runs of the electrical conditioning process, the loosely bound materials shown in Figure 6d were almost completely removed (Figure 7d).

Meanwhile, arcing events inevitably occur during field emission at emission current densities higher than a critical density of approximately 50 mA/cm² [22,23]. This is because the emitting CNTs are self-heated by Joule heating, which can result in thermal runaway above the critical current density. Due to the thermal runaway, the temperature at the CNT tip apex increases, and the apex regions can melt or evaporate. Furthermore, CNTs can break at defect sites, because the electrical resistance at defect sites is higher than in other regions and hence the temperature can rise sharply there. Since taller CNTs contribute higher field emission currents, thermal runaway is more serious in longer CNTs. As a result, longer CNTs become shorter [29], and vertically standing CNTs with more uniform heights remained on the substrate after repetitive conditioning processes (Figure 7c). Consequently, through the electrical conditioning processes, loosely bound materials were removed from the surface and, simultaneously, the heights of the CNTs became more uniform. Many arcing events occurred during the conditioning process; however, this arcing ultimately led to more stable field emission because the materials that induce arcing were removed in advance.

Figure 8 shows the typical field emission characteristics of the fabricated CNT emitters after the conditioning processes. Current density versus electric field (J-E) curves were measured repeatedly. The J-E curves follow the Fowler-Nordheim (FN) equation [31] well (inset of Figure 8a), with a comparatively high field enhancement factor (β) of about 23,000. For comparison, the J-E curves of the CNT emitters during the conditioning processes are included (Figure 7a). As the conditioning process continued, the threshold electric field corresponding to 10 mA/cm² increased from 0.4 to 0.54 V/μm and the J-E curves changed. This is because long CNTs gradually become shorter during the conditioning processes, and the emission current from each CNT is reduced.
However, after the conditioning processes, the J-E curves remained almost constant over repeated field emission tests (Figure 8a). Notably, the emission current density reached more than approximately 100 mA/cm² in the J-E measurements, and a few arcing events occurred at such high current densities; in contrast to the conditioning process, however, the J-E curves practically did not change after these arcing events. Figure 8b shows the temporal behaviour of the emission current densities at different electric fields, measured at a medium vacuum of approximately 10⁻⁵ Torr. No arcing occurred at emission current densities lower than 50 mA/cm², and the emission current densities remained almost constant with time. When the current density was increased to 70 mA/cm², which is higher than the critical current density, four arcing events (marked by blue arrows in Figure 8b) occurred over a 70-min operation. However, the emission current density did not change after these arcing events, as clearly shown in Figure 8b. Therefore, the emitters could be operated without arcing below 50 mA/cm², and constant current densities were stably emitted even when arcing was induced at higher electric fields, demonstrating that the fabricated CNT emitters exhibit very stable field emission properties. The high stability of the field emitters with high β values was attributed to the vertically standing CNTs being strongly attached to the substrates through the metal mixture binder.
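For readers who wish to reproduce the field enhancement factor, the following sketch shows one common way to extract β from the slope of an FN plot of ln(J/E²) versus 1/E using the standard FN relation. The work function and the J-E data below are illustrative assumptions, not values measured in this work.

```python
import numpy as np

B = 6.83e3  # FN constant in eV^(-3/2) * V / um, for E expressed in V/um
PHI = 4.8   # assumed CNT work function in eV

def beta_from_fn_plot(E, J):
    """Estimate the field enhancement factor beta from the slope of the
    Fowler-Nordheim plot ln(J/E^2) versus 1/E, where slope = -B*PHI^1.5/beta."""
    x = 1.0 / np.asarray(E)                    # 1/E (um/V)
    y = np.log(np.asarray(J) / np.asarray(E) ** 2)
    slope, _ = np.polyfit(x, y, 1)             # linear fit of the FN plot
    return -B * PHI ** 1.5 / slope

# Hypothetical J-E data (E in V/um, J in mA/cm^2) for illustration only:
E = np.array([0.45, 0.55, 0.65, 0.75, 0.83])
J = np.array([2.0, 10.0, 30.0, 65.0, 100.0])
print(f"beta ~ {beta_from_fn_plot(E, J):.0f}")
```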
Conclusions
CNT emitters were fabricated on copper tip substrates using a metal mixture composed of silver, copper, and indium micro- and nanoparticles as a binder. The metal mixture strongly attached the CNTs to the tip substrate. Due to the strong adhesion, the CNT emitters could be pre-treated with an electrical conditioning process without seriously damaging the CNTs, even though many intense arcing events were induced at the small and sharp geometry of the tip substrate. Impurities loosely bound to the substrates were almost completely removed, and the CNT heights became uniform after the electrical conditioning process. Consequently, no arcing events were observed from the CNT emitters during normal operation at current densities below 50 mA/cm². Moreover, even when arcing was induced at a higher current density of 70 mA/cm², the emitters withstood the arcing and the emission current remained constant with time. Due to the strong binding of the CNTs to the substrates, the CNTs were not detached from the substrates even by the arcing events.

FIGURE 8 Field emission properties and emission stabilities of the fabricated CNT emitters after the electrical conditioning. (a) Field emission properties of the fabricated CNT emitters after the conditioning process. Five J-E measurements were performed. One arcing event occurred at the maximum current density of the fourth run (pink arrow). The inset graph and image in (a) are the FN plots of the J-E curves of the CNT emitter and the wettability of the metal mixture binder on the copper tip substrate after annealing at 900°C, respectively. (b) Emission stabilities of the fabricated CNT emitters at different electric fields.
Large‐scale physical facility and experimental dataset for the validation of urban drainage models
Numerical models are currently the main tool used to simulate the effects of urban flooding. The validation of these models requires thorough and accurate observed data in order to test their performance. The current study presents a series of laboratory experiments in a large-scale urban drainage physical facility of approximately 100 m² that includes roofs, streets, inlets, manholes, and sewers. The facility is equipped with a rainfall simulator as well as surface runoff and pipe inflow generators. The experiments were divided into two sets. In Set 1 the surface runoff was generated exclusively by the rainfall input, while in Set 2 the rainfall simulator was used in combination with the runoff generators. In all the tests the water discharge was measured at the inlets, roofs, and outfall, and the water depth was measured at different locations of the facility. The experimental tests were replicated numerically using the urban drainage model Iber-SWMM. The experimental results show that, even in a relatively small catchment, the peaks of the hydrographs generated at each element of the facility during intermittent rainfall are significantly attenuated at the catchment outlet. The agreement between the experimental and numerical results shows that there are some differences in the hydrographs generated at each element, but that these differences compensate for each other and disappear at the outfall. The results provide the research community with a thorough, high-resolution dataset obtained under controlled laboratory conditions in a large-scale urban drainage facility, something which has not previously been available.
| INTRODUCTION
Extreme rain events, which are becoming more and more frequent, test the capacity and operation of urban drainage systems. In many cases the drainage networks of cities are obsolete, deficient, or have low levels of retrofit and maintenance, which can lead to human and material losses as a result of urban flooding and environmental pollution (Cea & Costabile, 2022). In addition, new urban developments and new impervious areas add extra runoff volumes to existing systems that must be managed and evacuated, adding further stress to the drainage network.
During rainfall, the overland flow is two-dimensional and is conditioned by the complexity of the urban configuration and by its interaction with the drainage network. Understanding and simulating these complex flows is essential for assessing flood risk and sewer performance, and for proposing mitigation actions. Currently, urban drainage software is able to simulate two-dimensional surface flow and its interaction with the sewer network in an integrated way, thanks to our current understanding of urban hydrology processes, significant advances in computational performance, and the development of high-resolution data acquisition technologies. These software packages, known as 2D/1D dual urban drainage models, solve the two-dimensional shallow water equations (SWE) with a fully distributed rainfall-runoff transformation approach on the surface, while solving the one-dimensional Saint-Venant equations in the sewer network. A wide range of commercial 2D/1D urban drainage software is available, such as InfoWorks (Bertrand et al., 2022), MIKE (Haghighatafshar et al., 2018), and Bentley (Ramos et al., 2017). Despite these recent developments, free software has yet to be used extensively in real projects, remaining largely in the research sphere (Fraga et al., 2017; Yin et al., 2020; Chang et al., 2021) or enjoying only limited distribution. In general, all 2D/1D software shares the same structure: a 2D overland flow engine, a 1D sewer flow engine, and a core that synchronizes both engines and exchanges information between them. While the 2D engine is usually self-developed software, many 2D/1D urban drainage models use the open-source engine of the Storm Water Management Model (SWMM) to solve the in-sewer processes (Leandro & Martins, 2016; Barreiro et al., 2022).
The reliability and performance of urban drainage models must be validated using observed data in order to ensure that the numerical solution adequately represents reality. The availability of observed data is usually very limited due to the cost and complex set-ups required to acquire it, both in field and laboratory campaigns. Datasets obtained in field campaigns are typically used to calibrate model parameters in real urban basins, but usually contain measurement errors such as uncertainties in the acquisition system or in the sensor calibration process (Fraga et al., 2016). On the other hand, the use of experimental data obtained in laboratory facilities is more suitable for assessing the performance of numerical models, since tests are carried out under strictly controlled conditions and hence there is far lower uncertainty in the input and observed data (Naves et al., 2019; Addison-Atkinson et al., 2023). For this reason, in recent years a number of studies using urban drainage laboratory facilities have been conducted to better understand the relevant processes of urban hydrology and the hydrodynamics of urban flooding (Mignot et al., 2019). Several of these studies focused on the influence of street layouts and the presence of infrastructure on flooding processes (Dong et al., 2021; Naves et al., 2020; Naves et al., 2021), including the bidirectional flow exchange between the surface and the sewer network under non-surcharge and surcharge conditions (Fraga et al., 2017; Rubinato et al., 2017), while others analysed the hydraulic performance of specific urban drainage components such as inlets (Russo et al., 2021), manholes (Rubinato et al., 2022), or roofs (Sañudo et al., 2022).
Many of these experiments were carried out on scale models of real urban street layouts, either with simplified geometries (Cea, Garrido, Puertas, Jácome, et al., 2010; Cea, Garrido, & Puertas, 2010; Mignot et al., 2020), without considering some parts of the drainage system (e.g., the roofs of buildings), or considering drainage elements in an isolated way (Rubinato et al., 2018). In addition, in most cases the water input was generated by an upstream flow, with only a few studies using rainfall generators (Naves et al., 2021; Al Mamoon et al., 2019).
The current research presents a set of experimental tests carried out in a new large-scale urban drainage physical model, and aims to study rainfall-runoff processes in an integrated way at near-real scale, including roofs, streets, manholes, inlets, and sewers. Considering all the elements in the same facility allows for a holistic study of their operation and enables the evaluation of each element's contribution to the downstream discharge point. Understanding these contributions provides the basis for studies in urban areas where simplifications are required or high-quality data are not available. The facility represents a section of an urban neighbourhood and is equipped with a high-precision rainfall simulator, two surface runoff generators, and two pipe inflow generators. The facility setup makes it possible to control the rainfall and flow inputs and to measure hydraulic variables at custom points. All the information necessary to replicate the experimental tests is provided with high spatial resolution. In addition, a numerical validation using the Iber-SWMM software was carried out based on the presented dataset. Iber-SWMM (Sañudo et al., 2020) is a 2D/1D hydraulic model that combines the freely distributed hydraulic model Iber (Bladé et al., 2014) and the 1D sewer network model SWMM (Rossman, 2015).
| Description of the urban drainage facility
The large-scale urban drainage facility (hereafter the block) is located in the Hydraulics Laboratory of the Centre of Technological Innovation in Construction and Civil Engineering (CITEEC) at the University of A Coruña (Spain). The facility, which measures approximately 100 m², consists of three main parts (Figure 1): a rainfall simulator, a street surface (including roofs and pavements), and a sewer network linked to it. The rainfall simulator can generate constant intensities of 30, 50, and 80 mm/h with high spatial uniformity. The street surface (Figure 2) consists of a T-intersection of two 2.5-m-wide concrete roads and four blocks of buildings. The roadway and the building blocks are separated by a concrete tiled pavement, 30 cm wide, and a 6-cm-high concrete curb. The roadway and pavements have a longitudinal slope of 1% and a transversal slope of 2%, respectively. The buildings have ceramic tiled roofs and semi-circular gutters. More information on the rainfall generator, the characterization of the rainfall, and the configuration of the roofs is given in Sañudo et al. (2022). The facility is fully equipped with ultrasonic depth sensors and flowmeters. In addition, it has a pumping system that allows the generation of two controlled surface runoff flows from the upstream boundaries to the surface of each road (SD) and two controlled inflows at the beginning of each pipe (PI). The sewer network (Figure 2) consists of four manholes (MH), two pipelines (PL), two boundary inflow gullies (PI) at the pipelines' upstream boundaries, and an outfall (O) that spills all the water of the facility into an open channel equipped with a triangular weir.
The main pipeline, with an inner diameter of 240 mm, covers the longitudinal dimension of the facility and connects the upstream inflow PI1, the manholes MH1, MH2, MH3, and MH4, and the outfall (O1).
Additionally, a transversal pipeline, with an inner diameter of 194 mm, joins the inflow PI2 with the manhole MH3. Both pipelines intersect at MH3 and are made of methacrylate. The manholes have an outer diameter of 800 mm and a thickness of 20 mm and are not hermetically sealed, so water can enter the sewer system through them.
Their surface diameter is 580 mm, and they are closed with an iron cover. The surface and the sewer system are linked through four rectangular inlets of 0.5 × 0.2 m and a downstream transversal grate of 2.5 × 0.13 m that covers the width of the roadway. Each inlet has a drain box of 0.46 × 0.16 × 0.38 m and is directly connected to the nearest manhole by a 90-mm PVC pipe. Neither the drain box nor the pipe connection limits the inlet flow capacity. Similarly, the grate has its own drain box of the same length and width, which is 8 cm deep.
The grate is directly connected to MH4.
Roof runoff is conveyed through the gutters to the downspouts, which discharge into four gully pots, one for each roof. The roof gully pots are connected to their associated manholes by 90-mm PVC pipes. Thus, ROOF1 is connected to MH1, ROOF2 to MH4, ROOF3 to MH3, and ROOF4 to MH2 (Figures 1 and 2). The origin of the local coordinate system is established at the geometric centre of the facility and at the floor level of the laboratory. All the coordinates and georeferenced data are referenced to this local system.
FIGURE 1 Conceptual scheme of the urban drainage facility.
| Experimental procedure
The aim of the experiments was to achieve an accurate characterization of the surface and in-sewer runoff by measuring the discharges at the roofs, the inlets, and the outlet of the whole facility. In addition, depths at different surface points were also measured. The tests were divided into two sets: Set 1, in which all the runoff is generated by the rainfall simulator; and Set 2, in which the runoff is generated by the rainfall simulator plus both runoff generators. Pipe inflow generators were not used in these experiments, and hence the sewer base flow was not considered. Wet antecedent conditions were considered in all the experiments due to the long period required for the facility to become completely dry. To prevent residual flows from previous tests, a 30-min time gap was left between tests.
Six different hyetographs with varying intensities and durations were generated in the experiments (Figure 4). Hyetographs H1, H2, and H3 are defined by a constant rainfall intensity of 30, 50, and 80 mm/h, respectively, sustained over a period of 4 min. Hyetographs H4 and H5 represent intermittent rainfall consisting of 15, 30, and 45 s rain intervals with 45 s of no rain between them. Hyetograph H6 is characterized by a quasi-triangular symmetric pattern, with six rain periods of 30 s each and a maximum intensity of 80 mm/h. First, Set 1 was carried out by performing six tests, one for each hyetograph, using only the rainfall simulator. Then, for Set 2, the tests of Set 1 were repeated adding a constant flow of 1 L/s in each of the runoff generators. The runoff flows were steady during the tests.
Table 1 shows the configurations for the 12 tests performed.
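For readers replicating the forcing, the sketch below shows one way to build the intermittent hyetographs as discrete time series; the sampling step and the function name are our assumptions, and the exact pulse pattern of each test should be taken from Figure 4 and Table 1.

```python
import numpy as np

def intermittent_hyetograph(intensity_mmh, rain_s, gap_s, cycles, dt=1.0):
    """Intermittent hyetograph (mm/h) sampled every dt seconds: 'cycles'
    repetitions of rain_s seconds of rain followed by gap_s seconds of no rain."""
    pulse = np.full(int(rain_s / dt), intensity_mmh)
    gap = np.zeros(int(gap_s / dt))
    return np.tile(np.concatenate([pulse, gap]), cycles)

# Example loosely patterned on H4/H5: 30-s pulses of 80 mm/h with 45-s gaps
h = intermittent_hyetograph(intensity_mmh=80.0, rain_s=30, gap_s=45, cycles=3)
```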
The discharge was measured at the outlet of each roof, at each inlet, at the downstream grate, and at the outlet of the facility.
Generally, the manholes in sewer networks are correctly sealed to prevent odours and surface runoff intakes, but this is not the case in stormwater networks. Since manhole covers are usually not hermetically sealed in real storm sewer networks, it was decided not to seal them in the facility either. Therefore, some flow entered the manholes from the surface through their boundaries, and for this reason the discharge entering the sewer network through the manholes was also measured. In addition to the discharge measurements, water depths were measured at nine points on the surface, eight of them 0.5 m upstream and 0.5 m downstream of each of the four inlets and 6 cm from the curb. Another depth measurement was obtained at the intersection of the streets.
| LiDAR acquisition and drainage network mapping
A good quality Digital Elevation Model (DEM) is key for setting up 2D/1D urban drainage models (Allitt, 2009) and obtaining reliable results. The methodology adopted in the present study to obtain a high-resolution 3D surface model of the whole facility is described in Sañudo et al. (2022) and uses an Intel® RealSense™ LiDAR Camera L515 sensor. The final product was a DEM with a cell resolution of 5 mm (Figure 3). In addition, the dimensions of the manholes, inlets, and grate were measured with a graded meter. Finally, break lines were obtained to force the elements of the computational mesh along them, especially at sharp slope changes such as curbs.
A mapping of the drainage network was also carried out. The x and y coordinates were obtained by triangulation from at least two reference points. The z coordinate was obtained by measuring the distance between the measurement point and a horizontal laser plane established as a reference.

FIGURE 2 Hyetographs generated in each test and location of the flow and depth measurement points.
| Flow and depth measurements
First, the roof discharges were measured following the methodology used in Sañudo et al. (2022), where a detailed analysis of the rainfall-runoff transformation processes on roofs is presented. Roof discharge was estimated by measuring the rate of variation of the water level with respect to time in a square tank located at the gutter outlet.
Next, the flow captured at each inlet was measured using plastic pre-calibrated V-notch weirs located at the outlet of each drain box (Figure 4a). An ultrasonic depth sensor was located below each inlet grate to measure the depth in the drain box. A deflector was carefully installed inside the drain box to avoid high-frequency oscillations of the free surface that would introduce excessive noise into the registered signal. It should be noted that this method of measuring the inlet discharge is non-intrusive. Similarly, the outlet discharge of the whole facility was estimated from the water level variation over a metallic triangular weir located at the channel outfall (Figure 4b). Manhole and grate discharges were obtained only under steady state conditions, due to the complexity of installing a sensor. The replicability of the tests is mainly conditioned by the ability of the rainfall simulator to generate identical rainfall intensities between runs. The replicability and consistency of the generated rainfall intensities were verified in Sañudo et al. (2022), so the methodology was considered fully repeatable. In addition, each test in Table 1 was performed twice. The comparison of the two runs showed very high agreement for all tests; for example, differences of less than 0.004 L/s were obtained for the steady flows in tests T1, T2, and T3, which guarantees the full replicability of the methodology. It must be noted that, with this measurement setup, the flows measured at the inlets and in the channel are not exactly the same as those actually entering through the inlets or exiting through the sewer outfall. This is because the drain boxes and the outlet channel store a volume of water that attenuates and delays the measured hydrograph. Consequently, a volume compensation based on the water balance in the drain boxes and in the channel was carried out to obtain the actual hydrographs using Equation (1):

Q_in = Q_out + A (∂h/∂t)    (1)

where ∂h/∂t is the depth variation in the drain box or outflow channel, Q_in and Q_out are the flows that enter or exit them, and A is the drain box inner area or outflow channel area. As an example, the volume compensation methodology for the outlet channel is shown in Figure 5: the flow stored in the outflow channel is added to the measured outflow to obtain the sewer outfall discharge. The same methodology was applied to the inlet inflows.

FIGURE 5 Methodology of data processing for obtaining the outlet hydrograph at the outfall of the sewer network. The raw signal data are converted to flow and volume (depth × area) using pre-calibrated rating curves. The compensated outlet hydrograph is obtained by adding the flow storage to the measured hydrograph.
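A minimal sketch of the volume compensation of Equation (1) is given below, assuming uniformly sampled depth and outflow series; the variable and function names are our illustrative assumptions, not those of the acquisition software.

```python
import numpy as np

def compensate_hydrograph(h, q_out, area, dt):
    """Recover the inflow hydrograph Q_in = Q_out + A * dh/dt (Equation 1)
    from the measured depth h (m) in the drain box or outflow channel and
    the measured outflow q_out (m^3/s), sampled every dt seconds."""
    dh_dt = np.gradient(np.asarray(h, dtype=float), dt)  # finite differences
    return np.asarray(q_out, dtype=float) + area * dh_dt
```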
| Framework of Iber-SWMM
Iber-SWMM (Sañudo et al., 2020) is a freely distributed 2D/1D dual drainage model which combines the 2D overland flow model of Iber (Bladé et al., 2014) and the 1D sewer network model SWMM (Rossman, 2015). The model considers the main urban hydrological and hydraulic processes during a rainfall event. All surface processes, such as the rainfall-runoff transformation and surface hydraulics, are computed with the hydraulics module of Iber, whereas SWMM is used to compute the flow in the sewer network. The model implements a bidirectional exchange of water between the surface and the sewer network: when the sewer system is not surcharged, water enters the network through the inlets, while if it is surcharged, flood water can overflow onto the surface through the manholes.
| Inlets and manholes
Inlets and manholes are the main link between the surface and the sewer system. Inlets are the main entrance of water to the network under non-surcharged conditions, while manholes act as water sources to the streets under surcharged conditions. However, under non-surcharged conditions, if the manhole covers are not sealed, water can also enter the sewer system through them. Iber-SWMM considers this possibility by including formulations that account for surface water inflow through manholes. For this purpose, the model computes the discharge capacity of inlets and manholes by means of the widely used weir (2) and orifice (3) formulae:

Q = c_w W h (2gh)^{1/2}    (2)

Q = c_o A (2gh)^{1/2}    (3)

where c_w and c_o are the weir and orifice coefficients, respectively, W and A are the perimeter and area of the element (inlet or manhole), respectively, h is the hydraulic head, and g is the acceleration of gravity.
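A minimal sketch of how such a discharge capacity could be evaluated is given below. Taking the minimum of the weir and orifice discharges, so that the weir law governs at shallow depths and the orifice law at larger ones, is a common modelling convention; that choice and the coefficient values are assumptions for illustration, not parameters reported for Iber-SWMM or for this facility.

```python
import math

def exchange_discharge(h, perimeter, area, c_w=0.67, c_o=0.60, g=9.81):
    """Discharge capacity (m^3/s) of an inlet or manhole for a surface
    water depth h (m), using the weir (2) and orifice (3) formulae.
    c_w and c_o are illustrative coefficient values (assumption)."""
    if h <= 0.0:
        return 0.0
    q_weir = c_w * perimeter * h * math.sqrt(2.0 * g * h)   # Equation (2)
    q_orifice = c_o * area * math.sqrt(2.0 * g * h)         # Equation (3)
    # Assumed convention: the smaller of the two laws governs
    return min(q_weir, q_orifice)
```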
Although some authors also consider the inclusion of the drain boxes (Dong et al., 2021), others dismiss their effect on the propagation of runoff (Martins et al., 2018). Iber-SWMM does not contemplate the modelling of drain boxes, due to the low impact that these have in real applications. For the manhole capacity, the same formulae are implemented as for the inlets.

| Roofs

Sañudo et al. (2022) showed that the use of semi-distributed models is the most efficient approach to compute the rainfall-runoff transformation on the roofs of buildings and to evaluate their outlet hydrographs. Therefore, Iber-SWMM models the roofs as individual subcatchments and implements the non-linear reservoir equation to compute the hydrograph generated by the rainfall over the roofs. The non-linear reservoir model represents the subcatchment as a shallow storage (Rossman & Huber, 2016), in which the output hydrograph Q (m^3/s) is controlled by the Manning equation (4):

Q = (W/n) (d - d_s)^{5/3} S^{1/2}    (4)

where n (s/m^{1/3}) is the Manning coefficient of the roof, W (m) is the subcatchment width, S (m/m) is the subcatchment slope, d (m) is the water depth, and d_s (m) is the storage depth that fixes the virtual reservoir capacity and sets the initial abstraction. Outflow only occurs when the depth exceeds the depression storage and the slope is different from zero.

Roof systems in urban settings can be directly connected to the sewer network or unconnected to it. In Iber-SWMM, the user can define whether the roof is connected to the sewer network, unconnected, or whether a percentage of the discharge is sent to the nearest manhole, to the street, or even infiltrates into the ground.
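As an illustration of this routing scheme, the sketch below advances the non-linear reservoir with a simple forward-Euler step. The explicit update and all parameter values are assumptions chosen for clarity, not the actual SWMM implementation or the facility's roof parameters.

```python
import math

def roof_hydrograph(rain, area, width, slope, n=0.025, d_s=0.0005, dt=1.0):
    """Outflow hydrograph (m^3/s) of a roof subcatchment modelled as a
    non-linear reservoir, with outflow given by Equation (4):
        Q = (W / n) * (d - d_s)**(5/3) * S**0.5

    rain  : iterable of rainfall intensities (m/s)
    area  : subcatchment area (m^2); width, slope: W (m) and S (m/m)
    n, d_s: Manning coefficient and depression storage (illustrative)
    """
    d, q_out = 0.0, []
    for i in rain:
        q = 0.0
        # Outflow only once the depth exceeds the depression storage
        if d > d_s and slope > 0.0:
            q = (width / n) * (d - d_s) ** (5.0 / 3.0) * math.sqrt(slope)
        # Mass balance on the virtual reservoir: A * dd/dt = i*A - Q
        d = max(d + (i - q / area) * dt, 0.0)
        q_out.append(q)
    return q_out
```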
| Model setup
The street domain was discretised using a triangular unstructured mesh with an average element size of 0.05 m. The domain of the roofs, on the other hand, was discretised using a structured mesh with an element size of 0.2 m, since roofs are computed with a lumped approach and hence no fine mesh resolution is needed. The numerical mesh of the model has a total of 40 000 elements. Such an extremely high resolution is currently not feasible in practical applications because it would lead to very high computational times (Ramsauer et al., 2021).
FIGURE 6 Experimental mass balance computed as the error between the sum of the discharges measured at the roofs, inlets, grate, and manholes (Q_comp) and the discharge at the outfall (Q_out) under steady conditions.
The Manning coefficient was set to 0.016 on the street surface (Naves et al., 2019), 0.025 on the roofs, and 0.008 in the pipes, since the latter are made of plastic and are completely clean. It is highlighted that no initial abstraction was defined on any surface, since the experiments were performed under wet antecedent conditions. Experimental tests were carried out one after the other, leaving only short periods of time between them to avoid residual flows from one test to another. This implies that small surface irregularities were filled at the beginning of each experiment. To guarantee that both the experimental and numerical tests start from the same initial conditions, the numerical model included an initial rainy warm-up period to fill the potential surface irregularities, followed by a dry period to let the residual flows drain. The numerical simulations were carried out using a wet-dry threshold of 0.1 mm and the decoupled hydrological discretisation (DHD) scheme, suitable for surface runoff computations in urban catchments and small-scale rural basins (Cea & Bladé, 2015).
The Dynamic Wave routing model with a 1 s routing step was used in the SWMM module.
The rain maps used were those obtained through the rainfall characterisation performed in Sañudo et al. (2022). The rain maps were introduced as rasters with average rain intensities of 30.3, 54.2, and 85.0 mm/h for the three rainfalls that the simulator can generate. For simplicity, we will refer to these intensities as 30, 50, and 80 mm/h.
| RESULTS AND DISCUSSION
This section presents the experimental results for the tests of Set 1 and Set 2. The presentation and discussion of the experimental results is accompanied by numerical results, and the fit between the experimental results and the numerical simulations is also addressed. Results on the roofs were presented in Sañudo et al. (2022), so they are not included in the present study.
| Experimental discharges
In order to characterise the relative contribution of each element (roofs, inlets, manholes, and grate) to the outlet hydrograph, as well as to provide a first-level control of the experiments, an experimental mass balance was calculated from the measured hydrographs (Figure 6). The difference between the total volume of water captured by the roofs, inlets, grate, and manholes and that spilled by the sewer outfall under steady flow conditions was less than 1% in tests T1, T2, and T3. This implies that there are no uncontrolled flows in the experiments.
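This control check is simple to reproduce; the sketch below (illustrative only, with hypothetical variable names) computes the steady-state mass balance error of Figure 6.

```python
def mass_balance_error(q_roofs, q_inlets, q_grate, q_manholes, q_out):
    """Relative error (%) between the total steady discharge captured by
    the elements of the facility (Q_comp) and the outfall discharge
    (Q_out); values below 1% indicate no uncontrolled flows."""
    q_comp = q_roofs + q_inlets + q_grate + q_manholes
    return 100.0 * abs(q_comp - q_out) / q_out
```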
The manholes captured approximately 14% of the total precipitation. Although this percentage might depend on the way in which the manhole covers are placed, it is not a negligible amount, so it is important to take it into account in the numerical modelling of the experiments. Inlets captured 36% of the total precipitation. Visual observations showed that the discharge capacity of the inlets was not exceeded in the experiments, and thus all the water flow arriving at the inlets was captured. The downstream grate collected the runoff that was not collected by the inlets and manholes, which represented approximately 7% of the total precipitation. Approximately 42% of the total volume originated from the roof hydrographs. This value is directly proportional to the roof surface area, which underlines the importance of including a detailed definition of roofs for modelling purposes, especially in highly consolidated urban areas with a high percentage of buildings. These relative contributions of each element of the system remained practically the same for all the tests shown in Figure 6, with differences lower than 1% between tests.

| Inlets, manholes and outfall discharges

FIGURE 7 Comparison of the experimental and numerical discharges obtained for tests using only the rainfall simulator (Set 1).

Figure 7 shows the flow rates measured in tests T1 to T6 (Set 1) at the inlets and at the outfall, as well as the results obtained with the numerical model Iber-SWMM. Note that Inlet 1 has a small upstream contribution area, while Inlet 3 captures most of the runoff arriving from the intersection. Thus, Inlet 1 and Inlet 3 capture the lowest and the highest flow rates, respectively, the computed and measured hydrographs being proportional to their contribution areas. The agreement between the experimental and numerical results was quantified with the Nash-Sutcliffe Efficiency coefficient (NSE) and with the Mean Absolute Error (MAE). There are some differences between the observed and computed hydrographs at some inlets. This is due to small features in the topography that slightly change the flow path of the surface runoff. Nevertheless, the observed and computed outfall hydrographs show a very good fit, with an average NSE and MAE (considering the six experimental tests) of 0.93 and 0.05 L/s, respectively. This means that the numerical-experimental differences in the computed hydrographs at each inlet compensate each other when the flow converges at the facility outfall. Thus, in practice, for model calibration and validation purposes, the use of measurements at locations that receive water from several inlets is recommended, in order to avoid the effect of small topographic features that cannot be resolved with the numerical model.
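Both goodness-of-fit metrics are standard; a minimal NumPy sketch (an illustration, not the code used in the study) is:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is a perfect fit; 0 means the model
    performs no better than the mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def mae(obs, sim):
    """Mean Absolute Error, in the units of the series (here L/s)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.abs(obs - sim).mean())
```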
As noted in section 2, the inflow through the manhole covers was measured under steady flow conditions. In the numerical model, the discharge capacity of the manholes was manually calibrated, since the manholes were partially sealed. The agreement between the experimental and numerical discharges entering through the manholes is shown in Figure 8. The results show a good correlation, with an R-squared greater than 0.9 for the three tests.
Table 2 shows the volume of water drained by the different elements of the facility, estimated from the experiments and from the numerical model. The numerical-experimental agreement is very good, with the largest volume contribution coming from the roofs, followed by the inlets.
FIGURE 9 Comparison of the experimental and numerical discharges obtained for tests using only the rainfall simulator plus the runoff generators (Set 2).

The results of Set 2, in which the surface runoff is generated by the rainfall simulator and by the runoff generators, are shown in Figure 9. For this set of experiments, the water drained through the manholes was not measured, so the numerical configuration of the discharge capacity used in Set 1 was also used in Set 2 (i.e., no calibration). Again, the experimental-numerical fit is far better at the facility outfall than at the inlets. The initial discharge of the hydrographs in this case is not zero, due to the steady flow introduced by the runoff generators. The numerical model is not able to reproduce precisely how this initial steady runoff is distributed through each individual inlet. On the other hand, the prediction is good at the facility outfall, which means that the model preserves mass continuity through the whole facility. Despite these differences, in general terms the shape of the numerical hydrographs shows a satisfactory agreement with the experiments. At the facility outfall, the agreement between the numerical and experimental hydrographs is very good in all the tests, once again showing that the errors in the computed hydrographs at each inlet compensate each other at the global outfall.

FIGURE 10 Comparison of the experimental and numerical depths obtained for tests using only the rainfall simulator (Set 1).
| Water depths
The measured and modelled water depths at the street surface locations DS3, DS4, DS5, DS6, and DS7 are shown in Figure 10 for the experiments of Set 1. The average MAE is approximately 0.5 mm at DS3, DS5, and DS6, 1.2 mm at DS4, and 1.9 mm at DS7. These differences are very small considering that the magnitude of the water depth in the experiments is of the order of 1 cm. For instance, the water depth at locations DS1, DS2, DS8, and DS9 is less than 2 mm due to their small contribution areas. As already mentioned above, even if the topography was measured with very high accuracy and spatial resolution, small irregularities in the topography can lead to certain discrepancies in the experimental-numerical agreement of the water depth at some locations. The fit at location DS7 is especially poor since the numerical model does not correctly represent the initial depth at the beginning of the experiment. Notice that the initial water depth is not zero at locations DS4, DS5, and DS6, due to the fact that the tests were carried out under antecedent wet conditions, which left residual ponds, especially near the curb. This is especially notable at DS4, where the shape of the topography generates a large pond of 5 mm depth located downstream of Inlet 2.
Figure 11 shows the water depths obtained in Set 2 at locations DS2, DS3, and DS7. In this case, the initial water depth is not zero at all the locations, due to the steady flow generated by the runoff generators. The numerical-experimental agreement at control points DS2 and DS3 shows a good fit, with an average MAE of 0.6 mm and a correct representation of the shape of the depth time series. On the other hand, the results at control point DS7 present a poor fit due to the poor agreement of the initial depth condition, which implies an offset; however, the shape of the time series is reproduced correctly. This offset appears due to small features on the surface of the experimental facility that are not properly reproduced in the numerical topography and that slightly change the surface flow paths. The offset is similar to that observed at DS7 in Set 1, for the same reason (Figure 10).
| CONCLUSIONS
A large-scale urban drainage physical model equipped with a rainfall simulator has been presented, together with an experimental dataset that includes measurements of water depths and discharges at the outlet of several components of the facility (roofs, inlets, manholes, and grate). The dataset also provides a high-resolution characterisation of the rainfall input and of the geometry of the facility. This material can be used for the development, validation, and assessment of 2D/1D dual urban drainage models.

FIGURE 11 Comparison of the experimental and numerical depths obtained for tests using only the rainfall simulator plus the runoff generators (Set 2).
The numerical modelling of the experiments shows relatively poor fits to the observations when compared at specific locations of the facility, since those measurements and results are strongly influenced by irregularities in the topography or by the spatial variability of the rainfall. Nevertheless, the shape of the hydrographs measured at the inlets and at the outlets of the roofs during intermittent rainfalls is substantially attenuated at the outfall of the sewer system, and thus the numerical-experimental agreement improves significantly at the global outlet of the facility. The good fit obtained at the system outlet justifies that modelling urban processes at element scale can be a good decision when no calibration is possible and the input data are of high quality and high resolution.
The measurements show that the surface runoff that enters the sewer network through the (not-sealed) manholes is not negligible and should therefore be taken into consideration when seeking to reproduce the experiments in detail.For this facility, the relative contribution of each element was 42% for the roofs, 36% for the inlets, 14% for the manholes, and 7% for the downstream grate.
To date, no databases are available that include a detailed and controlled experimental characterization of the rainfall-runoff generation in the different elements of a large-scale urban drainage facility.
Thus, the results presented here are a significant contribution to the urban drainage community.The dataset is available in the open-access repository Zenodo (Sañudo et al., 2023).
The authors consider that future work should focus on a better understanding of the sensitivity of the numerical results to variables such as the mesh size, the spatial resolution of the topography, or the location of the inlets. The validation of the modelling approach under surcharged conditions, in order to assess manhole flooding, should also be addressed in future work.
The carryover effects of college dishonesty on the professional workplace dishonest behaviors: A systematic review
Abstract There has been a growing interest in determining whether dishonesty in college can be transferred to the professional workplace. To date, only a few studies have focused on the link between college and workplace dishonesty. This review aims to bring into the limelight the evidence-based consistency of the links between college and workplace dishonesty. Four databases were systematically scanned, and 18 articles related to dishonesty at the college and workplace levels were retrieved. Although the small number of studies in this area limits the generalizations that can be drawn from this article, the available evidence supports a considerable association between college and workplace dishonest behaviors. Academic dishonesty in college is therefore more than just a matter of immediate academic repercussions; it can also indicate potential workplace dishonesty to some degree. Rather, it amounts to a choice between producing ethical and unethical citizens, and between preserving and destroying the profession. To help maintain academic integrity, we propose multi-level intervention approaches that involve students, educators, administrators, and policymakers.
PUBLIC INTEREST STATEMENT
This article reviews whether academic dishonesty/cheating in college has the potential to be transferred to the professional workplace. It summarizes the findings of studies that focused on the relationship between college and workplace dishonest behaviors. According to the findings of the research reviewed, there is a considerable association between college and workplace dishonest behaviors. That means students who are accustomed to cheating in college are also more likely to cheat when they enter the workplace, which can be a potential problem for employers. As a result, we believe that all stakeholders should strive to ensure that students do not fall victim to the habit of cheating in college.
Introduction
Academic dishonesty is commonplace in students' academic pursuits far and wide (Premeaux, 2005; Tierney & Sabharwal, 2017). However, personal, sociological, institutional, and situational factors underpin the extent to which academic dishonesty occurs (Rettinger & Kramer, 2009; Yazici et al., 2011). Academic dishonesty, in this article, is conceptualized as a "deliberate act, in that students make a conscious decision to engage in academic dishonesty" (Anderman et al., 2017, p. 95). Academic dishonesty and academic cheating are used interchangeably in this paper, as in Yu et al. (2017). The explanation is that terms like academic cheating, academic misconduct, and academic dishonesty can be used interchangeably to refer to academic malpractice, although the expressions can vary in different contexts (Park et al., 2013). Each of the terms implies a deceptive attempt to impersonate a knowledgeable person (Anderman et al., 2017; Bloodgood et al., 2008). Studies show that most students have engaged in dishonest behaviors at some point in their academic careers (Anderman et al., 2017). What makes this a great concern is that dishonesty in academia may extend into professional workplaces (Ballantine et al., 2014; Graves & Austin, 2008; Orosz et al., 2018). For example, there is a strong relationship between the frequency of dishonesty in academia and the recurrence of deceptive behavior in the real work environment (Nonis & Swift, 2001).
Given that dishonesty in college considerably impacts workplace trustworthiness (Grym & Liljander, 2017; Guerrero-Dib et al., 2020; Ma, 2013), it is more important than ever to address this challenge. One way to achieve this goal is for scientists working in this field to bring the links between college and workplace dishonesty into the spotlight. Accordingly, this review attempts to highlight the association between college and workplace dishonesty. In doing so, it may be possible to increase the sensitivity of educational stakeholders to academic dishonesty and the determination of higher education institutions to combat dishonest behaviors.
There is a plethora of investigations into how academic dishonesty undermines the quality of education and into the factors that constitute dishonest behavior. However, only a few emerging studies have focused on the relationship between college students' academic cheating and actual workplace dishonesty (Blankenship & Whitley, 2000; Klein et al., 2007; Ma, 2013; Orosz et al., 2018). This pattern has been evident especially in the fields of business and health over the past two decades. However, there is a dearth of systematic reviews assessing the consistency of the connection between college and workplace dishonest behaviors. In this paper, our primary goal is not just to summarize these scant findings and propose new ways to handle fraudulent behaviors; rather, it is to bring the issue to the forefront and make it a point of discussion.
While there is an increasing interest in understanding this relationship, there are very few studies on the long-term impact of college dishonesty on workplace dishonesty. The mass media have likewise focused on the immediate consequences of academic cheating, overlooking the intriguing enduring effects of dishonest behaviors. The immediate repercussions of academic dishonesty can be seen from two broader perspectives (Lupton et al., 2010). First, a student who frequently engages in academic dishonesty gains the advantage of higher grades without diligence. Therefore, the impartiality and effectiveness of educational assessments are compromised, and the relative capacity of a student cannot be measured. Second, academic dishonesty is presumed to decrease students' enthusiasm for achieving viable instructional objectives, both in terms of comprehending cutting-edge ideas and applying instructional objectives. Moreover, rampant academic cheating damages creativity, innovation, and academic excellence (Shon, 2006).
Scientists who work in the field of academic integrity have long recognized the enduring impact of academic dishonesty (LaDuke, 2013; Orosz et al., 2018). However, most scholars have not made a rigorous attempt to consolidate the pieces of evidence, despite the fact that a synthesis of articles is sounder than the findings of a single article (Stern et al., 2014). The intention of this review was, therefore, to bring into the spotlight whether college dishonesty extends to the workplace. By drawing the attention of educators, policymakers, and politicians to this issue, this paper can be used to plan, develop, and monitor strategies for reducing academic dishonesty in colleges. To guide the review of how consistent college and workplace dishonesty are, we have put forward the following research questions: (a) Is there a substantial association between dishonesty in college and dishonesty in the workplace? (b) Do demographic factors influence dishonesty in college and the workplace? (c) How should educational stakeholders respond to college academic dishonesty?
Methods
A search of peer-reviewed articles published from 2000 to 2020 regarding the relationship between college and workplace dishonesty was conducted. The researchers chose this time frame because studies focused on the relationship between college and workplace dishonesty have begun to appear over the last two decades. Papers were retrieved from four bibliographic databases: PubMed, ERIC, PsycINFO, and Google Scholar. These databases were selected because studies that focus on the relationship between college and workplace dishonesty are often available there. The search for papers occurred from March 2019 through April 2020. The researchers used 2020 as a cutoff to include key findings of recent studies in the field. The search was limited to high-quality peer-reviewed journals to maintain the quality of the papers.
To perform the paper search, inclusion and exclusion criteria were devised to standardize the selection of papers based on the suggestion of Butler et al. (2016), which embraces five explicit criteria: types of study, types of data, the phenomena under study, results, and demography. The inclusion criteria were intended to guard against any predispositions and ensure that the articles were chosen strictly on the basis of the predefined criteria. Alongside setting the criteria for inclusion and exclusion, the researchers chose search terms related to academic dishonesty, college dishonesty, and workplace dishonesty. The complete set of search terms involved academic dishonesty, academic fraud, academic deception, academic cheating, plagiarism, college deviant behaviors, college misconduct, college unethical behaviors, workplace cheating, workplace deceptions, workplace misconduct, real-world dishonesty, and real-world cheating. In this review, only original papers published in the English language were considered.
In the initial paper search, the researchers identified 289 papers related to the relationship between academic and workplace dishonesty. The retrieved papers were downloaded to Mendeley 1.19.5 reference management software, and 73 duplicates were removed. The abstracts of the remaining 216 papers were reviewed, and 54 were discarded as irrelevant. Ultimately, the complete body of 162 papers was reviewed, and 18 papers were deemed eligible for this review, as presented in Figure 1. In total, findings from 6,223 participants, seven countries, and four continents were synthesized and consolidated in this review regarding the relationship between college and workplace dishonesty.
Results
This review focuses on the association between college and workplace dishonest behaviors. The researchers present the data in narrative form corresponding to the specific objectives, as provided in Table 1. The results of this review generally show that academic dishonesty is a widespread phenomenon throughout the world (Blankenship & Whitley, 2000) and a common concern among scholars (Klein et al., 2007). One of the big concerns is whether academic cheating in college will be transferred into the professional workplace. This paper presents a synthesis of scientific studies regarding the transferability of college cheating into the professional workplace to address this concern.
Association between college and workplace dishonesty
While the primary objective of this article was to determine whether dishonest behaviors in college extend to the workplace, it was found that academic dishonesty in college has a considerable association with workplace dishonesty. All of the papers included in this review report a considerable association between college and workplace dishonesty, with the exception of Martin et al. (2009). To mention some of these findings, nearly equal numbers of participants reported that they had cheated in an academic setting and violated workplace policies. Bernardi et al. (2015) also stated that students who reported having cheated at least once are more likely to cheat in the future. Likewise, Graves and Austin (2008) stated that students who cheat on tests, paperwork, or both in college are more likely to engage in the misuse of property. An experimental study found that academic dishonesty in business schools predicted later behavior in a real business context; that is, students who were dishonest in college were also dishonest in real business settings (Grym & Liljander, 2017). In a self-report survey, students who engaged in academic dishonesty scored higher on real-world personal risk behaviors such as unreliability, risky driving, and illicit activities (Blankenship & Whitley, 2000). Furthermore, the findings demonstrate that college dishonesty has the potential to influence workplace professional ethics (Lawson, 2004; Lucas & Friedrich, 2005; Nonis & Swift, 2001).
Based on the conclusions of the papers reviewed, it can be argued that there is a considerable association between academic integrity in college and acceptable moral standards in the workplace. In other words, the more students get involved in dishonest behaviors in college, the more likely they are to be dishonest at work. However, this does not mean that dishonest behavior in the workplace emanates directly from dishonest behaviors in college, because the articles included in this review were predominantly based on self-report surveys, which cannot establish cause-effect relationships.
Academic dishonesty in college may be extended into the workplace for various reasons and eventually incorporated into the plans of those involved in cheating. For instance, Bernardi et al. (2015) expressed that those students who have cheated at least once in the past reported that they most likely tend to cheat in the future. Similarly, Klein et al. (2007) stated that students might carry over an attitude of dishonesty in college into the workplace, which is potentially problematic for employers. It was also reported that when faced with the challenge of cheating, many students make the same decision in the end, both in college and in the workplace. It has also been reported that students who consider any type of dishonesty a serious offense in college are more likely to behave ethically in the workplace (Guerrero-Dib et al., 2020). Likewise, it was expressed that dishonesty in college is related to deviant behaviors in the workplace (Graves & Austin, 2008). Furthermore, it has been reported that academic dishonesty in business schools has implications for later ethical behaviors in business contexts (Grym & Liljander, 2017). These pieces of evidence show that dishonest behaviors, attitudes, and propensities in college have the potential to be carried over from college to the workplace.

Figure 1. The databases and steps that the researchers undertook in searching for the sample articles, presenting the detailed process of how articles were retrieved, screened, and selected for the final review.

Table 1. Summary of the reviewed studies (excerpt).
Study: Blankenship and Whitley (2000). Country: US. Participants: 284 students of introductory psychology. Method: self-reported survey. Key findings: applying a false excuse to avoid sitting for an exam is a form of minor deviance; academic dishonesty is associated with false excuses for exam deadlines and minor deviant behaviors, although self-reported cheating and false excuses were not related; students who engaged in academic dishonesty scored higher on personal unreliability, risky driving behaviors, and illicit activities; male students reported significantly higher levels of dishonesty than female students on drug use and illegal behaviors.
Study: Bultas et al. (2017). Country: US. Participants: 310 graduate and undergraduate students. Method: online survey. Key findings: cheating occurred both in the classroom and in clinical settings; nursing students are less likely to cheat than students in other disciplines; graduating students were less tolerant of academic cheating; when dishonesty in classrooms increased, clinical dishonesty also increased; students with a positive attitude toward academic dishonesty who frequently report a high degree of academic cheating are more likely to engage in dishonest behaviors in clinical settings.
Study: Graves and Austin (2008). Country: US. Participants: 124 graduate and undergraduate business students. Method: self-report survey. Key findings: students who cheat on tests and/or homework in high school and/or college are more likely to engage in the misuse of property; cheating in an academic setting is a better indicator of a person's deviant behavior in the workplace.
Study: Grym and Liljander (2017). Country: Finland. Participants: 99 business school students. Method: experiment. Key findings: men are more likely to engage in cheating than their female counterparts; academic cheating in business schools has implications for later ethical behavior in business contexts; adding a moral reminder can reduce dishonesty in a test.
Study: Lucas and Friedrich (2005). Country: US. Participants: 87 students of introductory psychology with prior employment experience. Method: survey. Key findings: the overall employment integrity index is highly correlated with academic dishonesty; the strongest relationship existed between academic cheating and workers' compensation fraud in the real world; there is a moderate to strong relationship between academic dishonesty and overall deviant behaviors.
Researchers firmly believe that fraud in college also affects the quality of future life (Ballantine et al., 2014; Bernardi et al., 2015; Graves & Austin, 2008; Krueger, 2014; Ma, 2013). For example, students who engaged in academic dishonesty in college also scored higher on dishonest behaviors such as personal unreliability, risky driving, and illicit activities in the real world (Blankenship & Whitley, 2000). In another study, nursing students who were alleged to have cheated in college also reported having cheated in clinical settings (Bultas et al., 2017). In a similar setting, Krueger (2014) reported a significant relationship between self-reported academic dishonesty in the classroom and self-reported dishonesty in a clinical setting. Krueger further stated that more than half of the participants confessed that they had cheated both in the classroom and in clinical settings in ways that affect patients' lives. In a study involving engineering students, it was reported that a comparable number of students cheated both in college and in the workplace (Harding et al., 2004b). Even a single episode of cheating in college has been reported to have a high degree of association with cheating in the actual workplace (Bernardi et al., 2015), and dishonesty in the workplace extends to the misuse of property and unfaithfulness to the organization (Graves & Austin, 2008).
To be more specific, dishonesty in college may indicate at least one of the following five sorts of dishonest behaviors in the workplace: unethical behaviors, deviant behaviors, misuse of property, belief in cheating, and the ultimate decision to cheat. Concerning unethical behaviors, a student who cheated to earn better grades in college is more likely to demonstrate comparable unethical behaviors in the workplace (Ballantine et al., 2014, 2018; Hsiao & Yang, 2011). It was also revealed that cheating in college is a better indicator of deviant behaviors in the workplace (Blankenship & Whitley, 2000; Graves & Austin, 2008). Likewise, college cheating has a strong connection with the misuse of property in the workplace (Graves & Austin, 2008), just as beliefs about dishonesty in college tend to be carried into the workplace. Accordingly, people may tend to replicate the same beliefs about cheating both in college and in the workplace (Klein et al., 2007; Nonis & Swift, 2001). Finally, people who are accustomed to cheating in college may make the same choice when faced with the temptation to engage in cheating (Klein et al., 2007). Indeed, most of the students involved in dishonesty in college made similar ultimate decisions in the workplace.
Factors that constitute a relationship between college and workplace dishonesty
Many variables moderate college and workplace dishonest behaviors. For example, previous studies have shown that demographic factors, approaches to learning, values for integrity, attitudes toward dishonesty, propensity, academic disciplines, and institutional sensitivity can point to the trends of dishonesty. To begin with the demographic variables, male students reported significantly higher levels of academic dishonesty than female students in general (Ballantine et al., 2014; Blankenship & Whitley, 2000; Grym & Liljander, 2017). It has also been reported that male students tolerate academic dishonesty more than female students, whereas values such as idealism are associated with intolerance of academic dishonesty (Ballantine et al., 2014; Grym & Liljander, 2017). In another study, female students witnessed more dishonest behaviors than their male counterparts (Blankenship & Whitley, 2000). Male students also reported significantly higher scores on illegal behaviors and drug use in particular (Blankenship & Whitley, 2000; Lucas & Friedrich, 2005). The reason might be that female students tend to be more concerned with social needs than male students (Bernardi et al., 2015). Besides, no sex difference was reported regarding false excuses, according to Blankenship and Whitley (2000).
Grade levels and fields of study have also been linked to academic dishonesty. As students move up through academic hierarchies, they tend to tolerate academic dishonesty less (Bultas et al., 2017). This is especially true as students progress from high school to college, from a first degree to a Master's degree, and then to a Ph.D. In terms of fields of study, students in business rank first in dishonesty, followed by engineering, while students in the health sciences are found to be more ethical (Harding et al., 2004b). The learning approach is also connected to college dishonesty: in-depth learning is associated with a lower degree of academic dishonesty, while surface learning is associated with a higher degree of academic dishonesty (Ballantine et al., 2018). These variables indicate that academic dishonesty can be moderated by age, gender, learning style, and field of study to a certain degree.
It is also reported that attitudes toward dishonesty and the propensity to cheat determine college and workplace dishonesty. For example, there is a strong connection between students' propensity to participate in dishonest behaviors in college and their propensity to engage in such behaviors in the business world (Lawson, 2004). The author also reported that students' responses to beliefs about ethics in the academic setting are strongly related to their responses to various situations in a non-academic environment. Likewise, Lawson (2004, p. 195) stated that "cheaters are more likely to believe it is acceptable to lie to a potential employer on an employment application and to believe it is acceptable to use insider information when buying and selling stocks". Furthermore, some findings show that students who tend to cheat on exams or plagiarize papers are more likely to demonstrate unethical behavior in the workplace (Klein et al., 2007; Nonis & Swift, 2001). These pieces of evidence indicate that it is not only the deeds of dishonesty that will be carried over from college to the workplace, but also attitudes and propensities. Therefore, to trace academic dishonesty, it may be invaluable to consider students' propensity.
The intensity of dishonesty can be moderated by social and institutional sensitivity to dishonesty. For example, perceived college and workplace conditions have a strong potential to induce or reduce dishonest behaviors (Guerrero-Dib et al., 2020). The more institutions place a high value on integrity and track it properly, the more students tend to behave in an ethical manner (Guerrero-Dib et al., 2020). One of the methods that institutions use to monitor academic integrity is the honor code. Higher education institutions with an honor code have reduced academic dishonesty to a certain degree (Graves & Austin, 2008; Krueger, 2014). However, a student who complies with an honor code merely because of the harsh penalties for violations might show a significant change in the school setting, but this change is less likely to carry over into the workplace (Lucas & Friedrich, 2005). Additionally, if faculty discuss academic misconduct and take measures following any academic dishonesty, the degree of fraud decreases (Ballantine et al., 2018; Bultas et al., 2017; Burke et al., 2007).
Students' attitudes toward dishonesty in college can also greatly impact their actual dishonest behaviors in the future. For example, students may extend an attitude of dishonesty in college to the workplace (Klein et al., 2007). In another study, Bultas et al. (2017) stated that "as students' condemnatory attitude toward cheating increased, the frequency of dishonest behaviors in the clinical setting decreased" (p. 60). The authors further indicated that students who have a positive attitude toward dishonesty report higher frequencies of academic dishonesty and are more likely to engage in workplace dishonesty. It is also reported that students who scored higher on measures of personal unreliability scored higher on measures of dishonest behavior in the real world (Blankenship & Whitley, 2000).
How should educational stakeholders respond to cheating in college?
There should be increasing concern about college academic dishonesty because it can be carried over to the workplace. On the one hand, ensuring academic integrity safeguards educational productivity and the fair evaluation of students. On the other hand, academic dishonesty has an impact on future workplace behaviors. In other words, academic dishonesty is a threat to both immediate educational quality and sustained professional excellence. Consequently, a firm determination must be made to adopt intervention strategies that maintain academic integrity in a manner that does not endanger students' livelihood or the quality of education. By harmonizing both the quality of education and the well-being of students, this paper attempts to provide intervention strategies for responding to academic dishonesty. The intervention strategies presented in the subsequent sections were derived from the articles included in the review and a few additional articles.
Before suggesting intervention strategies, it is essential to comprehend dishonesty in the academic setting, which has a direct connection with the intervention techniques. The reasons why students cheat in college vary considerably from student to student and context to context. Indeed, several triggering factors can induce students to engage in college dishonesty (Simkin & Mcleod, 2010). In this article, such factors are classified as personal, situational, and assessment-related issues. The personal factors involve variables such as students' ethical considerations, attitudes, social standing, demographic characteristics, self-esteem, intention, achievement, and program of study (Simkin & Mcleod, 2010; Yazici et al., 2011). The situational factors involve peer pressure, parental expectations, the professor's control, classroom conditions, the desire for higher grades, and high-stakes testing (Iberahima et al., 2013; Lucas & Friedrich, 2005; Rettinger & Kramer, 2009; Simkin & Mcleod, 2010). The assessment-related factors are associated with the role of learning assessments in education: for instance, when assessments focus on grades instead of learning, students are more likely to engage in cheating (Murdock et al., 2004; Yazici et al., 2011). Generally, intervention strategies should consider all personal, situational, and assessment-related issues.
Although there is little dispute about giving particular attention to dishonest behaviors in college and taking immediate measures against them, the primary concern is whether priority should be given to behaviors or to values. Behaviors refer to the details of an individual's involvement in unauthorized practices, such as copying from another student, allowing another student to copy, using unauthorized material without the professor's permission, turning in term papers done by another student, sitting exams for another student, and so on. Values refer to the development of fundamental ethics such as honesty, trust, and responsibility. Since behaviors and values are two sides of the same coin and cannot be separated, this article holds that both should be highlighted side by side.
First, taking firm and fair measures against the details of unauthorized practices should be prioritized in the short term to deter students from dishonesty (Graves & Austin, 2008; Ma, 2013). It is also essential to focus on long-term value development, which may help to create a generation that engages less in academic and workplace dishonesty. This is particularly worthwhile because numerous studies have demonstrated that morally incompetent students are more likely to engage in dishonesty than morally competent students (Ballantine et al., 2014, 2018; Hsiao & Yang, 2011; Nonis & Swift, 2001). Instead of simply penalizing each dishonest behavior, discussing the underlying reasons for forbidding dishonest behaviors has been found to be productive (Klein et al., 2007). As a result, students may also carry over justified values and behaviors into the workplace.
There have been attempts to devise effective intervention techniques that can be implemented in academic settings. For instance, Nick and Llaguno (2015) delineated five strategies that help reduce academic dishonesty: building relationships with students, helping students comprehend the importance of academic integrity, providing students with orientation programs, having students write honor codes, and formatively evaluating students' trustworthiness. While these intervention techniques appear to work and are more humanistic, they are largely restricted to the relationship between institutions and students, overlooking the roles of several other stakeholders. In another study, increasing the admission of females into a profession was also proposed as one means of increasing academic integrity (Ballantine et al., 2014).
Consequently, this article relies on a multifaceted intervention approach that includes both personal and situational factors. Such an approach begins with students and advances up to the education authorities. To begin with the personal factors, students may engage in academic dishonesty either intentionally or unintentionally. Students' intentional involvement in academic dishonesty may be motivated by one of the following factors: a belief that dishonesty will not have a long-term impact on them or others; a desire to meet others' expectations; the belief that everyone does it; a failure to recognize what they are doing as dishonest; and a lack of time (Anderman et al., 2017; Bernardi et al., 2015; Brimble & Stevenson-Clarke, 2005; Mulisa, 2015; Mustaine & Tewksbury, 2005). Unintentionally, students may plagiarize or cheat through a poor understanding of what academic dishonesty is, mainly when they first enter college (Newton, 2015). For example, empirical evidence shows that one in three students cheats accidentally (Brimble & Stevenson-Clarke, 2005). As a result, corrective measures can be drawn from two views. At the outset, to avoid unintentional dishonesty, all acceptable and unacceptable academic practices should be explicitly communicated to students, along with the consequences of each dishonest behavior. The reason is that there is no consensus on what exactly constitutes academic dishonesty in higher education (Graves & Austin, 2008; Klein et al., 2007; Krueger, 2014).
In the case of intentional dishonesty, researchers advocate strict rule enforcement by all educational authorities, because contextual factors have a greater potential to influence students' decisions to engage in dishonest behavior than personal factors (Murdock et al., 2004). For example, the way faculty members react to dishonest behaviors may exert a powerful influence on students' academic dishonesty (Blankenship & Whitley, 2000). If students perceive that faculty members give little attention to dishonesty or show little concern about academic integrity, they are more likely to engage in dishonest behaviors (Iberahima et al., 2013; Yu et al., 2017). Therefore, faculty members and educational authorities should take prompt measures following each dishonest act without compromising any rules. However, one study reported that the present rules might not fit the dynamic nature of dishonest behaviors in college (Draper & Newton, 2017). Thus, a systematic intervention targeting college dishonesty needs consistent updating and amendment as technology and cheating techniques advance. Furthermore, long-term plans should be developed to improve students' attitudes and values toward integrity. Unless students' attitudes and values toward academic dishonesty in college are altered, they will tend to rationalize each dishonest act (Lowe et al., 2018). Consistently, Bultas et al. (2017) urged frequent and timely discussion of appropriate behaviors and values to support students' development of honesty and integrity beyond the classroom. The implication is that the faculty's response to academic dishonesty and the implementation of institutional rules have great potential to reduce academic misconduct (Ma, 2013).
Peers are also considered to have the power to initiate dishonest behaviors in college (Rettinger & Kramer, 2009). Because a student wants to be perceived as knowledgeable and to achieve higher grades (Jones, 2011), a peer is thought to have the potential to induce dishonest behaviors. In particular, the influence is more decisive in female-to-female interactions (Tsai, 2012). Hence, providing students with the skills to withstand peer pressure seems valuable, as these efforts may improve the likelihood of integrity behaviors. It is also possible to go beyond these categories and address some further ways to deal with academic dishonesty. For example, higher education has the potential to reduce dishonest behavior in college if it fully shapes and develops students' moral vision and purposes (Guerrero-Dib et al., 2020). Academic integrity can be improved by raising students' understanding of what constitutes academic dishonesty and communicating the essence of academic integrity (Carpenter et al., 2007). Similarly, students may not fully comprehend the rules on academic dishonesty; therefore, faculty must be clear about what to expect and what academic honesty policies entail, as well as serve as role models and create positive learning environments for students (Krueger, 2014).
Smith (2003) offered practical recommendations for stakeholders in academia, such as letting students know that professors are technologically savvy, indicating that detecting plagiarism is an easy process, involving tutors or writing centers to teach paraphrasing skills, redesigning coursework by dividing major research assignments into smaller, sequential steps that lead to the finished product, and investing in anti-plagiarism software. In addition to developing various technological activities to reduce the degree of academic dishonesty, some proactive measures seem valuable. For example, activities such as faculty alert, student support, and diversity management have a positive impact on reducing academic dishonesty. Faculty alert means encouraging faculty to report, as soon as possible, any students who are dishonest in their studies, so that positive steps can be taken toward a successful academic pursuit. Student support means prioritizing support for students rather than remedial measures; in particular, it encourages students to build a good foundation of study skills. Diversity management refers to services designed to give students fair access to educational resources and learning experiences regardless of their background. Similarly, it was reported that providing training about academic dishonesty reduces academic fraud by educating students about misconduct (Perkins et al., 2020).
Nowadays, anti-plagiarism software is widely used around the world to prevent academic dishonesty. However, such technologies control fraud effectively only in the digital world and cannot detect dishonest behaviors outside it. For example, frauds that are not based on digital technology, such as those involving paper exams and academic correspondence in less developed countries, are less likely to be controlled by such technologies. Even in the digital world, anti-plagiarism checkers only detect whether a manuscript is original; they cannot tell us who produced it, and they do not reduce the fraud that occurs behind the wall (Draper & Newton, 2017). Hence, in addition to the existing software, it may be worth integrating technology that monitors progressive practice and strengthens integrity behaviors. For dishonest behaviors in exam sessions, both camera-based and software-based technologies that help monitor academic honesty by exploring student movements, actions, reactions, and practices appear important. In the case of contract cheating, which is beyond the control of technology, Perkins et al. (2020) suggest collecting student writing samples, creating assignments that focus on particular material rather than generic papers, integrating critical thinking tests and personal involvement, and using alternative assessments such as testing, oral presentations, and reports.
Discussions
There has been an increasing interest in understanding whether college academic dishonesty can be extended into the workplace. The primary objective of this review was to highlight the association between college and workplace dishonesty and bring it into the spotlight. Attempts have also been made to understand the factors affecting dishonest behavior in college. However, a couple of limitations must be taken into account in attempting to answer the research questions. First, nearly all articles used in this review were based on self-report studies; consequently, it may be impossible to determine the cause-effect relationship between college and workplace dishonesty. Second, there was a scarcity of articles that directly focused on the relationship between college and workplace dishonesty. Consequently, a large number of articles could not be assembled, and the geographic distribution of the articles may not be representative, since most of the articles were from the US, which could have a considerable bearing on the worldwide generalization of this paper's outcome. The problem is not inconsistency in findings among the available publications, but rather the small number of articles focused on this topic.
Notwithstanding these limitations, it seems reasonable to state that dishonesty in college has the potential to be transferred to the workplace. This transferability implies that the more students are exposed to dishonesty in college, the more probable it is that they will participate in dishonesty later in the workplace. Besides, the more we tolerate college dishonesty, the higher the likelihood that we produce a dishonest society for the future workplace, as highlighted by Ma (2013). Higher education should therefore struggle unreservedly against dishonest behaviors, for they have the potential to indicate future workplace behaviors, as stressed by Klein et al. (2007). However, there is no guarantee that maintaining academic integrity in college will improve trustworthiness in the workplace.
Put succinctly, integrity is believed to be one of the fundamental values that employers need from the employees they recruit. It is a trait that an ethical employee demonstrates in the workplace and a foundation for establishing healthy interpersonal relationships with coworkers and employers. Dishonest behaviors, on the other hand, make it challenging to build such trust in the workplace. Thus, reducing dishonesty seems valuable for contributing competent and ethical employees to the market (Brimble & Stevenson-Clarke, 2005). Such considerations might be why Burke et al. (2007) asserted that forging ethical professionals begins in the classroom. There is also another dark side of dishonesty in college besides its transfer into the professional workplace: it has a contagious feature (Saidin & Isa, 2013) and could spread to the honest community and down to the next generations. Taken together, these results show that, unless a series of measures is taken, college dishonesty might jeopardize academic outcomes, the credibility of the professions, and workplace trustworthiness.
Given that academic dishonesty could be transferred from college to the workplace and impact future behaviors, college dishonesty appears to have more than academic implications. It even seems a matter of choice between producing ethical and unethical citizens, or between preserving and undermining the professions, because it places academic output in jeopardy. For example, to address only the tip of the iceberg: who wants treatment from a medical doctor who cheated right through college? Patients may die at the hands of such a doctor. Who wants justice from a lawyer who cheated right through college? Justice may be lost at the hands of such a lawyer. Who wants a teacher who cheated right through college? Such a teacher will fail the students. If dishonesty in college continues unchecked, we are in effect choosing to produce unethical citizens who harm future generations. Therefore, we must give special attention to curbing dishonesty in college, which may in turn influence ethical behaviors in the workplace.
Highlighting the need for rigorous and consistent intervention in college dishonesty, it is essential to note that several factors can determine its magnitude. Among the determinants, students' sex, age, grade level, propensity, field of study, learning style, idealism versus realism, and pedagogical approaches can be mentioned. For example, there is evidence that academic dishonesty is higher for males than for females (Ballantine et al., 2014; Blankenship & Whitley, 2000; Grym & Liljander, 2017). Ballantine et al. (2014) argue that increasing the admission of females to a given field of study, such as accounting, increases integrity in the workplace. Although this intervention technique seems sound in a specific discipline such as accounting, it appears defective across entire fields of study and in traditionally male-dominated careers. In contrast, in another study, Ip et al. (2018) stated that there is no statistically significant gender difference in engaging in various forms of dishonest behavior in college. The reason could be that contextual factors can influence academic dishonesty. While it appears cumbersome to address each determinant factor, what should not be overlooked is the learning approach, such as in-depth versus surface learning. Indeed, if teachers use strategic and in-depth learning approaches instead of surface learning, there is an opportunity to reduce academic dishonesty in college, as highlighted by Ballantine et al. (2018). However, it remains challenging to assess how well each student has adopted strategic and in-depth learning in real classrooms.
Considering the consequences of college dishonesty and the factors that determine its magnitude, formulating intervention strategies seems invaluable. Regarding intervention techniques, several rules and regulations have been developed and implemented across the globe to avoid dishonesty. However, dishonesty remains one of the major challenges in the academic setting. The reason is perhaps that intervention strategies such as rules and regulations are primarily focused on specific targets, such as students. This paper therefore favors multi-level intervention techniques involving students, educators, administrators, and policymakers.
Fundamentally, a series of actions needs to be taken to improve students' conception of dishonest behaviors and their short-term and long-term consequences, as many students do not view such behaviors as violations of academic integrity (Burgason et al., 2019). As a result, students may become less likely to engage in such dishonest behavior. In particular, the development of value orientation appears very valuable for increasing students' integrity (Hsiao & Yang, 2011). As immediate actors in the educational setting, educators may focus their intervention strategies on detailing and deterring unauthorized behaviors and on positioning themselves as role models for younger generations. This method may be effective, as focusing on the prevention of academic dishonesty has several advantages over taking remedial measures, as stressed by Stoesz and Yudintseva (2018). Socially responsible faculty members can thus help to reduce dishonesty in higher education institutions (Simkin & Mcleod, 2010). Besides, they should also be conscious enough of the detailed scams that students use, which may help them take further preventive measures.
Faculty members should take consistent and firm measures amidst the unresponsive bureaucracies of higher education institutions. If students recognize that professors overlook academic dishonesty and tolerate these behaviors, they are more likely to engage in dishonest behaviors, as stated by Bernardi et al. (2015). In cases of suspected academic dishonesty, in line with the suggestions of Burke et al. (2007), teachers need to take practical measures. Burke et al. presented the details of such interventions: reassigning the work, reducing grades, giving the student an F grade, reporting the incident to the concerned stakeholders, and even keeping a record of academic dishonesty in the student's academic file. Furthermore, as indicated by Klein et al. (2007), regular classroom discussions with students, organizational meetings with students, and informative orientations have the potential to reduce college dishonesty.
College professors can also reduce academic dishonesty if there is an unreserved commitment to safeguarding academic integrity, because dishonesty is contagious by nature, beginning with an individual and gradually extending to others (Saidin & Isa, 2013). Faculty laxity, such as applying the rules inconsistently, hesitating to take the measures the rules require, and being reluctant to monitor integrity, is likely to embolden students toward academic offenses (Bernardi et al., 2015). As a result, there should be no room for any dishonest behavior in academic settings. Furthermore, faculty members must take measures such as focusing on authentic assessment, re-administering tests, rejecting theses and reports, dismissing students from the course, and invalidating results, according to Ballantine et al. (2018) and Burke et al. (2007). Since cheating is continually supported by new technologies (Peytcheva-Forsyth et al., 2018), faculty members should periodically update themselves with in-depth research findings on the techniques students use to cheat.
Next to faculty members and students, the other stakeholders are educational authorities. The intervention strategies that these authorities could take include strategic planning to increase the value of academic integrity and reinforcing rules. Strengthened rules can be implemented for faculty members and students alike. Administrators must arrange platforms to promote academic integrity and establish standards to forestall academic dishonesty in college. This effort matters because students' involvement in dishonest behavior partly relies on the faculty members (Murdock et al., 2004). Hence, administrators should denounce a faculty member who overlooks students' dishonest behaviors. Additionally, policymakers need to focus on the long-term development of values to promote organizational cultures that place a high value on academic integrity, besides drafting rules of law for short-term interventions.
To recapitulate, in the 21st-century knowledge economy, academic dishonesty in college should be perceived as an alarming signal. If dishonesty is so prevalent in college, students may lack the necessary knowledge and skills to achieve learning objectives, which inevitably results in a loss of the social and economic benefits associated with that knowledge. This loss, in turn, makes the country concerned less competitive and more dependent. Therefore, educational authorities should focus on reducing technology-supported college dishonesty, and interventions may incorporate creating an ethical society in academic settings (Ma, 2013), improving students' self-esteem (Yazici et al., 2011), in-depth learning instructional approaches (Ballantine et al., 2018), assessing students' authentic performance (Murdock et al., 2004), consistent monitoring and evaluation systems, and giving media coverage to academic dishonesty.
Conclusion
This review was intended to bring into the spotlight the consistency of the association between college and workplace dishonesty and the increasing sensitivity of educational stakeholders to college dishonesty. There have been growing but limited investigations focused on the relationship between dishonesty in college and dishonesty in the workplace. Given the limited number of articles dedicated to this area, scrutiny of the findings suggests that dishonesty has the potential to be extended from college to the workplace. Indeed, dishonest behaviors in college can, to some extent, predict dishonest behaviors in the workplace. Furthermore, it has been shown that dishonesty in college has far-reaching, enduring implications beyond its immediate academic jeopardy. These dishonest behaviors are often accompanied by personal and demographic factors, such as attitudes toward dishonesty and the decision to succeed through shortcuts.
As a result, tolerating dishonest behaviors in college seems to support dishonest students who may continue to be dishonest in the future. Thus, maintaining academic integrity in college may increasingly contribute to the credibility of the workplace. To gradually maintain academic integrity, the intervention strategies at the level of students, faculty members, administrators, and policymakers need to be synergistically implemented. Accordingly, students should fully understand all the ethical and unethical academic behaviors, the short-term and long-term consequences of dishonesty, and the importance of respecting academic integrity. Faculty members should also exercise academic integrity, discuss issues of dishonesty with students, and take prompt measures as required by law. Administrators should focus on developing an organizational culture of academic integrity and strengthening the enforcement of the academic integrity rules, while policymakers should focus on policies and technologies that would help to build a culture of academic integrity. Finally, as a direction for future research, efforts should be directed toward integrating the types of policies, technologies, and pedagogical approaches that create an academic community that values academic integrity.
Funding
The authors received no direct funding for this research.
Data availability statement
In this article, four databases (PubMed, ERIC, PsycINFO, and Google Scholar) were used to collect data. Because the present data were collected from the 18 articles incorporated in the review, readers and reviewers can access the data.
Superior semicircular canal dehiscence in East Asian women with osteoporosis
Background Superior semicircular canal dehiscence (SSCD) may cause Tullio phenomenon (sound-induced vertigo) or Hennebert sign (Valsalva-induced vertigo) due to the absence of bone overlying the SSC. We document a case series of elderly East Asian women with atypical SSCD symptoms, radiologically confirmed dehiscence and concurrent osteoporosis. Methods A retrospective record review was performed on patients with dizziness, vertigo, and/or imbalance from a neurology clinic in a community health center serving the East Asian population in Boston. SSCD was confirmed by multi-detector, high-resolution CT of the temporal bone (with Pöschl and Stenvers reformations) and osteoporosis was documented by bone mineral density (BMD) scans. Results Of the 496 patients seen in the neurology clinic of a community health center from 2008 to 2010, 76 (17.3%) had symptoms of dizziness, vertigo, and/or imbalance. Five (6.6%) had confirmed SSCD by multi-detector, high-resolution CT of the temporal bone with longitudinal areas of dehiscence along the long axis of SSC, ranging from 0.4 to 3.0 mm, as seen on the Pöschl view. Two of the 5 patients experienced motion-induced vertigo, two fell due to disequilibrium, and one had chronic dizziness. None had a history of head trauma, otologic surgery, or active intracerebral disease. On neurological examination, two patients had inducible vertigo on Dix-Hallpike maneuver and none experienced cerebellar deficit, Tullio phenomenon, or Hennebert sign. All had documented osteoporosis or osteopenia by BMD scans. Three of them had definite osteoporosis, with T-scores < −2.5 in the axial spine, while another had osteopenia with a T-score of −2.3 in the left femur. Conclusions We describe an unusual presentation of SSCD without Tullio phenomenon or Hennebert sign in a population of elderly, East Asian women. There may be an association of SSCD and osteoporosis in this population. Further research is needed to determine the incidence and prevalence of this disorder, as well as the relationship of age, race, osteoporosis risk, and the development of SSCD.
Background
Superior semicircular canal dehiscence (SSCD) is an uncommon cause of vertigo and disequilibrium. The initial population described by Minor et al. [1] in 1998 was young, with a median age of 41, and they experienced vertigo or oscillopsia inducible by sound (Tullio phenomenon) and/or increased middle ear or intracranial pressure (Hennebert sign). Computed tomography (CT) of the temporal bone revealed dehiscence of bone overlying one or both of the SSCs [1,2]. The dehisced bone consequently acts as a third window in the inner ear that can disturb the normal pressure gradient directing the endolymphatic flow within the vestibular system [1,3]. Surgical repair by plugging the bony defect can result in improvement of vertigo and disequilibrium in some patients [1,2] (Figure 1).
We identified five patients with SSCD from a community health center that serves the East Asian population in Boston. Unlike previous reports, our patients were all elderly women of East Asian descent, had concomitant osteoporosis or osteopenia, and lacked Tullio phenomenon or Hennebert sign.
Methods
Patients with neurologic symptoms, including those with dizziness, vertigo, and disequilibrium, were evaluated by the neurologist E.T.W. Medical records were reviewed according to a protocol approved by an institutional review board at the clinic. Patients presenting with vertigo, dizziness, or disequilibrium with unclear etiology, as well as those suspected of having unusual inner ear pathology, underwent high-resolution multi-detector CT of the temporal bone, with 0.625 mm slices and reformations of the SSC along the longitudinal axis (Pöschl view) and transverse axis (Stenvers view). To measure the extent of dehiscence, a straight line was drawn subtending the arc of the SSC and rounding to the nearest 0.5 mm in the Pöschl view [3]. If indicated, head CT or magnetic resonance imaging was done to rule out intracerebral or cerebrovascular pathology. T-scores were extracted from bone mineral density (BMD) scans when available.
Results
Patient characteristics are summarized in Tables 1 and 2. Among the 496 individuals seen in the neurology clinic between 2008 and 2010, the median age was 58 (range 20-93) years. SSCD was found in five (1.0%) elderly Chinese women with a median age of 72 (range 57 to 85) years. The prevalence in our cohort is double that of the largest autopsy series of the general population (0.5%) reported to date [4]. Among patients with symptomatic dizziness, vertigo, and disequilibrium (n = 76), the prevalence of SSCD was substantially higher, at 6.6%. Thirty-three of the 76 patients (43.4%) had BMD scans, and 27 (35.5%) had documented osteoporosis or osteopenia.
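For transparency, most of the reported proportions follow directly from the stated counts (a quick arithmetic check, using only the numbers given above):

$5/496 \approx 0.010 = 1.0\%,\quad 5/76 \approx 0.066 = 6.6\%,\quad 33/76 \approx 0.434 = 43.4\%,\quad 27/76 \approx 0.355 = 35.5\%.$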
All five individuals had radiological evidence of SSCD confirmed by high-resolution CT of the temporal bone with Pöschl and Stenvers views (Figure 1). None had a history of head trauma, otologic surgery, or active intracerebral disease. Two had histories of intermittent motion-induced vertigo while another two fell secondary to disequilibrium. The vertigo and disequilibrium symptoms were of new onset in three, occurring within 3 months prior to neurological evaluation, while one had a 10-year history of chronic intermittent vertigo. One patient had chronic bilateral hearing loss, while another had tinnitus and vertigo-associated nausea and vomiting. None experienced Tullio phenomenon or Hennebert sign. During neurological examination, two patients had rotatory nystagmus on left-sided Dix-Hallpike maneuver. Although they may have had concurrent benign paroxysmal positional vertigo, neither patient benefited from Brandt-Daroff exercises or the Epley maneuver. The other three patients had normal oculomotor, vestibular, and cerebellar examinations.
Figure 1. High-resolution multi-detector CT of the temporal bone was performed with 0.625 mm slices and reformations of the SSC along the longitudinal axis (Pöschl view) and transverse axis (Stenvers view). To measure the extent of dehiscence, a straight line was drawn subtending the arc of the SSC and rounding to the nearest 0.5 mm in the Pöschl view [3]. In patient 1, the petrous bone overlying the right SSC was eroded, but without dehiscence, as seen on the (A) Pöschl view and (C) Stenvers view. However, there was dehiscence of the left SSC as seen on the (B) Pöschl view (arrowhead) and (D) Stenvers view (arrow).
All five women were notable for abnormal T-scores on BMD scans. T-scores within three years of SSCD diagnosis were available for four of the five patients; the only patient without T-score values had documentation of osteoporosis in clinical notes prior to receiving care in the clinic. Three of them had definite osteoporosis, with T-scores < −2.5 in the axial spine, while another had osteopenia, with a T-score of −2.3 in the left femur (Table 1). Two patients had previously used alendronate for the prevention of fractures from osteoporosis, both for periods of less than 5 years. At the time of SSCD diagnosis, two patients were taking calcium plus vitamin D supplements, while another two used multivitamins that included vitamin D. One patient was taking both ranitidine and omeprazole.
Discussion
There are major differences between our five elderly East Asian women with SSCD and cases described in the literature with respect to demographics and clinical presentation. Our patients had a median age of 75 (range 57 to 85) years, which is significantly older than the median age of 41 (range 13 to 70) years reported in Minor's review of 65 patients [1]. All of our patients are women, while other reports described either a male predominance or an equal gender distribution [5,6]. Most importantly, none of our patients experienced Tullio phenomenon or Hennebert sign, while prior reports described a high prevalence of these neuro-otologic symptoms, 88% and 63%, respectively [1]. This may be due to the relatively small dehiscences in our cohort, which were all < 2.0 mm. Yuen et al. [3] reported that patients with dehiscence of ≥ 3.0 mm experienced an average air-bone gap hearing loss of 10 decibels on pure-tone audiometry between 500 and 2000 hertz, while none experienced such hearing loss when the dehiscence was < 3.0 mm. Pfammatter et al. [6] found that large dehiscences of 2.5 mm or greater were associated with significantly more vestibulocochlear symptoms, including Tullio phenomenon and Hennebert sign. Although not performed, additional testing in our cohort, such as the enlarged, low-threshold click-evoked vestibulo-ocular reflex that aligns with the SSC [7] and the large-amplitude, low-threshold ocular vestibular evoked cervical myogenic potential [8,9], may provide confirmatory physiological evidence of SSCD.
The pathophysiological mechanism giving rise to SSCD in our cohort may be different from that in the previously reported population, in whom an earlier development of SSCD may result from congenital maldevelopment of the petrous bone [1,4]. Notably, all of our patients have osteoporosis or osteopenia, and no prior report to date has described abnormal bone mineral metabolism in patients with SSCD. Because Asians are particularly at risk of developing osteoporosis [10], our older patients could have had normal formation of the petrous bone at birth but developed SSCD later in life from prolonged osteoporotic erosion of the bone overlying the SSC. Our view is consistent with the rising prevalence of SSCD or canal thinning in the elderly population, particularly those aged 80 or older [11]. Furthermore, the slow evolution of dehiscence from osteoporosis may explain the smaller size of the bony defects, ranging from 0.5 to 2.0 mm, in our cohort at diagnosis. The protracted development of dehiscence may also allow time for compensatory adaptation by the nervous system, resulting in a paucity of Tullio phenomenon, Hennebert sign, and other severe vestibulocochlear symptoms. However, we cannot exclude the possibility that SSCD and osteoporosis are separate disease processes that co-developed in our cohort. The incidence and prevalence of SSCD are unknown in the population of patients with vertigo, dizziness, and/or disequilibrium. An autopsy series of cadaver temporal bones revealed an incidence of 0.3% in the general population without neuro-otologic deficits [12]. But among those with inner ear symptoms, the incidence may be as high as 19% in a retrospective series [12]. We found five with SSCD among 76 patients with vertigo, dizziness, and/or disequilibrium, a prevalence of 6.6%. We suspect that this prevalence is higher than in the general population because the clinic, which primarily serves the Asian population in Boston, may enrich the population at risk for the development of SSCD. Furthermore, improved CT technology may have helped the earlier detection of SSCD. Compared to the 50% sensitivity achieved when 1.0 mm-collimated CT was used with transverse and coronal images, SSCD was better visualized, with 93% sensitivity, when 0.5 mm-collimated CT was used together with reformation along the long axis of the SSC [2]. However, further research would be needed to determine the exact incidence and prevalence of SSCD among Asian patients and the general population in the United States, as well as the pathophysiological mechanisms of SSCD in these two groups.
Conclusions
We documented a case series of elderly, East Asian women with atypical SSCD, as confirmed by highresolution multi-detector CT of the temporal bone, and concurrent osteoporosis with abnormal T-scores as demonstrated by BMD scans. There may be an association between SSCD and osteoporosis in this susceptible patient population.
Good Wild Harmonic Bundles and Good Filtered Higgs Bundles
We prove the Kobayashi-Hitchin correspondence between good wild harmonic bundles and polystable good filtered $\lambda$-flat bundles satisfying a vanishing condition. We also study the correspondence for good wild harmonic bundles with the homogeneity with respect to a group action, which is expected to provide another way to construct Frobenius manifolds.
Introduction
Let X be a smooth projective variety with a simple normal crossing hypersurface H. Let L be an ample line bundle on X. We shall prove the following theorem, that is, the Kobayashi-Hitchin correspondence for good wild harmonic bundles and good filtered Higgs bundles.

Theorem 1.1 There exists an equivalence between the following objects.

• Good wild harmonic bundles on (X, H).

• $\mu_L$-polystable good filtered Higgs bundles $(\mathcal{P}_*\mathcal{V}, \theta)$ on (X, H) satisfying the vanishing condition $\int_X \text{par-c}_1(\mathcal{P}_*\mathcal{V})\, c_1(L)^{\dim X-1} = 0$ and $\int_X \text{par-ch}_2(\mathcal{P}_*\mathcal{V})\, c_1(L)^{\dim X-2} = 0$.
We shall recall the precise definitions of the objects in §2.
In [47], we have already proved that good wild harmonic bundles on (X, H) induce µ L -polystable good filtered Higgs bundles satisfying the vanishing condition. Indeed, more generally, for any complex number λ, good wild harmonic bundles induce µ L -polystable good filtered λ-flat bundles satisfying a similar vanishing condition. Note that 0-flat bundles are equivalent to Higgs bundles, and 1-flat bundles are equivalent to flat bundles in the ordinary sense. Moreover, we studied an analogue of Theorem 1.1 in the case λ = 1, i.e., the correspondence between good wild harmonic bundles and µ L -polystable good filtered flat bundles satisfying a similar vanishing condition [47,Theorem 16.1]. It was applied to the study of the correspondence between semisimple algebraic holonomic D-modules and pure twistor D-modules.
There is no new essential difficulty to prove Theorem 1.1 after our studies [43,44,45,47] on the basis of [57,58]. Moreover, in some parts of the proof, the arguments can be simplified in the Higgs case. However, because the Higgs case is also particularly important, it would be useful to explain a rather detailed proof of the correspondence.
Kobayashi-Hitchin correspondence for vector bundles
We briefly recall a part of the history of this type of correspondences. (See also [24,32,38].) For a holomorphic vector bundle E on a compact Riemann surface C, we set µ(E) := deg(E)/ rank(E), which is called the slope of E. A holomorphic bundle E is called stable (resp. semistable) if µ(E ′ ) < µ(E) (resp. µ(E ′ ) ≤ µ(E)) holds for any holomorphic subbundle E ′ ⊂ E such that 0 < rank(E ′ ) < rank(E). It is called polystable if it is a direct sum of stable subbundles with the same slope. This stability, semistability and polystability conditions were introduced by Mumford [52] for the construction of the moduli spaces of vector bundles with reasonable properties. Narasimhan and Seshadri [53] established the equivalence between unitary flat bundles and polystable bundles of degree 0 on compact Riemann surfaces.
Let $(X, \omega)$ be a compact connected Kähler manifold. For any torsion-free $\mathcal{O}_X$-module $F$, the slope of $F$ with respect to $\omega$ is defined as
$$\mu_\omega(F) := \frac{1}{\operatorname{rank} F}\int_X c_1(F)\,\omega^{\dim X - 1}.$$
If the cohomology class of $\omega$ is the first Chern class of an ample line bundle L, then $\mu_\omega(F)$ is also denoted by $\mu_L(F)$. Then, a torsion-free $\mathcal{O}_X$-module $F$ is called $\mu_\omega$-stable if $\mu_\omega(F') < \mu_\omega(F)$ holds for any saturated subsheaf $F' \subset F$ such that $0 < \operatorname{rank}(F') < \operatorname{rank}(F)$. This condition was first studied by Takemoto [64,65]. It is also called $\mu_\omega$-stability, or slope stability. Slope semistability and slope polystability are naturally defined. Bogomolov [4] introduced a different stability condition for torsion-free sheaves on connected projective surfaces, and he proved the inequality of Chern classes $c_2(E) - \frac{r-1}{2r}c_1(E)^2 < 0$ for any unstable bundle E of rank r in his sense. Gieseker [18] proved the inequality for slope semistable bundles. The inequality is called the Bogomolov-Gieseker inequality.
Inspired by these works, Kobayashi [29] introduced the concept of the Hermitian-Einstein condition for metrics of holomorphic vector bundles. Let $(E, \bar\partial_E)$ be a holomorphic vector bundle on a Kähler manifold $(X, \omega)$. Let h be a Hermitian metric of E. Let $R(h)$ denote the curvature of the Chern connection $\nabla_h = \bar\partial_E + \partial_{E,h}$ associated to h and $\bar\partial_E$. Then, h is called Hermitian-Einstein if $\Lambda R(h)^\perp = 0$, where $R(h)^\perp$ denotes the trace-free part of $R(h)$. In [29], he particularly studied the case where the tangent bundle of a compact Kähler manifold has a Hermitian-Einstein metric, and he proved that such bundles are not unstable in the sense of Bogomolov. Kobayashi [30,31] and Lübke [37] proved that a holomorphic vector bundle on a compact connected Kähler manifold satisfies the slope polystability condition if it has a Hermitian-Einstein metric. Moreover, Lübke [36] established the so-called Kobayashi-Lübke inequality for the first and the second Chern forms associated to Hermitian-Einstein metrics, which is reduced to the inequality $\operatorname{Tr}\big((R(h)^\perp)^2\big)\,\omega^{\dim X-2} \geq 0$ at the form level. It particularly implies the Bogomolov-Gieseker inequality for holomorphic vector bundles $(E, \bar\partial_E)$ with a Hermitian-Einstein metric h on compact Kähler manifolds $(X, \omega)$. Moreover, if $c_1(E) = 0$ and $\int_X \operatorname{ch}_2(E)\,\omega^{\dim X-2} = 0$ are satisfied for such $(E, \bar\partial_E, h)$, and if we impose that $\det(h)$ is flat, then the Kobayashi-Lübke inequality implies that $R(h) = 0$, i.e., $\nabla_h$ is flat.
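To spell out the last implication, here is a minimal sketch of the standard argument (curvature normalizations and constant factors are suppressed, so the constant $c$ below is a convention-dependent positive number). If $\det(h)$ is flat, then $\operatorname{Tr} R(h) = 0$, so $R(h) = R(h)^\perp$, and the Hermitian-Einstein condition gives $\Lambda R(h) = 0$. Since $\operatorname{ch}_2(E)$ is represented by a multiple of $\operatorname{Tr}(R(h)^2)$,
$$0 = c\int_X \operatorname{ch}_2(E)\,\omega^{\dim X-2} = \int_X \operatorname{Tr}\big(R(h)^2\big)\wedge\omega^{\dim X-2},$$
while the integrand is pointwise semi-positive by the Kobayashi-Lübke inequality; hence it vanishes identically, which together with $\Lambda R(h) = 0$ forces $R(h) = 0$.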
Independently, in [33], Hitchin proposed a problem asking for an equivalence between the stability condition and the existence of a metric h such that $\Lambda R(h) = 0$, under the vanishing of the first Chern class of the bundle. (See [24] for a more precise account.) It clearly captures the essential point. He also suggested possible applications of such vanishings. His problem stimulated Donaldson, whose work on this topic brought several breakthroughs to the whole of geometry.
In [13], Donaldson introduced the method of global analysis to reprove the theorem of Narasimhan-Seshadri. In [14], by using the method of the heat flow associated to the Hermitian-Einstein condition, he established the equivalence of the slope polystability condition and the existence of a Hermitian-Einstein metric for holomorphic vector bundles on any complex projective surface. The important concept of Donaldson functional was also introduced in [14].
Eventually, Donaldson [15] and Uhlenbeck-Yau [66] established the equivalence on complex projective manifolds of any dimension. Note that Uhlenbeck-Yau proved it for any compact Kähler manifolds, more generally. The correspondence goes by various names: Kobayashi-Hitchin correspondence, Hitchin-Kobayashi correspondence, Donaldson-Hitchin-Uhlenbeck-Yau correspondence, etc. In this paper, we adopt "Kobayashi-Hitchin correspondence".
As a consequence of the Kobayashi-Hitchin correspondence and the Kobayashi-Lübke inequality, we also obtain an equivalence between unitary flat bundles and slope polystable holomorphic vector bundles E satisfying $\mu_\omega(E) = 0$ and $\int_X \operatorname{ch}_2(E)\,\omega^{\dim X-2} = 0$. Note that Mehta and Ramanathan [40,41] deduced the equivalence on complex projective manifolds directly from the equivalence in the surface case due to Donaldson [14].
Higgs bundles
Such correspondences have also been studied for vector bundles equipped with additional structure; these are also called Kobayashi-Hitchin correspondences in this paper. One of the richest and most influential cases is that of Higgs bundles, pioneered by Hitchin and Simpson.
Let $(E, \bar\partial_E)$ be a holomorphic vector bundle on a compact Riemann surface C. A Higgs field of $(E, \bar\partial_E)$ is a holomorphic section $\theta$ of $\operatorname{End}(E)\otimes\Omega^1_C$. Let h be a Hermitian metric of E. We obtain the Chern connection $\bar\partial_E + \partial_{E,h}$ and its curvature $R(h)$. Let $\theta^{\dagger}_h$ denote the adjoint of $\theta$. In [23], Hitchin introduced the following equation, called the Hitchin equation (in the degree-zero case):
$$R(h) + [\theta, \theta^{\dagger}_h] = 0. \tag{1}$$
Such $(E, \bar\partial_E, \theta, h)$ is called a harmonic bundle. He particularly studied the case $\operatorname{rank} E = 2$. Among many deep results in [23], he proved that a Higgs bundle $(E, \bar\partial_E, \theta)$ has a Hermitian metric h satisfying (1) if and only if it is polystable of degree 0. Here, a Higgs bundle $(E, \bar\partial_E, \theta)$ is called stable (resp. semistable) if $\mu(E') < \mu(E)$ (resp. $\mu(E') \le \mu(E)$) holds for any holomorphic subbundle $E' \subset E$ such that $\theta(E') \subset E'\otimes\Omega^1_C$ and $0 < \operatorname{rank}(E') < \operatorname{rank}(E)$, and a Higgs bundle is called polystable if it is a direct sum of stable Higgs subbundles with the same slope. By this equivalence, together with another equivalence due to Donaldson [16] between irreducible flat bundles and twisted harmonic maps, Hitchin obtained that the moduli space of polystable Higgs bundles of degree 0 and the moduli space of semisimple flat bundles are isomorphic, and his work showed that these moduli spaces have extremely rich structures.
The higher dimensional case was studied by Simpson [57]. Note that Simpson started his study independently, motivated by a new way to construct variations of Hodge structure, which we shall mention later in §1.2.1. For a holomorphic vector bundle $(E, \bar\partial_E)$ on a complex manifold X of arbitrary dimension, a Higgs field $\theta$ is defined to be a holomorphic section of $\operatorname{End}(E)\otimes\Omega^1_X$ satisfying the additional condition $\theta\wedge\theta = 0$. Suppose that X has a Kähler form. Let h be a Hermitian metric of E. Let $F(h)$ denote the curvature of the connection $\nabla_h + \theta + \theta^{\dagger}_h$. A Hermitian metric h of a Higgs bundle $(E, \bar\partial_E, \theta)$ is called Hermitian-Einstein if $\Lambda F(h)^\perp = 0$. When X is compact, the slope stability, semistability and polystability conditions for Higgs bundles are naturally defined in terms of the slopes of Higgs subsheaves. Simpson established that a Higgs bundle $(E, \bar\partial_E, \theta)$ on a compact Kähler manifold $(X, \omega)$ has a Hermitian-Einstein metric if and only if it is slope polystable. Moreover, he generalized the Kobayashi-Lübke inequality for the Chern forms to the context of Higgs bundles, which is reduced to the inequality $\operatorname{Tr}\big((F(h)^\perp)^2\big)\,\omega^{\dim X-2} \ge 0$ at the form level for any Hermitian-Einstein metric h of $(E, \bar\partial_E, \theta)$. Here, the condition $\theta\wedge\theta = 0$ is essential. It particularly implies that if $(E, \bar\partial_E, \theta)$ on a compact Kähler manifold $(X, \omega)$ satisfies $\mu_\omega(E) = 0$ and $\int_X \operatorname{ch}_2(E)\,\omega^{\dim X-2} = 0$, then a Hermitian-Einstein metric h of $(E, \bar\partial_E, \theta)$ is a pluri-harmonic metric, i.e., the connection $\nabla_h + \theta + \theta^{\dagger}_h$ is flat. It is equivalent to the following: $R(h) + [\theta, \theta^{\dagger}_h] = 0$ and $\partial_{E,h}\theta = 0$. A Higgs bundle $(E, \bar\partial_E, \theta)$ with a pluri-harmonic metric is called a harmonic bundle. This equivalence and another equivalence due to Corlette [10] induce an equivalence between semisimple flat bundles and polystable Higgs bundles $(E, \bar\partial_E, \theta)$ satisfying $\mu_\omega(E) = 0$ and $\int_X \operatorname{ch}_2(E)\,\omega^{\dim X-2} = 0$ on any connected compact Kähler manifold. This correspondence is not only interesting in itself, but also a starting point of further investigations. Simpson pursued the comparison of flat bundles and Higgs bundles at deeper levels [59], and developed the non-abelian Hodge theory [61].
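As a reasoning aid, the two displayed equations can be read off by expanding the curvature of $\mathbb{D} := \bar\partial_E + \partial_{E,h} + \theta + \theta^{\dagger}_h$ into bidegree components (a standard computation in the conventions above):
$$\mathbb{D}^2 = \underbrace{\big(\partial_{E,h}\theta + \theta\wedge\theta\big)}_{(2,0)} + \underbrace{\big(R(h) + [\theta,\theta^{\dagger}_h] + \bar\partial_E\theta + \partial_{E,h}\theta^{\dagger}_h\big)}_{(1,1)} + \underbrace{\big(\bar\partial_E\theta^{\dagger}_h + \theta^{\dagger}_h\wedge\theta^{\dagger}_h\big)}_{(0,2)}.$$
Since $\bar\partial_E\theta = 0$ and $\theta\wedge\theta = 0$ hold by the definition of a Higgs bundle, their adjoints $\partial_{E,h}\theta^{\dagger}_h$ and $\theta^{\dagger}_h\wedge\theta^{\dagger}_h$ vanish as well, and the flatness of $\mathbb{D}$ reduces to $R(h) + [\theta,\theta^{\dagger}_h] = 0$ and $\partial_{E,h}\theta = 0$, the last remaining $(0,2)$-term $\bar\partial_E\theta^{\dagger}_h$ being the adjoint of $\partial_{E,h}\theta$.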
Filtered case
It is interesting to generalize such correspondences for objects on complex quasi-projective manifolds. We need to impose a kind of boundary condition, that is, a parabolic structure. Mehta and Seshadri [42] introduced the concept of parabolic structure of vector bundles on compact Riemann surfaces. Let C be a compact Riemann surface with a finite subset $D \subset C$. Let E be a holomorphic vector bundle on C. A parabolic structure of E at D is a tuple of filtrations $E_{|P} = F_1(E_{|P}) \supsetneq F_2(E_{|P}) \supsetneq \cdots \supsetneq F_{m(P)}(E_{|P}) \supsetneq 0$ together with weights $0 \le \alpha_1(P) < \cdots < \alpha_{m(P)}(P) < 1$ for each $P \in D$, and the parabolic degree is $\deg(E, F) := \deg(E) + \sum_{P\in D}\sum_i \alpha_i(P)\dim\big(F_i(E_{|P})/F_{i+1}(E_{|P})\big)$. We set $\mu(E, F) := \deg(E, F)/\operatorname{rank}(E)$. For any subbundle $E' \subset E$ with $0 < \operatorname{rank}(E') < \operatorname{rank}(E)$, a parabolic structure $F'$ is induced on $E'$, and $(E, F)$ is called stable if $\mu(E', F') < \mu(E, F)$ holds for all such $E'$. Semistability and polystability conditions are also defined naturally. Then, Mehta and Seshadri proved an equivalence of unitary flat bundles on $C \setminus D$ and polystable parabolic vector bundles $(E, F)$ with $\mu(E, F) = 0$ on (C, D). For some purposes, it is more convenient to replace parabolic bundles with filtered bundles introduced by Simpson [57,58]. Let $\mathcal{V}$ be a locally free $\mathcal{O}_C(*D)$-module. A filtered bundle $\mathcal{P}_*\mathcal{V}$ over $\mathcal{V}$ is a tuple of lattices $\mathcal{P}_a\mathcal{V}$ $(a = (a_P)_{P\in D} \in \mathbb{R}^D)$ such that (i) $\mathcal{P}_a\mathcal{V}(*D) = \mathcal{V}$, (ii) the restriction of $\mathcal{P}_a\mathcal{V}$ to a neighbourhood of $P \in D$ depends only on $a_P$, (iii) $\mathcal{P}_{a+n}\mathcal{V} = \mathcal{P}_a\mathcal{V}\big(\sum n_P P\big)$ for any $a \in \mathbb{R}^D$ and $n \in \mathbb{Z}^D$, (iv) for any $a \in \mathbb{R}^D$, there exists $\epsilon \in \mathbb{R}^D_{>0}$ such that $\mathcal{P}_a\mathcal{V} = \mathcal{P}_{a+\epsilon}\mathcal{V}$. Let $0$ denote $(0,\ldots,0) \in \mathbb{R}^D$. Then, $\mathcal{P}_0\mathcal{V}$ is equipped with the parabolic structure F induced by the images of $\mathcal{P}_a\mathcal{V}_{|P} \to \mathcal{P}_0\mathcal{V}_{|P}$. It is easy to observe that filtered bundles are equivalent to parabolic bundles. We set $\mu(\mathcal{P}_*\mathcal{V}) := \mu(\mathcal{P}_0\mathcal{V}, F)$ for filtered bundles $\mathcal{P}_*\mathcal{V}$.
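A rank-one toy example may clarify the dictionary between the two formulations (this illustration is not taken from the text, and sign conventions for parabolic weights vary in the literature). Take $C = \mathbb{P}^1$, $D = \{0\}$, $\mathcal{V} = \mathcal{O}_{\mathbb{P}^1}(*0)$, fix $\alpha \in\, ]0,1[$, and set
$$\mathcal{P}_a\mathcal{V} := \mathcal{O}_{\mathbb{P}^1}\big(\lfloor a+\alpha\rfloor\cdot\{0\}\big)\qquad (a \in \mathbb{R}).$$
Conditions (i)-(iv) are immediate: twisting by $\mathcal{O}(n\cdot\{0\})$ shifts the index by $n$, and the family is locally constant in $a$, jumping exactly at $a \in -\alpha+\mathbb{Z}$. Thus $\mathcal{P}_0\mathcal{V} = \mathcal{O}_{\mathbb{P}^1}$, and the filtered bundle records precisely one parabolic weight, determined by $\alpha$.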
Simpson [57,58] generalized the theorem of Mehta-Seshadri to correspondences of tame harmonic bundles, regular filtered Higgs bundles, and regular filtered flat bundles on compact Riemann surfaces. A harmonic bundle $(E, \bar\partial_E, \theta, h)$ on $C \setminus D$ is called tame on (C, D) if the closure of the spectral curve of $\theta$ in $T^*C(\log D)$ is proper over C. A regular filtered Higgs bundle $(\mathcal{P}_*\mathcal{V}, \theta)$ consists of a filtered bundle $\mathcal{P}_*\mathcal{V}$ equipped with a Higgs field $\theta : \mathcal{V} \to \mathcal{V}\otimes\Omega^1_C$ such that $\theta(\mathcal{P}_a\mathcal{V}) \subset \mathcal{P}_a\mathcal{V}\otimes\Omega^1_C(\log D)$ for any $a \in \mathbb{R}^D$. Similarly, a regular filtered flat bundle $(\mathcal{P}_*\mathcal{V}, \nabla)$ consists of a filtered bundle $\mathcal{P}_*\mathcal{V}$ equipped with a flat connection $\nabla : \mathcal{V} \to \mathcal{V}\otimes\Omega^1_C$ such that $\nabla(\mathcal{P}_a\mathcal{V}) \subset \mathcal{P}_a\mathcal{V}\otimes\Omega^1_C(\log D)$ for any $a \in \mathbb{R}^D$. Stability, semistability and polystability conditions are naturally defined in terms of the slope. Then, Simpson established the equivalence of tame harmonic bundles on (C, D), polystable regular filtered Higgs bundles $(\mathcal{P}_*\mathcal{V}, \theta)$ satisfying $\mu(\mathcal{P}_*\mathcal{V}) = 0$, and polystable regular filtered flat bundles $(\mathcal{P}_*\mathcal{V}, \nabla)$ satisfying $\mu(\mathcal{P}_*\mathcal{V}) = 0$. Note that filtered bundles express the growth order of the norms of holomorphic sections with respect to the metrics. We should mention that the study of the asymptotic behaviour of tame harmonic bundles is much harder than that of the asymptotic behaviour of unitary flat bundles. Hence, it is already hard to prove that tame harmonic bundles induce regular filtered Higgs bundles and regular filtered flat bundles.
There are several directions to generalize. One is a generalization in the context of tame harmonic bundles on higher dimensional varieties. Let X be a smooth connected projective variety with a simple normal crossing hypersurface H and an ample line bundle L. Then, there should be equivalences of tame harmonic bundles on (X, H), $\mu_L$-polystable regular filtered Higgs bundles $(\mathcal{P}_*\mathcal{V}, \theta)$ on (X, H) satisfying $\int_X \text{par-c}_1(\mathcal{P}_*\mathcal{V})\,c_1(L)^{\dim X-1} = 0$ and $\int_X \text{par-ch}_2(\mathcal{P}_*\mathcal{V})\,c_1(L)^{\dim X-2} = 0$, and $\mu_L$-polystable regular filtered flat bundles $(\mathcal{P}_*\mathcal{V}, \nabla)$ on (X, H) satisfying a similar vanishing condition. In [2], Biquard studied the case where H is smooth. In [34,35,63], Li, Narasimhan, Steer and Wren studied the correspondence for parabolic bundles without Higgs field nor flat connection. In [27], Jost and Zuo studied the correspondence between semisimple flat bundles and tame harmonic bundles. Eventually, in [43,44,45], the author obtained the satisfactory equivalences for tame harmonic bundles. Note that Donagi and Pantev proposed an attractive application of the Kobayashi-Hitchin correspondence for tame harmonic bundles to the study of geometric Langlands theory [12].
In another natural direction of generalization, we should consider more singular objects than regular filtered Higgs or flat bundles. A harmonic bundle $(E, \bar\partial_E, \theta, h)$ on $X \setminus H$ is called wild if the closure of the spectral variety of $\theta$ in the projective completion of $T^*X$ is complex analytic. For the analysis, we should impose that the spectral varieties of harmonic bundles satisfy some non-degeneracy condition along H. (See §2.6.2.) This is not essential because the condition is always satisfied once we replace X by its appropriate blow up. The notion of regular filtered Higgs (resp. flat) bundle is appropriately generalized to the notion of good filtered Higgs (resp. flat) bundle. The results of Simpson should be generalized to equivalences of good wild harmonic bundles, $\mu_L$-polystable good filtered Higgs bundles $(\mathcal{P}_*\mathcal{V}, \theta)$ satisfying $\int_X \text{par-c}_1(\mathcal{P}_*\mathcal{V})\,c_1(L)^{\dim X-1} = 0$ and $\int_X \text{par-ch}_2(\mathcal{P}_*\mathcal{V})\,c_1(L)^{\dim X-2} = 0$, and $\mu_L$-polystable good filtered flat bundles satisfying a similar vanishing condition. Sabbah [54] studied the correspondence between semisimple meromorphic flat bundles and wild harmonic bundles in the one dimensional case. Biquard and Boalch [3] obtained generalizations for wild harmonic bundles in the one dimensional case. Boalch informed the author that a wild generalization in the context of the Higgs case was not expected in those days.
As mentioned, in [47], the author studied the wild harmonic bundles on any dimensional varieties. We obtained that good wild harmonic bundles induce µ L -polystable good filtered Higgs bundles and µ L -polystable good filtered flat bundles satisfying the vanishing conditions. Moreover, we proved that the construction induces an equivalence of good wild harmonic bundles and slope polystable good filtered flat bundles satisfying the vanishing condition. Such an equivalence for meromorphic flat bundles is particularly interesting because we may apply it to prove a conjecture of Kashiwara [28] on semisimple algebraic holonomic D-modules. See [49] for more details on this application.
In [47], we did not give a proof of the equivalence for wild harmonic bundles on the Higgs side because it is rather obvious that a similar argument can work even in the Higgs case after [43,44,45,47] on the basis of [57,58]. But, because the Higgs case is also important, it would be better to have a reference in which a rather detailed proof is explained. It is one reason why the author writes this manuscript. As another reason, in the next subsection, we shall explain an application to the correspondence for good wild harmonic bundles with homogeneity, which is expected to be useful in the generalized Hodge theory.
1.2 Homogeneity with respect to group actions
Variation of Hodge structure
As mentioned, Simpson [57] was motivated by the construction of polarized variations of Hodge structure. Let us recall the definition of polarized complex variation of Hodge structure given in [57], instead of the original definition of polarized variation of Hodge structure due to Griffiths. A complex variation of Hodge structure of weight w is a graded $C^\infty$-vector bundle $V = \bigoplus_{p+q=w} V^{p,q}$ equipped with a flat connection $\nabla$ satisfying the Griffiths transversality condition, i.e., $\nabla^{1,0}(V^{p,q}) \subset A^{1,0}(V^{p,q}\oplus V^{p-1,q+1})$ and $\nabla^{0,1}(V^{p,q}) \subset A^{0,1}(V^{p,q}\oplus V^{p+1,q-1})$, where $\nabla^{1,0}$ and $\nabla^{0,1}$ denote the (1,0)- and (0,1)-parts of $\nabla$. A polarization of a complex variation of Hodge structure is a flat Hermitian pairing $\langle\cdot,\cdot\rangle$ satisfying the following conditions: (i) the decomposition $V = \bigoplus V^{p,q}$ is orthogonal with respect to $\langle\cdot,\cdot\rangle$, (ii) $(\sqrt{-1})^{p-q}\langle\cdot,\cdot\rangle$ is positive definite on $V^{p,q}$. A polarized variation of Hodge structure typically appears when we consider the Gauss-Manin connection associated to a smooth projective morphism $f : \mathcal{X} \to Y$. Namely, the family of vector spaces $H^w(f^{-1}(y))$ $(y \in Y)$ naturally induces a flat bundle on Y. With the Hodge decomposition, it is a variation of Hodge structure of weight w. A relatively ample line bundle induces a polarization on the variation of Hodge structure.
Simpson discovered a completely different way to construct polarized variations of Hodge structure. Let $(V = \bigoplus V^{p,q}, \nabla)$ be a complex variation of Hodge structure. Note that $\nabla^{0,1}$ induces holomorphic structures on the bundles $V^{p,q}$, so that $V = \bigoplus V^{p,q}$ is a graded holomorphic vector bundle. We also note that $\nabla^{1,0}$ induces linear maps $V^{p,q} \to V^{p-1,q+1}\otimes\Omega^{1,0}$, and hence $\theta : V \to V\otimes\Omega^{1,0}$. It is easy to check that $\theta$ is a Higgs field of $(V, \bar\partial_V)$. Such a graded holomorphic bundle $V = \bigoplus_{p+q=w} V^{p,q}$ with a Higgs field $\theta$ such that $\theta\cdot V^{p,q} \subset V^{p-1,q+1}\otimes\Omega^{1,0}$ is called a Hodge bundle of weight w. In general, we cannot construct a complex variation of Hodge structure from a Hodge bundle. However, Simpson discovered that if a Hodge bundle $(V = \bigoplus V^{p,q}, \theta)$ on a compact Kähler manifold satisfies the stability condition and the vanishing condition, then there exists a flat connection $\nabla$ and a flat Hermitian pairing $\langle\cdot,\cdot\rangle$ such that (i) $(V = \bigoplus V^{p,q}, \nabla)$ is a complex variation of Hodge structure which induces the Hodge bundle, (ii) $\langle\cdot,\cdot\rangle$ is a polarization of $(V = \bigoplus V^{p,q}, \nabla)$. Indeed, according to the equivalence of Simpson between Higgs bundles and harmonic bundles, there exists a pluri-harmonic metric h of $(V, \theta)$. It turns out that the flat connection $\nabla_h + \theta + \theta^{\dagger}_h$ satisfies the Griffiths transversality. Moreover, the decomposition $V = \bigoplus V^{p,q}$ is orthogonal with respect to h, and a flat Hermitian pairing $\langle\cdot,\cdot\rangle$ is constructed by the relation $(\sqrt{-1})^{p-q}\langle\cdot,\cdot\rangle_{|V^{p,q}} = h_{|V^{p,q}}$. Note that a Hodge bundle is regarded as a Higgs bundle $(V, \bar\partial_V, \theta)$ with an $S^1$-homogeneity, i.e., $(V, \bar\partial_V)$ is equipped with an $S^1$-action such that $t\circ\theta\circ t^{-1} = t\cdot\theta$ for any $t \in S^1$. It roughly means that Hodge bundles correspond to the fixed points in the moduli space of Higgs bundles with respect to the natural $S^1$-action induced by $t(E, \bar\partial_E, \theta) = (E, \bar\partial_E, t\theta)$.
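A classical example of an $S^1$-fixed Higgs bundle may be helpful (a standard illustration going back to Hitchin's work; the choice of square root $K_C^{1/2}$ is an auxiliary assumption). On a compact Riemann surface C of genus at least 2, take
$$V = K_C^{1/2}\oplus K_C^{-1/2},\qquad \theta = \begin{pmatrix} 0 & 0\\ 1 & 0\end{pmatrix},$$
where the entry 1 denotes the canonical isomorphism $K_C^{1/2}\simeq K_C^{-1/2}\otimes K_C$. This is a Hodge bundle of weight 1 with $V^{1,0} = K_C^{1/2}$, $V^{0,1} = K_C^{-1/2}$, and $\deg V = 0$. Rescaling $\theta$ by $t \in S^1$ is absorbed by the gauge transformation acting by $t^p$ on $V^{p,q}$, so the isomorphism class of $(V, \theta)$ is indeed fixed by the $S^1$-action.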
By the deformation $(E, \bar\partial_E, \alpha\theta)$ $(\alpha \in \mathbb{C}^*)$, any Higgs bundle is deformed to an $S^1$-fixed point in the moduli space, i.e., a Hodge bundle, as $\alpha \to 0$. Note that the Higgs field of the limit is not necessarily 0. Hence, by the equivalence between Higgs bundles and flat bundles, it turns out that any flat bundle is deformed to a flat bundle underlying a polarized variation of Hodge structure.
Simpson [57] particularly applied these ideas to construct uniformizations of some types of projective manifolds. He also applied it to prove that some type of discrete groups cannot be the fundamental group of any projective manifolds in [59].
TE-structure
We recall that a complex variation of Hodge structure on X induces a TE-structure in the sense of Hertling [20], i.e., a holomorphic vector bundle $\mathcal{V}$ on $\mathcal{X} := \mathbb{C}_\lambda\times X$ with a meromorphic flat connection $\nabla$ satisfying $\nabla\mathcal{V} \subset \mathcal{V}\otimes\mathcal{O}_{\mathcal{X}}(\mathcal{X}_0)\otimes\Omega^1_{\mathcal{X}}(\log\mathcal{X}_0)$, where $\mathcal{X}_0 := \{0\}\times X$. Indeed, for a complex variation of Hodge structure $(V = \bigoplus V^{p,q}, \nabla)$, $F^p(V) := \bigoplus_{p_1\ge p} V^{p_1,q_1}$ are holomorphic subbundles with respect to $\nabla^{0,1}$. Thus, we obtain a decreasing filtration of holomorphic subbundles $F^p(V)$ $(p \in \mathbb{Z})$ satisfying the Griffiths transversality $\nabla^{1,0}F^p(V) \subset F^{p-1}(V)\otimes\Omega^{1,0}$. Let $p : \mathbb{C}^*_\lambda\times X \to X$ denote the projection. We obtain the induced flat bundle $(p^*V, p^*\nabla)$. By the Rees construction, $p^*V$ is extended to a locally free $\mathcal{O}_{\mathcal{X}}$-module $\mathcal{V}$, on which $\nabla := p^*\nabla$ is a meromorphic flat connection satisfying the condition $\nabla\mathcal{V} \subset \mathcal{V}\otimes\mathcal{O}_{\mathcal{X}}(\mathcal{X}_0)\otimes\Omega^1_{\mathcal{X}}(\log\mathcal{X}_0)$. It is recognized that a TE-structure appears as a fundamental piece of interesting structures in various fields of mathematics. For instance, a TE-structure is an ingredient of a Frobenius manifold, which is important in the theory of primitive forms due to K. Saito [56], the topological field theory of Dubrovin [17], the tt*-geometry of Cecotti-Vafa [7,8], the Gromov-Witten theory, the theory of Landau-Ginzburg models, etc. For the construction of Frobenius manifolds, it is an important step to obtain TE-structures. Abstractly, TE-structure is also an important ingredient of semi-infinite variation of Hodge structure [1,9,25], TERP structure [20,21,22], integrable variation of twistor structure [55], etc. (See also [46,48].)
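To make the Rees construction explicit, here is a minimal sketch in the notation above (the lattice description is the standard one; the index k is ours):
$$\mathcal{V} := \sum_{k\in\mathbb{Z}}\lambda^{-k}\,p^*F^k(V) \;\subset\; p^*V(*\mathcal{X}_0).$$
For a local section $\lambda^{-k}s$ with $s \in F^k(V)$, Griffiths transversality gives $\nabla(\lambda^{-k}s) = \lambda^{-k}\nabla s \in \lambda^{-1}\cdot\lambda^{-(k-1)}F^{k-1}(V)\otimes\Omega^1_X$, so $\nabla$ has a pole of order one along $\mathcal{X}_0$ in the X-directions, consistent with the condition $\nabla\mathcal{V} \subset \mathcal{V}\otimes\mathcal{O}_{\mathcal{X}}(\mathcal{X}_0)\otimes\Omega^1_{\mathcal{X}}(\log\mathcal{X}_0)$.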
Homogeneous harmonic bundles
As Simpson applied his Kobayashi-Hitchin correspondence to construct complex variation of Hodge structure, we may apply Theorem 1.1 to construct TE-structure with something additional. It is done through harmonic bundles with homogeneity as in the Hodge case.
Let X be a complex manifold equipped with an $S^1$-action. Let $(E, \bar\partial_E)$ be an $S^1$-equivariant holomorphic vector bundle. Let $\theta$ be a Higgs field of $(E, \bar\partial_E)$, which is homogeneous with respect to the $S^1$-action, i.e., $t^*\theta = t^m\theta$ for some $m \neq 0$. Let h be an $S^1$-invariant pluri-harmonic metric of $(E, \bar\partial_E, \theta)$. Then, as studied in [48, §3], we naturally obtain a TE-structure. More strongly, it is equipped with a grading in the sense of [9,25], and it also underlies a polarized integrable variation of pure twistor structure of weight 0 [55]. Moreover, if there exists an $S^1$-equivariant isomorphism between $(E, \bar\partial_E, \theta, h)$ and its dual, the TE-structure is enhanced to a semi-infinite variation of Hodge structure with a grading [1,9,25]. If the $S^1$-action on X is trivial, this is the same as the construction of a variation of Hodge structure from a Hodge bundle with a pluri-harmonic metric for which the Hodge decomposition is orthogonal.
Let H be a simple normal crossing hypersurface of X. If we are given an S 1 -homogeneous good wild harmonic bundle (E, ∂ E , θ, h) on (X, H), as mentioned above, we obtain a TE-structure with a grading on X \ H. Moreover, it is extended to a meromorphic TE-structure on (X, H) as studied in [48, §3]. We obtain the mixed Hodge structure as the limit objects at the boundary, which is useful for the study of more detailed property of the TE-structure.
An equivalence
Let X be a complex projective manifold with a simple normal crossing hypersurface H and an ample line bundle L, equipped with a $\mathbb{C}^*$-action. A good filtered Higgs bundle $(\mathcal{P}_*\mathcal{V}, \theta)$ is called $\mathbb{C}^*$-homogeneous if $\mathcal{P}_*\mathcal{V}$ is $\mathbb{C}^*$-equivariant and $t^*\theta = t^m\cdot\theta$ for some $m \neq 0$. Then, we obtain the following theorem by using Theorem 1.1. (See §8.1.2 for the precise definition of the stability condition in this context.) Theorem 1.2 (Corollary 8.10) There exists an equivalence between the following objects.

• $S^1$-homogeneous good wild harmonic bundles on (X, H).

• $\mu_L$-polystable $\mathbb{C}^*$-homogeneous good filtered Higgs bundles on (X, H) satisfying the vanishing condition.
As mentioned in §1.2.3, Theorem 1.2 allows us to obtain a meromorphic TE-structure on (X, H) with a grading from a µ L -polystable C * -equivariant good filtered Higgs bundle satisfying the vanishing condition. We already applied it to a classification of solutions of the Toda equations on C * [51]. It seems natural to expect that this construction would be another way to obtain Frobenius manifolds.
Although we explained the homogeneity with respect to an $S^1$-action, Theorem 1.2 is generalized for G-homogeneous good wild harmonic bundles as explained in §8, where G is any compact Lie group.
Acknowledgement I thank Carlos Simpson for his fundamental works on harmonic bundles, on which this study is based. I thank Philip Boalch and Andy Neitzke for their kind comments on a preliminary note on the proof in the one dimensional case. I thank Claude Sabbah for discussions on many occasions and for his kindness. I thank François Labourie for his comment on the definition of wild harmonic bundles. I thank Akira Ishii and Yoshifumi Tsuchimoto for their constant encouragement. A part of this manuscript was prepared for lectures in the Oka symposium and the ICTS program "Quantum Fields, Geometry and Representation Theory". I thank the organizers for the opportunities.

Filtered sheaves

Let X denote a complex manifold with a simple normal crossing hypersurface H. Let $H = \bigcup_{i\in\Lambda} H_i$ denote the irreducible decomposition. For any $P \in H$, a holomorphic coordinate neighbourhood $(X_P, z_1, \ldots, z_n)$ around P is called admissible if $H\cap X_P = \bigcup_{i=1}^{\ell(P)}\{z_i = 0\}$. For such an admissible coordinate neighbourhood, there exists the map $\rho_P : \{1,\ldots,\ell(P)\}\to\Lambda$ determined by $H_{\rho_P(i)}\cap X_P = \{z_i = 0\}$. We obtain the map $\kappa_P : \mathbb{R}^\Lambda\to\mathbb{R}^{\ell(P)}$ by $\kappa_P(a) = (a_{\rho_P(1)},\ldots,a_{\rho_P(\ell(P))})$.
Let E be any coherent torsion free O X ( * H)-module. A filtered sheaf over E is defined to be a tuple of coherent O X -submodules P a E ⊂ E (a ∈ R Λ ) satisfying the following conditions.
• P a E( * H) = E for any a ∈ R Λ .
• $\mathcal{P}_{a+n}E = \mathcal{P}_a E\big(\sum_{i\in\Lambda} n_i H_i\big)$ for any $a \in \mathbb{R}^\Lambda$ and $n \in \mathbb{Z}^\Lambda$.
• For any $a \in \mathbb{R}^\Lambda$ there exists $\epsilon \in \mathbb{R}^\Lambda_{>0}$ such that $\mathcal{P}_{a+\epsilon}E = \mathcal{P}_a E$.
• For any P ∈ H, we take an admissible coordinate neighbourhood (X P , z 1 , . . . , z n ) around P . Then, for any a ∈ R Λ , P a E |XP depends only on κ P (a).
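As a minimal rank-one illustration of these four conditions (a toy example, not from the text): take $E = \mathcal{O}_X(*H)$, fix $\gamma = (\gamma_i)_{i\in\Lambda} \in \mathbb{R}^\Lambda$, and set
$$\mathcal{P}_a E := \mathcal{O}_X\Big(\sum_{i\in\Lambda}\lfloor a_i+\gamma_i\rfloor H_i\Big).$$
Then $\mathcal{P}_a E(*H) = E$; twisting by $\sum n_iH_i$ shifts the index $a$ by $n$; $\lfloor a_i+\epsilon_i+\gamma_i\rfloor = \lfloor a_i+\gamma_i\rfloor$ for sufficiently small $\epsilon_i > 0$, giving the last condition; and near $P \in H$ the lattice depends only on the components $\kappa_P(a)$, since the $H_j$ with $j \notin \rho_P(\{1,\ldots,\ell(P)\})$ do not meet a small $X_P$.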
For any coherent $\mathcal{O}_X(*H)$-submodule $E' \subset E$, we obtain a filtered sheaf $\mathcal{P}_*E'$ over $E'$ by $\mathcal{P}_a E' := \mathcal{P}_a E \cap E'$. If $E'$ is saturated, i.e., $E'' := E/E'$ is torsion-free, we obtain a filtered sheaf $\mathcal{P}_*E''$ over $E''$ by $\mathcal{P}_a E'' := \operatorname{Im}\big(\mathcal{P}_a E \to E''\big)$. A morphism of filtered sheaves $f : \mathcal{P}_*E_1 \to \mathcal{P}_*E_2$ is defined to be a morphism $f : E_1 \to E_2$ such that $f(\mathcal{P}_a E_1) \subset \mathcal{P}_a E_2$ for any $a \in \mathbb{R}^\Lambda$. Remark 2.1 The concept of filtered bundles on curves was introduced by Mehta and Seshadri [42] and Simpson [57,58]. A higher dimensional version was first studied by Maruyama and Yokogawa [39] for the purpose of the construction of the moduli spaces.
Reflexive filtered sheaves
A filtered sheaf P * E on (X, H) is called reflexive if each P a E is a reflexive O X -module. Note that it is equivalent to the "reflexive and saturated" condition in [43,Definition 3.17] by the following lemma.
Lemma 2.2 Suppose that $\mathcal{P}_*E$ is reflexive. Let $a \in \mathbb{R}^\Lambda$. We take $a_i - 1 < b \le a_i$, and let $a' \in \mathbb{R}^\Lambda$ be determined by $a'_j = a_j$ $(j \neq i)$ and $a'_i = b$. Then, $\mathcal{P}_a E/\mathcal{P}_{a'}E$ is a torsion-free $\mathcal{O}_{H_i}$-module.
Proof Let s be a section of $\mathcal{P}_a E/\mathcal{P}_{a'}E$ on an open set $U \subset H_i$ whose support has codimension at least one in $H_i$; we show $s = 0$. There exists an open subset $\widetilde{U} \subset X$ and a section $\widetilde{s}$ of $\mathcal{P}_a E$ on $\widetilde{U}$ such that $\widetilde{U} \cap H_i = U$ and that $\widetilde{s}$ induces s. Note that there exists $Z \subset \widetilde{U}$ of codimension 2 such that $\widetilde{s}_{|\widetilde{U}\setminus Z}$ is a section of $\mathcal{P}_{a'}E_{|\widetilde{U}\setminus Z}$. Because $\mathcal{P}_{a'}E$ is reflexive, there exists a section $s'$ of $\mathcal{P}_{a'}E$ on $\widetilde{U}$ such that $s'_{|\widetilde{U}\setminus Z} = \widetilde{s}_{|\widetilde{U}\setminus Z}$. Hence, we obtain that $\widetilde{s}$ is a section of $\mathcal{P}_{a'}E$, i.e., $s = 0$. The following lemma is clear.
Lemma 2.3
Let P * E be a reflexive filtered sheaf on (X, H). Then a coherent O X ( * H)-submodule E ′ ⊂ E is saturated if and only if the induced filtered sheaf P * E ′ is reflexive.
Filtered Higgs sheaves
Let E be a coherent torsion-free $\mathcal{O}_X(*H)$-module, and let $\theta : E \to E\otimes\Omega^1_X$ be a morphism. The morphism $\theta\wedge\theta : E \to E\otimes\Omega^2_X$ is induced by the composition of morphisms and the wedge product. If $\theta\wedge\theta = 0$ is satisfied, $\theta$ is called a Higgs field of E. When a Higgs field $\theta$ is given, a Higgs subsheaf of E means a coherent $\mathcal{O}_X(*H)$-submodule $E' \subset E$ such that $\theta(E') \subset E'\otimes\Omega^1_X$. A pair of a filtered sheaf $\mathcal{P}_*E$ over E and a Higgs field $\theta$ of E is called a filtered Higgs sheaf. It is called reflexive if $\mathcal{P}_*E$ is reflexive.
µ L -Stability condition for filtered Higgs sheaves
Let X be a connected projective manifold with a simple normal crossing hypersurface $H = \bigcup_{i\in\Lambda} H_i$. Let L be an ample line bundle.
Slope of filtered sheaves
Let $\mathcal{P}_*E$ be a filtered sheaf on (X, H) which is not necessarily a filtered bundle. Recall that par-c$_1(\mathcal{P}_*E)$ is defined as follows. Let $\eta_i$ be the generic point of $H_i$. Note that the $\mathcal{O}_{X,\eta_i}$-module $(\mathcal{P}_a E)_{\eta_i}$ depends only on $a_i$, and it is denoted by $\mathcal{P}_{a_i}(E_{\eta_i})$. We obtain the $\mathcal{O}_{H_i,\eta_i}$-modules $\mathrm{Gr}^{\mathcal{P}}_a(E_{\eta_i}) := \mathcal{P}_a(E_{\eta_i})/\mathcal{P}_{<a}(E_{\eta_i})$. Then, we have
$$\text{par-c}_1(\mathcal{P}_*E) := c_1(\mathcal{P}_a E) - \sum_{i\in\Lambda}\Big(\sum_{a_i-1 < b \le a_i} b\cdot\operatorname{rank}\mathrm{Gr}^{\mathcal{P}}_b(E_{\eta_i})\Big)[H_i],$$
which is independent of the choice of $a \in \mathbb{R}^\Lambda$. We set
$$\mu_L(\mathcal{P}_*E) := \frac{1}{\operatorname{rank} E}\int_X \text{par-c}_1(\mathcal{P}_*E)\,c_1(L)^{\dim X-1}.$$
It is called the slope of $\mathcal{P}_*E$ with respect to L. The following is proved in [43, Lemma 3.7].
Let $f : \mathcal{P}_*E^{(1)} \to \mathcal{P}_*E^{(2)}$ be a morphism of filtered sheaves which is generically an isomorphism, i.e., the induced morphism $E^{(1)}_{\eta(X)} \to E^{(2)}_{\eta(X)}$ at the generic point of X is an isomorphism. Then, $\mu_L(\mathcal{P}_*E^{(1)}) \le \mu_L(\mathcal{P}_*E^{(2)})$ holds. If the equality holds, f is an isomorphism in codimension one, i.e., there exists an algebraic subset $Z \subset X$ such that (i) the codimension of Z is at least 2, (ii) $f_{|X\setminus Z} : \mathcal{P}_*E^{(1)}_{|X\setminus Z} \to \mathcal{P}_*E^{(2)}_{|X\setminus Z}$ is an isomorphism.
A filtered Higgs sheaf $(\mathcal{P}_*E, \theta)$ is called $\mu_L$-stable (resp. $\mu_L$-semistable) if $\mu_L(\mathcal{P}_*E') < \mu_L(\mathcal{P}_*E)$ (resp. $\mu_L(\mathcal{P}_*E') \le \mu_L(\mathcal{P}_*E)$) holds for any saturated Higgs subsheaf $E' \subset E$ such that $0 < \operatorname{rank}(E') < \operatorname{rank}(E)$, where $\mathcal{P}_*E'$ denotes the induced filtered sheaf. A filtered Higgs sheaf $(\mathcal{P}_*E, \theta)$ is called $\mu_L$-polystable if it is a direct sum of $\mu_L$-stable filtered Higgs subsheaves with the same slope.
Filtered bundles in the local case
We explain the notion of filtered bundle in the local case. We shall explain it in the global case in §2.
Pull back, push-forward and descent with respect to ramified coverings in the local case
Let $\varphi : \mathbb{C}^n \to \mathbb{C}^n$ be given by $\varphi(z_1,\ldots,z_n) = (z_1^{e_1},\ldots,z_\ell^{e_\ell}, z_{\ell+1},\ldots,z_n)$ for positive integers $e_i$. Set $U := \mathbb{C}^n$ with $H_U := \bigcup_{i=1}^{\ell}\{z_i = 0\}$, and let $(U', H_{U'})$ denote the source of $\varphi$ equipped with the same divisor, so that $\varphi$ is a covering ramified along $H_U$. For a filtered bundle $\mathcal{P}_*\mathcal{V}_1$ on $(U, H_U)$, we obtain a filtered bundle $\mathcal{P}_*\mathcal{V}'_1$ over $\mathcal{V}'_1 := \varphi^*\mathcal{V}_1$, whose parabolic structure along $\{z_i = 0\}$ is obtained from that of $\mathcal{P}_*\mathcal{V}_1$ by multiplying the weights by $e_i$. We set $\varphi^*(\mathcal{P}_*\mathcal{V}_1) := \mathcal{P}_*\mathcal{V}'_1$. Thus, we obtain the pull back functor $\varphi^*$ from the category of filtered bundles on $(U, H_U)$ to the category of filtered bundles on $(U', H_{U'})$. For a filtered bundle $\mathcal{P}_*\mathcal{V}'$ on $(U', H_{U'})$, we obtain the following filtered bundle over $\varphi_*\mathcal{V}'$: $\mathcal{P}_a(\varphi_*\mathcal{V}') := \varphi_*\big(\mathcal{P}_{\kappa(a)}\mathcal{V}'\big)$, where $\kappa(a)_i = e_i a_i$ $(i \le \ell)$ and $\kappa(a)_i = a_i$ $(i > \ell)$. In this way, we obtain a functor $\varphi_*$ from the category of filtered bundles on $(U', H_{U'})$ to the category of filtered bundles on $(U, H_U)$.
Filtered bundles in the global case
We use the notation in §2.1.1. Let $\mathcal{V}$ be a locally free $\mathcal{O}_X(*H)$-module. A filtered bundle $\mathcal{P}_*\mathcal{V} = \big(\mathcal{P}_a\mathcal{V} \mid a \in \mathbb{R}^\Lambda\big)$ over $\mathcal{V}$ is a sequence of locally free $\mathcal{O}_X$-submodules $\mathcal{P}_a\mathcal{V}$ of $\mathcal{V}$ such that the following holds.
• For any $P \in H$, we take an admissible coordinate neighbourhood $(X_P, z_1, \ldots, z_n)$ around P. Then, for any $a \in \mathbb{R}^\Lambda$, $\mathcal{P}_a\mathcal{V}_{|X_P}$ depends only on $\kappa_P(a)$; we denote it by $\mathcal{P}_{\kappa_P(a)}(\mathcal{V}_{|X_P})$.
• The sequence $\big(\mathcal{P}_c(\mathcal{V}_{|X_P}) \mid c \in \mathbb{R}^{\ell(P)}\big)$ is a filtered bundle over $\mathcal{V}_{|X_P}$ in the sense of §2.3.1.
Clearly, a filtered bundle is a special type of filtered sheaf in the sense of §2.1.1.
Remark 2.7
The higher dimensional version of filtered bundles was introduced in [44] with a different formulation. See also [5,6]. In this paper, we follow Iyer and Simpson [26].
The induced bundles and filtrations
For any $I \subset \Lambda$, let $\delta_I \in \mathbb{R}^\Lambda$ be the element whose j-th component is 1 $(j \in I)$ or 0 $(j \notin I)$. We also set $H_I := \bigcap_{i\in I} H_i$. For $b \in ]a_i-1, a_i]$, let $a(i,b) \in \mathbb{R}^\Lambda$ be obtained from a by replacing the i-th component with b, and set ${}^iF_b\big(\mathcal{P}_a(\mathcal{V})_{|H_i}\big) := \operatorname{Im}\big(\mathcal{P}_{a(i,b)}(\mathcal{V})_{|H_i} \to \mathcal{P}_a(\mathcal{V})_{|H_i}\big)$. It is naturally regarded as a locally free $\mathcal{O}_{H_i}$-module. Moreover, it is a subbundle of $\mathcal{P}_a(\mathcal{V})_{|H_i}$. In this way, we obtain a filtration ${}^iF$ of $\mathcal{P}_a(\mathcal{V})_{|H_i}$ indexed by $]a_i-1, a_i]$.
We obtain the induced filtrations ${}^iF$ of $\mathcal{P}_a\mathcal{V}_{|H_I}$ for $i \in I$. Let $a_I \in \mathbb{R}^I$ denote the image of a by the projection $\mathbb{R}^\Lambda \to \mathbb{R}^I$. Set $]a_I - \delta_I, a_I] := \prod_{i\in I}\,]a_i-1, a_i]$. For any $b \in ]a_I-\delta_I, a_I]$, we set ${}^IF_b := \bigcap_{i\in I}{}^iF_{b_i}\big(\mathcal{P}_a\mathcal{V}_{|H_I}\big)$. By the condition of filtered bundles, the following compatibility condition holds.
• Let P be any point of $H_I$. There exists a neighbourhood $X_P$ of P in X and a decomposition $\mathcal{P}_a(\mathcal{V})_{|X_P} = \bigoplus_{c\in ]a_I-\delta_I, a_I]}\mathcal{Q}_c$ such that the following holds for any $c \in ]a_I-\delta_I, a_I]$: ${}^IF_c\big(\mathcal{P}_a\mathcal{V}_{|H_I\cap X_P}\big) = \bigoplus_{b\le c}\mathcal{Q}_{b|H_I\cap X_P}$. For any $c \in ]a_I-\delta_I, a_I]$, we obtain the following locally free $\mathcal{O}_{H_I}$-modules: ${}^I\mathrm{Gr}^F_c := {}^IF_c\Big/\sum_{b\lneq c}{}^IF_b$. Here, $b \lneq c$ means "$b \le c$ and $b \neq c$". We introduce some notation. We set $\operatorname{Par}(\mathcal{P}_*\mathcal{V}, i) := \big\{b \in ]a_i-1, a_i] \mid {}^i\mathrm{Gr}^F_b \neq 0\big\}$, the set of parabolic weights of $\mathcal{P}_*\mathcal{V}$ along $H_i$.
First and second Chern characters for filtered bundles
Let P_*V be a filtered bundle over (X, H). Take any a ∈ R^Λ. We set
par-c_1(P_*V) := c_1(P_a V) − Σ_{i∈Λ} Σ_{b ∈ ]a_i−1, a_i]} b · rank( {}^iGr^F_b(P_a V_{|H_i}) ) [H_i].
Here, [H_i] denotes the cohomology class induced by H_i. It is easy to see that par-c_1(P_*V) is independent of the choice of a ∈ R^Λ. We also obtain an element par-ch_2(P_*V) ∈ H^4(X, R), obtained by correcting ch_2(P_a V) with the contributions of the induced filtrations along the H_i and their intersections. Here, ι_{i*} : H^*(H_i, R) → H^{*+2}(X, R) denotes the Gysin map, and [C] denotes the cohomology class induced by a cycle C.
Remark 2.8
The higher Chern character for filtered sheaves was defined by Iyer and Simpson [26] in a systematic way. In this paper, we adopt the definition of par-ch 2 (P * V) in [43].
Good filtered Higgs bundles
Let X be a complex manifold with a simple normal crossing hypersurface H = i∈Λ H i .
Good set of irregular values at P
Let P be any point of H. We take an admissible holomorphic coordinate neighbourhood (X_P, z_1, …, z_n) around P. If f ∈ O_X(*H)_P is expressed as f = g ∏_{i=1}^{ℓ(P)} z_i^{n_i} such that (i) g ∈ O_{X,P}, (ii) g(P) ≠ 0, then we set ord(f) := (n_1, …, n_{ℓ(P)}). Otherwise, ord(f) is not defined. For any a ∈ O_X(*H)_P/O_{X,P}, we take a lift ã ∈ O_X(*H)_P. If ord(ã) is defined, we set ord(a) := ord(ã). Otherwise, ord(a) is not defined. Note that it is independent of the choice of a lift ã.
Let I P ⊂ O X ( * H) P /O X,P be a finite subset. We say that I P is a good set of irregular values if the following holds.
• ord(a) is defined for any a ∈ I P .
• ord(a − b) is defined for any a, b ∈ I_P with a ≠ b.
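For instance (a hypothetical worked example under the conventions above), take X = C², H = {z_1 z_2 = 0}, P the origin, so ℓ(P) = 2. Then
a = z_1^{-2} z_2^{-1}, b = z_1^{-2}: ord(a) = (−2, −1), ord(b) = (−2, 0), ord(a − b) = (−2, −1),
since a − b = z_1^{-2} z_2^{-1} (1 − z_2) and 1 − z_2 does not vanish at P; hence {a, b} is a good set of irregular values. By contrast, {z_1^{-1}, z_2^{-1}} is not good: z_1^{-1} − z_2^{-1} = (z_2 − z_1) z_1^{-1} z_2^{-1}, and z_2 − z_1 vanishes at P without being a unit, so ord(z_1^{-1} − z_2^{-1}) is not defined.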
Good filtered Higgs bundles
Let V be a locally free O X ( * H)-module with a Higgs field θ. Let P * V be a filtered bundle over V. We say that (P * V, θ) is unramifiedly good at P if the following holds.
• There exist a good set of irregular values I_P ⊂ O_X(*H)_P/O_{X,P}, an admissible holomorphic coordinate neighbourhood (X_P, z_1, …, z_n) around P, and a decomposition (P_*V, θ)_{|X_P} = ⊕_{a∈I_P} (P_*V_a, θ_a) such that θ_a − dã · id_{V_a} are logarithmic with respect to the lattice P_c V_a for any c ∈ R^{ℓ(P)} and a ∈ I_P, i.e., (θ_a − dã · id_{V_a})(P_c V_a) ⊂ P_c V_a ⊗ Ω¹_{X_P}(log H_P). Here, ã denote lifts of a to O_X(*H)_P.
We say that (P * V, θ) is good at P if the following holds.
• There exist a neighbourhood X P of P in X and a covering map ϕ P : X ′ P −→ X P ramified over H P = H ∩ X P such that ϕ * P (P * V, θ) is unramifiedly good at any point of ϕ −1 P (H P ). We say that (P * V, θ) is good (resp. unramifiedly good) if it is good (resp. unramifiedly good) at any point of H.
Prolongation of holomorphic vector bundles with a Hermitian metric
Let X be any complex manifold with a simple normal crossing hypersurface H. Let (E, ∂̄_E) be a holomorphic vector bundle on X ∖ H equipped with a Hermitian metric h. For any a ∈ R^Λ and any open subset U ⊂ X, let P^h_a E(U) be the space of holomorphic sections of E_{|U∖H} satisfying the following condition.
• For any point P of U ∩ H, take a small admissible holomorphic coordinate neighbourhood (X_P, z_1, …, z_n) around P such that X_P is relatively compact in U. Set c = κ_P(a). Then, |s|_h = O( ∏_{i=1}^{ℓ(P)} |z_i|^{−c_i − ε} ) on X_P ∖ H for any ε > 0.
A sufficient condition to be filtered bundles
We mention a useful sufficient condition for P^h_*E to be a filtered bundle, although we do not use it in this paper. Let g_{X∖H} be a Kähler metric satisfying the following condition [11]:
• For any P ∈ H, we take an admissible holomorphic coordinate neighbourhood (X_P, z_1, …, z_n) around P such that X_P is isomorphic to a polydisc by the coordinates, on which g_{X∖H} is mutually bounded with a Poincaré-type metric.
A Hermitian metric h of (E, ∂̄_E) is called acceptable if the curvature of the Chern connection is bounded with respect to h and g_{X∖H}. It is known that if h is acceptable, then P^h_*E is a filtered bundle. A Higgs bundle (E, ∂̄_E, θ) with a pluri-harmonic metric h is called a harmonic bundle.
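The Poincaré-type model metric alluded to here (a standard choice, stated as an assumption for concreteness) on Δ^n ∖ {z_1 ⋯ z_ℓ = 0} is
g_P = Σ_{i=1}^{ℓ} dz_i dz̄_i / ( |z_i|² (log |z_i|²)² ) + Σ_{i=ℓ+1}^{n} dz_i dz̄_i,
and acceptability of h then amounts to the boundedness of the Chern curvature measured against g_P.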
Wild harmonic bundles
Let X be a complex manifold with a simple normal crossing hypersurface H = i∈Λ H i . Let (E, ∂ E , θ, h) be a harmonic bundle on X \ H. It is called wild on (X, H) if the following holds.
• Let Σ_θ ⊂ T^*(X ∖ H) denote the spectral cover of θ, i.e., Σ_θ denotes the support of the coherent O_{T^*(X∖H)}-module induced by (E, ∂̄_E, θ). Then, the closure of Σ_θ in the projective completion of T^*X is complex analytic.
A wild harmonic bundle (E, ∂ E , θ, h) is called unramifiedly good at P ∈ H if the following holds.
• There exist a good set of irregular values I_P ⊂ O_X(*H)_P/O_{X,P}, a neighbourhood X_P, and a decomposition (E, ∂̄_E, θ)_{|X_P∖H} = ⊕_{a∈I_P} (E_a, ∂̄_{E_a}, θ_a) such that θ_a − dã · id_{E_a} are tame, where ã denote lifts of a.
A wild harmonic bundle (E, ∂̄_E, θ, h) is called good at P ∈ H if the following holds.
• There exist a neighbourhood X_P and a covering ϕ_P : X′_P → X_P ramified along H_P := H ∩ X_P such that the pull back ϕ_P^*(E, ∂̄_E, θ, h) is unramifiedly good wild at any point of ϕ_P^{-1}(H_P).
We say that (E, ∂ E , θ, h) is good wild (resp. unramifiedly good wild) on (X, H) if it is good wild (resp. unramifiedly good wild) at any point of H. Note that not every wild harmonic bundle on (X, H) is good on (X, H). But, the following is known [50, Corollary 15.2.8].
Theorem 2.12 Let (E, ∂̄_E, θ, h) be a wild harmonic bundle on (X, H). Then, there exists a proper birational morphism ϕ : X′ → X such that (i) H′ := ϕ^{-1}(H) is a simple normal crossing hypersurface, (ii) the pull back of (E, ∂̄_E, θ, h) is good wild on (X′, H′).
The following is one of the fundamental theorems in the study of wild harmonic bundles [47, Theorem 7.4.3].
The following is a consequence of the norm estimate for good wild harmonic bundles [47, Theorem 11.7.2].
Theorem 2.14 Let h_1 be another pluri-harmonic metric of (E, ∂̄_E, θ) such that P^{h_1}_*E = P^h_*E. Then, h and h_1 are mutually bounded.
Prolongation of good wild harmonic bundles in the projective case
Suppose that X is projective and connected. Let L be any ample line bundle on X. The following is proved in [47, Proposition 13.6.1, Proposition 13.6.4].
Main existence theorem in this paper
Let X be a smooth connected projective complex manifold with a simple normal crossing hypersurface H. Let L be any ample line bundle on X. Let (P * V, θ) be a good filtered Higgs bundle on (X, H). Let (E, ∂ E , θ) be the Higgs bundle obtained as the restriction of (P * V, θ) to X \ H.
Theorem 2.16 Suppose that (P_*V, θ) is µ_L-polystable, and that the following vanishings hold:
∫_X par-c_1(P_*V) c_1(L)^{dim X − 1} = 0, ∫_X par-ch_2(P_*V) c_1(L)^{dim X − 2} = 0.
Then, there exists a pluri-harmonic metric h of (E, ∂̄_E, θ) such that P^h_*E = P_*V. We proved a similar theorem for good filtered flat bundles in [47, Theorem 16.1.1]. Theorem 2.16 can be proved similarly and more easily on the basis of the fundamental theorem of Simpson [57] after [43,45]. We shall explain a proof in §3–7. Note that the one dimensional case is due to Biquard-Boalch [3].
Corollary 2.17
We have the equivalence of the following objects.
• Good wild harmonic bundles on (X, H).
• µ_L-polystable good filtered Higgs bundles (P_*V, θ) on (X, H) satisfying the vanishings of Theorem 2.16.
Hermitian-Einstein metrics of Higgs bundles
Let Y be a Kähler manifold with a Kähler form ω. Let (E, ∂̄_E, θ) be a Higgs bundle on Y with a Hermitian metric h. We set D¹_h := ∇_h + θ + θ†_h, where ∇_h denotes the Chern connection of (E, ∂̄_E, h) and θ†_h denotes the adjoint of θ with respect to h, and we let F(h) denote the curvature of D¹_h. Recall that h is called a Hermitian-Einstein metric of the Higgs bundle if Λ_ω F(h)^⊥ = 0, where F(h)^⊥ denotes the trace-free part of F(h), and Λ_ω denotes the adjoint of the multiplication of ω (see [32, §3.2]). The following is a generalization of the Kobayashi-Lübke inequality to the context of Higgs bundles due to Simpson [57, Proposition 3.4].
Proposition 3.1 (Simpson)
If h is a Hermitian-Einstein metric, there exists C(n) > 0 depending only on n = dim Y such that the Chern-Weil integrand representing ( 2 ch_2(E) − (rank E)^{-1} c_1(E)² ) ω^{n−2} with respect to h is bounded above by −C(n) |F(h)^⊥|²_{h,ω} ω^n. Corollary 3.2 (Simpson) If Y is compact, and if a Higgs bundle (E, ∂̄_E, θ) on Y has a Hermitian-Einstein metric h, then the Bogomolov-Gieseker type inequality holds:
∫_Y ( ch_2(E) − (1/(2 rank E)) c_1(E)² ) ω^{n−2} ≤ 0.
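Equivalently (a routine reformulation, using ch_2 = (c_1² − 2c_2)/2 and writing r = rank E):
∫_Y ( 2r c_2(E) − (r − 1) c_1(E)² ) ω^{n−2} ≥ 0,
which is the classical form of the Bogomolov-Gieseker inequality.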
Rank one case
Let X be an n-dimensional smooth connected projective variety with a simple normal crossing hypersurface H. Let ω be a Kähler form. Let Λ_ω denote the adjoint of the multiplication of ω. We have the irreducible decomposition H = ⋃_{i∈Λ} H_i. The following proposition is standard.
Proposition 3.3 There exists a Hermitian metric h of the line bundle E = V_{|X∖H} such that h ∏_{i∈Λ} |σ_i|^{2a_i}_{g_i} is a Hermitian metric of P_a(V) of C^∞-class. Such a metric is unique up to the multiplication of positive constants. Moreover, if par-c_1(P_*E) = 0, then R(h) = 0 holds, and hence h is a pluri-harmonic metric of (E, θ).
Proof Note that F(h) = R(h) holds in the rank one case, and that a Hermitian metric h of E is a pluri-harmonic metric of (E, ∂̄_E, θ) if and only if R(h) = 0. Let h′_0 be a C^∞-metric of P_a E. We obtain the metric h_0 of E induced by h′_0 and the factors |σ_i|^{−2a_i}_{g_i}. Suppose that par-c_1(P_*E) = 0. Because the cohomology class of R(h_0) is 0, there exists an R-valued C^∞-function ϕ_0 such that R(h_0 e^{ϕ_0}) = 0 by the standard ∂∂̄-lemma. The metric h = h_0 e^{ϕ_0} has the desired property. The uniqueness is clear.
β-subobject and socle for reflexive filtered Higgs sheaves
Let X be a complex projective connected manifold with a simple normal crossing hypersurface H = i∈Λ H i and an ample line bundle L.
β-subobjects
Let (P * V, θ) be a reflexive filtered Higgs sheaf on (X, H). For any A ∈ R, let S(P 0 V, A) denote the family of saturated subsheaves F of P 0 V such that deg L (F ) ≥ −A and that F ( * H) is a Higgs subsheaf of V. Any F ∈ S(P 0 V, A) induces a reflexive filtered Higgs sheaf P * (F ( * H)) by P c (F ( * H)) := P c V ∩ F ( * H) for any c ∈ R Λ . We set f A (F ) := µ L (P * (F ( * H))). Thus, we obtain a function f A on S(P 0 V, A).
In particular, f_A attains a maximum.
Proof According to [19, Lemma 2.5], S(P_0 V, A) is bounded. Hence, it is easy to see that f_A takes only finitely many values on S(P_0 V, A), and hence f_A attains a maximum. It is standard that any reflexive filtered Higgs sheaf has a β-subobject, i.e., the following holds. Proposition 3.5 For any reflexive filtered Higgs sheaf (P_*V, θ), there uniquely exists a non-zero Higgs subsheaf V_0 ⊂ V such that the following holds for any non-zero reflexive Higgs subsheaf V′ ⊂ V: µ_L(P_*V′) ≤ µ_L(P_*V_0), and V′ ⊂ V_0 whenever µ_L(P_*V′) = µ_L(P_*V_0).
Proof There exists N > 0 such that deg_L(F) ≤ N rank(F) holds for any saturated Higgs subsheaf F ⊂ P_0 V, where θ′ denotes the Higgs field induced by θ.
Suppose that the Higgs subsheaves V_i ⊂ V (i = 1, 2) satisfy µ_L(P_*V_i) = B_0. We obtain the subsheaf V_1 + V_2 ⊂ V. Then, the claim of the lemma is clear.
Suppose that K ≠ 0, i.e., I ≠ 0. Because I is a subsheaf of V^{(2)}, we obtain a filtered sheaf P_*I induced by P_*V^{(2)}. Because I ≃ K, we also obtain a filtered sheaf P′_*I over I induced by P_*K. Then, we obtain µ_L(P′_*I) ≤ µ_L(P_*I). Because (P_*V^{(2)}, θ) is µ_L-stable and because I ≠ 0, we obtain that rank(I) = rank V^{(2)}, i.e., I and V^{(2)} are generically isomorphic. Because µ_L(P_*I) = µ_L(P_*V^{(2)}), Lemma 2.4 implies that P_*I → P_*V^{(2)} is an isomorphism in codimension one. Hence, there exists a closed algebraic subset Z ⊂ X such that (i) the codimension of Z is at least 2, (ii) the morphism is an isomorphism on X ∖ Z. Let us study the case where V^{(1)} ∩ V^{(2)} ≠ 0. Let V^{(3)} denote the saturated Higgs subsheaf of V generated by V^{(1)} + V^{(2)}. Let P_*V^{(3)} denote the filtered sheaf over V^{(3)} induced by P_*V. Lemma 3.8 (P_*V^{(3)}, θ^{(3)}) is µ_L-semistable, and the induced morphism g : V^{(1)} ⊕ V^{(2)} → V^{(3)} is generically an isomorphism; because they have the same slope, g is an isomorphism in codimension one by Lemma 2.4.
By Lemma 3.8, it is easy to observe that there exists a finite sequence of reflexive Higgs subsheaves V′_j (j = 1, …, m) such that (i) the induced filtered Higgs sheaves (P_*V′_j, θ′_j) are µ_L-stable, (ii) the induced morphism g : ⊕_j V′_j → V is generically an isomorphism. Hence, g is an isomorphism in codimension one by Lemma 2.4. Because both P_*V and P_*V_1 := ⊕_j P_*V′_j are reflexive, we obtain that P_*V ≃ P_*V_1. Thus, we obtain Proposition 3.6.
Mehta-Ramanathan type theorem
Let X be a smooth connected projective variety with a simple normal crossing hypersurface H. Let L be an ample line bundle on X.
Proposition 3.9 Let P_*V be a filtered sheaf on (X, H) with a meromorphic Higgs field θ. Then, (P_*V, θ) is µ_L-stable (resp. µ_L-semistable) if and only if the following holds.
• For any m 1 > 0, there exists m > m 1 such that (P * V, θ) |Y is µ L -stable (resp. µ L -semistable) where Y denotes the 1-dimensional complete intersection of generic hypersurfaces of L ⊗m .
Proof We can prove this proposition by the argument in [43, §3.4], which closely follows the arguments of Mehta-Ramanathan [40,41] and Simpson [59].
such that θ_a − dã · id_{V_a} are logarithmic with respect to the lattices P_a V_a. We obtain the endomorphism Res_i(θ_a) of {}^iGr^F_b(P_a V_a) induced by the residue of θ_a − dã · id_{V_a} along H_i. By taking the direct sum, we obtain the following endomorphism of {}^iGr^F_b(P_a V): Res_i(θ) := ⊕_a Res_i(θ_a). Note that Res_i(θ)_{|H_I} preserves the induced filtrations {}^jF (j ∈ I ∖ {i}) of {}^iGr^F_b(P_a V)_{|H_I}. We set ∂H_I := ⋃_{j∉I} (H_j ∩ H_I). Let π_I : R^ℓ → R^I be the projection. We obtain the filtered bundle {}^IGr^F_b(P_*V) on (H_I, ∂H_I). Note that Res_i(θ) (i ∈ I) are endomorphisms of the filtered bundle {}^IGr^F_b(P_*V). Let ϕ : C^n → C^n be given by ϕ(ζ_1, …, ζ_n) = (ζ_1^{m_1}, …, ζ_ℓ^{m_ℓ}, ζ_{ℓ+1}, …, ζ_n). Let U′ := ϕ^{-1}(U). The induced map U′ → U is also denoted by ϕ. Set H_{U′} := ϕ^{-1}(H_U). We obtain the good filtered Higgs bundle (P_*V_1, θ_1) := ϕ^*(P_*V, θ) on (U′, H_{U′}) obtained as the pull back. We obtain the endomorphisms Res_i(θ_1) (i ∈ I) of the filtered bundles {}^IGr^F_{b_1}(P_*V_1).
We obtain endomorphisms Res_i(θ_1)′ (i ∈ I) of {}^IGr^F_{ϕ^*(b)}(P_*V_1) obtained as the descent of Res_i(θ_1). By the relation ϕ^*(dz_i/z_i) = m_i dζ_i/ζ_i, we obtain the following relation: Res_i(θ_1)′ = m_i Res_i(θ).
Residue in the local and ramified case
Let (P_*V, θ) be a good filtered Higgs bundle on (U, H_U). There exists a ramified covering ϕ : U′ → U such that (P_*V_1, θ_1) := ϕ^*(P_*V, θ) has a decomposition as in (5). For any I ⊂ {1, …, ℓ}, we obtain the endomorphisms Res_i(θ_1) (i ∈ I) of the filtered bundles {}^IGr^F_{b_1}(P_*V_1) on (H′_I, ∂H′_I). We obtain the endomorphism Res_i(θ_1)′ as the descent of Res_i(θ_1). We set Res_i(θ) := m_i^{-1} Res_i(θ_1)′. It is easy to check that Res_i(θ) are independent of the choice of a ramified covering U′ → U. In particular, we obtain endomorphisms Res_i(θ) of {}^IGr^F_b(P_a V) for any a ∈ π_I^{-1}(b). The above construction is independent of the choice of a holomorphic coordinate system.
Global case
Let (P_*V, θ) be a good filtered Higgs bundle on (X, H). Then, by gluing the residues locally obtained in §3.5.2 for any I ⊂ Λ, we obtain the endomorphisms Res_i(θ) (i ∈ I) of {}^IGr^F_b(P_a V) for any a ∈ π_I^{-1}(b).
Gap of filtered bundles
Let X be a complex manifold with a simple normal crossing hypersurface H = i∈Λ H i . For simplicity, we assume that Λ is finite. Let (P * V, θ) be a good filtered Higgs bundle on (X, H).
We take a ∈ R^Λ such that a_i ∉ Par(P_*V, i). We set Par(P_*V, a, i) := Par(P_*V, i) ∩ ]a_i − 1, a_i[. We also set gap(P_*V, a, i) := min{ |b − b′| : b, b′ ∈ Par(P_*V, a, i) ∪ {a_i − 1, a_i}, b ≠ b′ }. We set gap(P_*V, a) := min_{i∈Λ} gap(P_*V, a, i). Recall that Λ is assumed to be finite.
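A hypothetical numeric illustration of this quantity: suppose Λ = {i}, a_i = 0.9 and Par(P_*V, a, i) = {0, 0.5}. Then the relevant set of weights together with the endpoints is
{−0.1, 0, 0.5, 0.9}, so gap(P_*V, a) = gap(P_*V, a, i) = 0.1,
and the perturbations constructed below move the weights by less than a fixed fraction of this gap.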
Curve case
Let C be a complex curve with a finite subset D ⊂ C. Let (P * V, θ) be a good filtered Higgs bundle on (C, D).
For any (k, b) ∈ Z × Par(P_*V, a, P), we obtain the subspace W_k F_b(P_a V_{|P}) as the pull back of W_k Gr^F_b(P_a V_{|P}) by the projection F_b(P_a V_{|P}) → Gr^F_b(P_a(V_{|P})). We define the filtration F^{(ε)} on P_a(V)_{|P} indexed by ]a(P) − 1, a(P)] as follows:
F^{(ε)}_c P_a(V)_{|P} := Σ_{(k,b) ∈ Z×Par(P_*V,a,P), ϕ_{ε,P}(k,b) ≤ c} W_k F_b(P_a V_{|P}).
We have the corresponding good filtered Higgs bundle (P^{(ε)}_*V, θ). We clearly have lim_{ε→0} par-c_1(P^{(ε)}_*V) = par-c_1(P_*V). The following is standard. Lemma 3.10 Suppose that C is compact and that (P_*V, θ) is stable (resp. polystable). Then, if ε is sufficiently small, (P^{(ε)}_*V, θ) is also stable (resp. polystable).
Proof See [43, Proposition 3.28] for the stability. If (P_*V, θ) = ⊕ (P_*V_i, θ_i) is a decomposition into stable factors, then we have (P^{(ε)}_*V, θ) = ⊕ (P^{(ε)}_*V_i, θ_i). Hence, we obtain the claim for the polystability.
Surface case
Let X be a complex projective surface with a simple normal crossing hypersurface H = ⋃_{i∈Λ} H_i. Let (P_*V, θ) be a good filtered Higgs bundle on (X, H). We shall explain a similar perturbation of good filtered Higgs bundles. We take a ∈ R^Λ such that a_i ∉ Par(P_*V, i) for any i ∈ Λ. We choose η > 0 such that 0 < 10 rank(V) η < gap(P_*V, a).
For any 0 < ε < η, let ψ_{ε,i} be a map Par(P_*V, a, i) → R as in the curve case. Note that the eigenvalues of the endomorphism Res_i(θ) on Gr^F_b(P_a V_{|H_i}) are constant on H_i because H_i are compact. Hence, we have the well defined nilpotent part N_{i,b} of Res_i(θ). Note that there exists a finite subset Z_i ⊂ H_i such that the conjugacy classes of the nilpotent part of N_{i,b|Q} (Q ∈ H_i ∖ Z_i) are constant. We obtain the filtration W of Gr^F_b(P_a V_{|H_i∖Z_i}) by algebraic vector subbundles whose restrictions to Q ∈ H_i ∖ Z_i are the weight filtration of N_{i,b|Q}. By the valuative criterion, it is uniquely extended to a filtration of Gr^F_b(P_a V_{|H_i}) by holomorphic subbundles, which is also denoted by W. We define the filtration F^{(ε)} on P_a(V)_{|H_i} indexed by ]a_i − 1, a_i] as in the curve case. We have the corresponding good filtered Higgs bundle (P^{(ε)}_*V, θ). We clearly have lim_{ε→0} par-c_1(P^{(ε)}_*V) = par-c_1(P_*V) and lim_{ε→0} par-ch_2(P^{(ε)}_*V) = par-ch_2(P_*V). The following is standard, and similar to Lemma 3.10. (See also [43, Proposition 3.28].) Lemma 3.11 Suppose that (P_*V, θ) is stable (resp. polystable). Then, if ε is sufficiently small, (P^{(ε)}_*V, θ) is also stable (resp. polystable).
Let h_ε be the C^∞-metric of E given by the explicit expression with respect to the frame v. Lemma 3.14 (E, ∂̄_E, θ, h_ε) are harmonic bundles.
Proof Let H_ε be the matrix valued function on X ∖ D determined by (H_ε)_{i,j} := h_ε(v_i, v_j). Let Θ be the matrix valued function representing θ with respect to the frame v. Let θ†_ε denote the adjoint of θ with respect to h_ε, and let Θ†_ε denote the matrix valued function representing θ†_ε. Then, Θ†_ε = H_ε^{-1} {}^tΘ̄ H_ε holds. Hence, we obtain that ∂̄( H_ε^{-1} ∂H_ε ) + [Θ, Θ†_ε] = 0. This is exactly the Hitchin equation for (E, ∂̄_E, θ, h_ε).
Let (P^{(ε)}_*E, θ) denote the associated filtered Higgs bundle. Let s_ε be determined by h_ε = h_0 s_ε.
Families of equivariant harmonic bundles with nilpotent Higgs fields
Let G := {µ ∈ C * | µ ℓ = 1} for some ℓ. Let V be a finite dimensional C-vector space equipped with a G-action and a G-invariant nilpotent endomorphism N . We set V = V ⊗ O X ( * D) with the Higgs field θ = N dz/z. Let W denote the weight filtration of N on V . We fix 0 < η such that 10 rank(V)η < 1. Take c ∈ R and c(ǫ) ∈ R (0 ≤ ǫ ≤ η) such that |c(ǫ) − c| ≤ 2ǫ. We set We consider the G-action on X by the multiplication on the coordinate. Then, (P (ǫ,c(ǫ)) * V, θ) is naturally G-equivariant.
• lim_{ε→0} h_{ε,c(ε)} = h_{0,c} in the C^∞-sense locally on X ∖ D. Moreover, there exists C > 1 such that C^{-1} h_{0,c} ≤ h_{ε,c(ε)} ≤ C h_{0,c} for any 0 ≤ ε ≤ η.
Proof We set G^∨ := Hom(G, C^*). For each χ ∈ G^∨, let C_χ denote the irreducible G-representation corresponding to χ. There exists the canonical decomposition (V, N) = ⊕_χ (V_χ, N_χ) ⊗ C_χ, where (V_χ, N_χ) denote finite dimensional C-vector spaces with a nilpotent endomorphism. For any finite dimensional vector space U, let Sym^ℓ(U) denote the ℓ-th symmetric tensor product of U. For any nilpotent endomorphism N_U on U, let Sym^ℓ(N_U) denote the endomorphism of Sym^ℓ(U) induced by the Leibniz rule.
Example of family of equivariant unramifiedly good wild harmonic bundles
Let X, D and G be as in §3.7.3. Note that G acts on z −1 C[z −1 ] by the pull back. Let a ∈ z −1 C[z −1 ]. We set G · a := {µ * a | µ ∈ G}. Let V be a finite dimensional C-vector space equipped with a nilpotent endomorphism N , a grading and a G-action such that µ • N = N • µ for any µ ∈ G, and µV b = V µ * b for any µ ∈ G and b ∈ G · a. We set V := V ⊗ O X ( * D) and V b := V b ⊗ O X ( * D). We have the decomposition V = b∈G·a V b . Let α ∈ C. Let θ be the Higgs field of V given by We have the decomposition (V, θ) = b∈G·a (V b , θ b ). Let W denote the weight filtration on V with respect to N . Take η > 0 such that 10 rank(V)η < 1. Take c ∈ R and c(ǫ) ∈ R (0 ≤ ǫ ≤ η) such that |c(ǫ) − c| ≤ 2ǫ. For a ∈ R, we set P (ǫ,c(ǫ)) Lemma 3.17 There exists a family of harmonic metrics h ǫ,c(ǫ) (0 ≤ ǫ ≤ η) of (V, θ) |X\D such that the following holds • h ǫ,c(ǫ) is adapted to P (c,ǫ) * V.
• lim_{ε→0} h_{ε,c(ε)} = h_{0,c} in the C^∞-sense locally on X ∖ D. Moreover, there exists C > 1 such that C^{-1} h_{0,c} ≤ h_{ε,c(ε)} ≤ C h_{0,c}.
The tuple (V_a, θ_a, N_a) is naturally G_a-equivariant. Let h_{a,ε,c(ε)} be a family of G_a-invariant harmonic metrics of (V_a, θ_a) as in Lemma 3.16. By the isomorphisms µ^*V_a ≃ V_{µ^*a}, we obtain harmonic metrics h_{b,ε,c(ε)} for (V_b, θ_b). We set h_{ε,c(ε)} := ⊕_b h_{b,ε,c(ε)}. Then, the family of the harmonic metrics has the desired property.
Proof of Proposition 3.12
Let ϕ : C → C be the map determined by ϕ(ζ) = ζ^ℓ for some ℓ. We set X′ := ϕ^{-1}(X). We may assume to have I ⊂ ζ^{-1}C[ζ^{-1}] and a decomposition ϕ^*(P_*V, θ) = ⊕ (P_*V_{a,α}, θ_{a,α}), where θ_{a,α} − (da + α dz/z) id are logarithmic, and the eigenvalues of the residues are 0. There exists the natural action of G := {µ ∈ C^* | µ^ℓ = 1} on X′ given by the multiplication on the coordinate. Because ϕ^*(P_*V, θ) is naturally G-equivariant, there exists the natural G-action on I. We obtain the orbit decomposition of I and the corresponding decomposition above, which is naturally G-equivariant.
equipped with the induced G-action such that µ * Gr F b (P a V a,α ) = Gr F b (P a V µ * a,α ), and the G-invariant nilpotent endomorphism N which is compatible with the grading. We set b(ǫ) := ψ ǫ (b). By applying Lemma 3.17, we obtain a family of G-equivariant unramifiedly good filtered Higgs fields equipped with a family of harmonic metrics h (ǫ,b(ǫ)) i,b,α,ǫ satisfying the conditions in Lemma 3.17. We set
Lemma 3.18
There exists a G-equivariant isomorphism of filtered bundles f : P for any c ∈ R.
Proof Let F (0) denote the filtration on P (0) a V ′ ai,b,α induced by the filtered bundle P (0) * V ′ . By the construction, we have the isomorphisms f ai,b,α : Gr There exists a G ai -equivariant isomorphism f ai,α : We set f := i,α f i,α . Then, f has the desired property by the construction.
We also remark that . Hence, we obtain the claim of the proposition. We give an outline of the proof in §4.2 based on the fundamental theorem of Simpson [57,Theorem 1] because we obtain a consequence on the Donaldson functional from the proof, which will be useful in the proof of Proposition 4.2.
Convergence of some families of Hermitian metrics
For each P ∈ D, we take a holomorphic coordinate neighbourhood (C_P, z_P) around P such that z_P(P) = 0. Set C*_P := C_P ∖ {P}. Fix N > 10. Let g_ε be a family of C^∞-metrics of C ∖ D satisfying suitable conditions on each C*_P. The following proposition is a variant of [45, Proposition 5.1].
• Let b^{(i)} be the automorphism of E which is self-adjoint with respect to h^{(ε_i)} and determined by the comparison of the two metrics. Then, b^{(i)} and (b^{(i)})^{-1} are bounded with respect to h^{(ε_i)} on C ∖ D. We do not assume the uniform estimate.
Proof We have only to apply the argument in the proof of [45, Proposition 5.1] by replacing G(h) and D λ with F (h) and ∂ + θ, respectively.
Proof of Theorem 4.1
Take a ∈ R^D such that a_P ∉ Par(P_*V, P) for any P ∈ D. Let (C_P, z_P) be a holomorphic coordinate neighbourhood around P such that z_P(P) = 0. Set C*_P := C_P ∖ {P}. Take η > 0 such that 10 rank(V) η < gap(P_*V, a). We take a Kähler metric g_{C∖D,η} of C ∖ D satisfying the following condition.
Lemma 4.4
There exists a Hermitian metric h 0 of E such that the following holds.
(a) (E, ∂̄_E, h_0) is acceptable, and P^{h_0}_*E = P_*V. (b) det(h_0) = h_{det E}.
Proof By applying Proposition 3.12 only in the case ε = 0, we obtain a Hermitian metric h′_0 of E satisfying (a), for which det(h′_0) and h_{det E} are mutually bounded. We define the function ϕ := log( h_{det E} / det(h′_0) ) on C ∖ D. Then, ϕ induces a C^∞-function on C. We set h_0 := h′_0 e^{ϕ/rank(E)}. Then, the metric h_0 has the desired property. For any holomorphic Higgs subbundle E′ ⊂ E, let h′_0 denote the Hermitian metric of E′ induced by h_0. Let θ′ denote the Higgs field of E′ obtained as the restriction of θ. We have the Chern connection ∇_{h′_0} and the adjoint θ′†_{h′_0}. Let Λ_{g_{C∖D,η}} denote the adjoint of the multiplication of the Kähler form associated to g_{C∖D,η}. Because F(h_0) is bounded with respect to h_0 and g_{C∖D,η}, deg(E′, h_0) is well defined in R ∪ {−∞} by the Chern-Weil formula [57, Lemma 3.2], expressed in terms of Tr(π_{E′} Λ F(h_0)) and |(∂̄ + θ)π_{E′}|²_{h_0}. Here, π_{E′} denotes the orthogonal projection E → E′ with respect to h_0.
Hence, we obtain that (E, ∂ E , θ, h 0 ) is analytically stable. According to the existence theorem of Simpson [57,Theorem 1], there exists a harmonic metric h of (E, ∂ E , θ) such that det(h) = det(h 0 ) and that h and h 0 are mutually bounded. Thus, we obtain Theorem 4.1.
Complement on the Donaldson functional
Let P(h 0 ) be the space of C ∞ -Hermitian metrics h 1 of E satisfying the following condition.
Here, we consider the L p -norms induced by h 0 and g C\D,η .
Proof of Proposition 4.2
For 0 ≤ ǫ ≤ η 2 , let g C\D,ǫ,η1 be the Kähler metric on C \ D such that the following holds on C * P for any P ∈ D: Let Λ ω,ǫ denote the adjoint of the multiplication of the Kähler form ω C\D,ǫ,η1 associated to g C\D,ǫ,η1 . By using families of Hermitian metrics as in Proposition 3.12, we construct a family of metrics h (ǫ) in (0 ≤ ǫ ≤ η 2 ) of E such that the following holds: in locally on C \ D in the C ∞ -sense as ǫ −→ 0.
where the L p -norms are taken with respect to h (ǫi) in and g C\D,ǫi,η1 . We do not assume that the estimate is uniform in i.
Then, there exists C 3 , C 4 > 0 such that the following holds for any ǫ i 1 ) = 1. Take any sequence ǫ i → 0. By Proposition 4.6 and Lemma 4.7, there exists a constant C 10 > 0 such that the following holds for any i: in and g C\D,ǫi,η1 , we obtain that Λ ǫi,η1 ∂∂ Tr(b (ǫi) 1 ) is L 1 with respect to g C\D,ǫi,η1 . Because (∂ + θ)b (ǫi) 1 is L 2 with respect to g C\D,ǫi,η1 and h By [57, Lemma 3.1], the following holds: Therefore, there exists C 12 > 0 such that the following holds for any i: Take Q ∈ C \ D. Let (C Q , z Q ) be a holomorphic coordinate neighbourhood around Q which is rela- . Hence, by (7), there exists C 13 (Q) > 0 such that the following holds for any i: According to a variant of Simpson's main estimate (for example, see [43, Proposition 2.10]), there exists C 14 (Q) > 0 such that the following holds on C Q for any i: Because R(h (ǫi) ) + [θ, θ † h (ǫ i ) ] = 0, we obtain the following estimate on C Q for any i: , there exists C 15 (Q) > 0 such that for any i. By a bootstrapping argument, for any p ≥ 2, there exists C 16 (Q, p) > 0 such that p) for any i. There exists a subsequence ǫ ′ j such that the sequence b is weakly convergent locally on in b ′ ∞ is a harmonic metric of (E, ∂ E , θ) such that (i) h ′(0) and h (0) are mutually bounded on C \ D, (ii) det(h ′ (0) ) = h det E . Then, by the uniqueness, we obtain that b ∞ = id E . Namely, h (ǫ ′ j ) is weakly convergent to h (0) locally in L p 2 for any p. By a bootstrapping argument, we obtain that h (ǫ ′ j ) is convergent to h (0) locally in the C ∞ -sense. Then, the claim of the proposition follows.
Continuity of family of harmonic metrics
Let π : C −→ ∆ be a smooth projective family of complex curves. Let D ⊂ C be a smooth hypersurface such that the induced map D −→ ∆ is proper and locally bi-holomorphic. For each t ∈ ∆, we set C t := π −1 (t) and D t := C t ∩ D.
Let (P * V, θ) be a good filtered Higgs bundle on (C, D). The induced good filtered Higgs bundles (P * V, θ) |Ct are denoted by (P * V t , θ t ).
Let (E, ∂ E , θ) be the Higgs bundle on C \ D obtained as the restriction of (P * V, θ) to C \ D. Let (E t , ∂ Et , θ t ) be the Higgs bundle on C t \ D t obtained as the restriction of (E, ∂ E , θ). Suppose the following.
• There exists a Hermitian metric h det E of E such that (i) R(h det E ) = 0, (ii) h det E is adapted to P * (det V).
• Each (P * V t , θ t ) is stable of degree 0.
According to Theorem 4.1, there exist harmonic metrics h_t of (E_t, ∂̄_{E_t}, θ_t) adapted to P_*V_t such that det(h_t) = h_{det(E)|C_t∖D_t}. We obtain the Hermitian metric h of E determined by h_{|C_t∖D_t} = h_t. We obtain the following proposition by using Proposition 4.6 and an argument similar to the proof of Proposition 4.2. (See also [45, Proposition 4.2].) Proposition 4.9 h is continuous. Moreover, any derivatives of h in the fibre direction are continuous.
Kähler metrics
Let X be a smooth projective surface with a simple normal crossing hypersurface H = ⋃_{i∈Λ} H_i. Let L be an ample line bundle on X. Let g_X be the Kähler metric of X such that the associated Kähler form ω_X represents c_1(L).
We take Hermitian metrics g_i of O(H_i). Let σ_i : O_X → O_X(H_i) denote the canonical section. Take N > 10. There exists C > 0 such that the associated form ω_ε, constructed from ω_X and the functions |σ_i|^{2ε}_{g_i}, defines a Kähler form on X ∖ H for any 0 ≤ ε < 1/10. It is easy to observe that ∫_X ω_ε² = ∫_X ω_X² and that ∫_X ω_ε τ = ∫_X ω_X τ for any closed C^∞-(1,1)-form τ on X.
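One standard construction of such a family (stated here as an assumption for concreteness — the explicit formula is not recoverable from this copy of the text) is
ω_ε := ω_X + C^{-1} √−1 ∂∂̄ Σ_{i∈Λ} ε^{-1} |σ_i|^{2ε}_{g_i} (0 < ε < 1/10).
Since ω_ε differs from ω_X by a globally ∂∂̄-exact form, the equalities ∫_X ω_ε² = ∫_X ω_X² and ∫_X ω_ε τ = ∫_X ω_X τ follow immediately; for ε = m^{-1}, metrics of this type are adapted to filtered bundles whose parabolic weights lie in m^{-1}Z.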
Condition for good filtered Higgs bundles and initial metrics
Let (P * V, θ) be a good filtered Higgs bundle on (X, H) satisfying the following condition.
Condition 5.1
• There exists c ∈ R Λ and m ∈ Z >0 such that Par(P * V, i) = {c i + n/m | n ∈ Z} for each i ∈ Λ.
• The nilpotent part of Res i (θ) on i Gr F b (P a V) are 0 for any i ∈ Λ, a ∈ R Λ and b ∈]a i − 1, a i [.
Definition 5.4 A Hermitian metric h of E is called strongly adapted to P * V if the following holds.
• For any P ∈ H, there exists a small neighbourhood X P of P such that h |XP \H is strongly adapted to P * V |XP in the sense of Definition 5.2 and Definition 5.3.
Lemma 5.5
Let h be a Hermitian metric of E strongly adapted to P * V. Then, the following holds: Proof It is the equality (36) in the proof of [43,Proposition 4.18].
For each i ∈ Λ, we choose b i ∈ Par(P * det V, i). Set b = (b i ) ∈ R Λ . We take a Hermitian metric h det(E) of det(E) such that h det(E) i∈Λ |σ i | 2bi gi induces a Hermitian metric of P b det V of C ∞ -class. We shall prove the following proposition in §5.4 after preliminaries in §5.2-5.3.
Proposition 5.6
There exists a Hermitian metric h in of E such that the following holds.
• h in is strongly adapted to P * V.
• F (h in ) is bounded with respect to h in and ω ǫ , where ǫ := m −1 .
Such a Hermitian metric h in is called an initial metric of (P * V, θ).
Preliminary existence theorem for Hermitian-Einstein metrics
Let (P * V, θ) be a good filtered Higgs bundle satisfying Condition 5.1. Let h in be an initial metric for (P * V, θ) as in Proposition 5.6.
(i) h HE and h in are mutually bounded.
(iv) The following equalities hold:
Unramified case
Suppose that (P * V, θ) satisfies the following condition.
• There exists a decomposition of good filtered Higgs bundles such that θ a,α − (da + α i dz i /z i ) induce holomorphic Higgs fields of P c V a,α .
We take any holomorphic frame v = (v j ) of P c V compatible with the decomposition. For j = 1, . . . , r, . We have ∂v = v − k=1,2 c k dz k /z k I, where I denotes the identity matrix. We have the description θv = v Λ 0 + Λ 1 such that the following holds.
Note that there exists a C ∞ -function u on X 0 such that det(h 0 ) = e u h det(E) . We set h in := h 0 e −u/ rank E .
Around smooth points
We set X_0 := {(z_1, z_2) ∈ C² | |z_i| < 1} and H := {z_1 = 0}. Let ν : X_0 ∖ H → R_{>0} be a C^∞-function such that ν|z_1|^{-1} induces a nowhere vanishing function on X_0 of C^∞-class. Let (P_*V, θ) be a good filtered Higgs bundle on (X_0, H). Let (E, ∂̄_E, θ) be the Higgs bundle obtained as the restriction of (P_*V, θ) to X_0 ∖ H. We choose c ∈ Par(P_*(det V)) and a Hermitian metric h_{det(E)} of det(E) such that h_{det(E)} ν^{2c} induces a C^∞-metric of P_c(det V).
We take C ∞ -metrics h a,α of P c V a,α , and we set h 0 := ν −2c h a,α . We may assume that det(h 0 ) = h det(E) . Let v = (v 1 , . . . , v r ) be any holomorphic frame of P c V compatible with the decomposition. For each i, a i and α i are determined by the condition that v i is a section of P c V ai,αi . There exist matrix valued C ∞ -(1, 0)-forms A a,α such that where I denotes the identity matrix, and (A a,α ) i,j = 0 unless (a i , α i ) = (a j , α j ) = (a, α). Let Λ denote the matrix valued holomorphic 1-form determined by θv = vΛ. There exists the decomposition Λ = Λ 0 + Λ 1 such that the following holds.
Proof of Proposition 5.6
Let X, H and L be as in §5. Then, we obtain (8) from Lemma 5.11 and Lemma 5.14. Thus, we obtain Proposition 5.6.
Proof of Theorem 5.7
Let E ′ ⊂ E be any coherent Higgs O X\H -subsheaf. We assume that E ′ is saturated, i.e., E/E ′ is torsion-free.
Then, deg(E′, h_in)/rank(E′) < deg(E, h_in)/rank(E) holds. As a result, (E, ∂̄_E, θ, h_in) is analytically stable in the sense of [57].
As studied in [34,35], the analytic characteristic number of (E, θ, h_HE) given by the integral of the appropriate Chern-Weil form is well defined. It is equal to 2∫_X par-ch_2(P_*V) by Lemma 5.5 and Proposition 5.6. Thus, Theorem 5.7 is proved.
Bogomolov-Gieseker inequality
Let X be a smooth connected projective variety of arbitrary dimension with a simple normal crossing hypersurface H = ⋃_{i∈Λ} H_i. Let L be any ample line bundle on X.
Theorem 6.1 Let (P_*V, θ) be a µ_L-polystable good filtered Higgs bundle on (X, H). Then, the Bogomolov-Gieseker inequality holds:
∫_X par-ch_2(P_*V) c_1(L)^{dim X − 2} ≤ (1/(2 rank V)) ∫_X par-c_1(P_*V)² c_1(L)^{dim X − 2}.
Proof By the Mehta-Ramanathan type theorem (Proposition 3.9), it is enough to study the case dim X = 2, which we shall assume in the rest of the proof. We use the notation in §3.6.3. We take a ∈ R^Λ such that a_i ∉ Par(P_*V, i) for any i ∈ Λ. We choose η > 0 such that 0 < 10 rank(V) η < gap(P_*V, a).
Applying the construction in §3.6.3, we obtain a good filtered Higgs bundle (P^{(ε)}_*V, θ) on (X, H). By the construction, it satisfies Condition 5.1. There exists m_0 such that (P^{(ε)}_*V, θ) with ε = m^{-1} is polystable for any m ≥ m_0. Let (E, ∂̄_E, θ) be the Higgs bundle obtained as the restriction of (P_*V, θ) to X ∖ H. We use the Kähler metric g_ε of X ∖ H as in §5.1.1. There exists a Hermitian-Einstein metric h^{(ε)}_{HE} of the Higgs bundle (E, ∂̄_E, θ) as in Theorem 5.7 for the good filtered Higgs bundle (P^{(ε)}_*V, θ). By Proposition 3.1 and the equality (9), we obtain the Bogomolov-Gieseker inequality for (P^{(ε)}_*V, θ). By taking the limit as m → ∞, i.e., ε → 0, we obtain the desired inequality.
Let (E, ∂ E , θ) be the Higgs bundle obtained as the restriction (P * V, θ) |X\H . Let h det(E) denote the pluriharmonic metric of (det(E), ∂ det(E) , Tr θ) strongly adapted to P * (det(E)). For the proof of Theorem 2.16, it is enough to prove the following theorem.
Theorem 7.1 There exists a unique pluri-harmonic metric h of the Higgs bundle (E, ∂ E , θ) such that P h * E = P * V and det(h) = h det(E) .
Local holomorphic coordinate systems
Let P ∈ X \ W M . We take s ∞ ∈ Z △ M such that P ∈ X s∞ . The following is clear because P 1 (X △ M ) ∩ P(T * P X) is dense in P(T * P X). Lemma 7.3 There exist s i ∈ Z △ M (i = 1, 2) and ǫ > 0 such that the following holds. • P ∈ X si (i = 1, 2).
• X s1 and X s2 are transversal at P .
• {s_1 + a s_∞ | |a| < ε}, {s_2 + a s_∞ | |a| < ε}, {s_1 + s_2 + a s_∞ | |a| < ε} and {s_1 + √−1 s_2 + a s_∞ | |a| < ε} are contained in Z^△_M.
We set x_i := s_i/s_∞ (i = 1, 2). There exists a neighbourhood U_P of P in X ∖ H such that (x_1, x_2) is a holomorphic coordinate system on U_P. Note that the derivatives of h_{P,∞} with respect to ∂_{z_i} and ∂̄_{z_j} (i ≠ j) are continuous. Hence, we obtain that h_{P,∞} is C¹. Thus, we obtain the claim of the lemma.
The curvature R(h_∞) of the Chern connection is defined as a current. We also obtain the adjoint of the Higgs field θ†_{h_∞} as a C¹-section of End(E) ⊗ Ω^{0,1}. We obtain F(h_∞) as a current. Because h_{∞|{x_i=a}} is equal to h_{s_i + a s_∞}, we obtain F(h_∞)_{ii} = 0 for i = 1, 2. By considering the holomorphic coordinate system (w_1, w_2) = (x_1 + x_2, x_1 − x_2) and the coefficient of dw_1 dw̄_1, together with the analogous coordinate system built from x_1 + √−1 x_2, we obtain that the off-diagonal components of F(h_∞) also vanish, and hence F(h_∞) = 0. Proof The first claim immediately follows from Lemma 7.6. We obtain the second claim by the elliptic regularity and a standard bootstrapping argument.
HE |Xs is weakly convergent locally on X s \ H in L p 2 for s ∈ X ♯ M , we obtain that ∂b (ǫi) is convergent to 0 almost everywhere. Hence, , we obtain the claim of the lemma.
Lemma 7.10 h ∞ induces a C ∞ -metric of E on X \ H, and hence it is a pluri-harmonic metric of (E, ∂ E , θ).
Proof Take P ∈ W_M ∖ H. We take a holomorphic coordinate neighbourhood (X_P, z_1, z_2) around P in X ∖ H. We may assume that X_P is bi-holomorphic with {(z_1, z_2) | |z_i| < 2} by the coordinate system, and that P corresponds to (0, 0). Let C_{i,a} := {z_i = a} ∩ X_P. Let g_P denote the metric Σ dz_i dz̄_i. We have the expression θ = f_1 dz_1 + f_2 dz_2. According to a variant of Simpson's main estimate (for example, see [43, Proposition 2.10]), there exists A > 0 such that |f_{1|C_{2,a}}| < A and |f_{2|C_{1,a}}| < A for any a with 0 < |a| < 1. Hence, we obtain that |f_1| < A and |f_2| < A on X_P ∖ {P}. We obtain that |R(h)|_{g_P,h} < B A² on X_P ∖ {P}, for a constant B > 0 depending only on rank(E).
We take a C ∞ -metric h 0,P of E |XP . Let b P be the automorphism of E |XP which is self-adjoint with respect to both h ∞|XP and h 0,P and determined by h ∞|XP = h 0,P b P . By the norm estimate for tame harmonic bundles [44], we obtain that b P and b −1 P are bounded with respect to h 0,P .
Therefore, we obtain that |∂̄_{E,z_1} b_P|_{h_{0,P}} is L² on X_P. Similarly, we obtain that |∂̄_{E,z_2} b_P|_{h_{0,P}} is L² on X_P. Hence, we obtain that b_P^{-1} ∂_{h_{0,P}} b_P is L². Because ∂̄(b_P^{-1} ∂_{h_{0,P}} b_P) = R(h_{∞|X_P}) − R(h_{0,P}) on X_P ∖ {P}, we obtain that ∂̄(b_P^{-1} ∂_{h_{0,P}} b_P) is bounded on X_P ∖ {P}. It is easy to observe that ∂̄(b_P^{-1} ∂_{h_{0,P}} b_P) = R(h_{∞|X_P}) − R(h_{0,P}) holds on X_P as distributions. By the elliptic regularity, we obtain that b_P^{-1} ∂_{h_{0,P}} b_P is L^p_1 for any p > 1. By using this and the elliptic regularity, we obtain that b_P is L^p_2 for any p > 1. Then, by using (15) and the standard bootstrapping argument, we obtain that b_P is C^∞ on X_P.
Because (E, ∂̄_E, θ, h_∞) is a good wild harmonic bundle on (X, H), we obtain a good filtered Higgs bundle (P^{h_∞}_*E, θ) on (X, H). We put H^{[2]} := ⋃_{i≠j} (H_i ∩ H_j). For any P ∈ H ∖ (W_M ∪ H^{[2]}), there exists s ∈ Z^△_M such that P ∈ X_s. By the construction, h_{∞|X_s∖H_s} = h_s. Hence, we obtain P^{h_∞}(E)_{|X_s} = P_*(V)_{|X_s}. Let Y := (H ∩ W_M) ∪ H^{[2]}, which is a finite subset of H. We obtain that P^{h_∞}_*(E)_{|X∖Y} ≃ P_*V_{|X∖Y}. By the Hartogs theorem, we obtain that P^{h_∞}_*(E) ≃ P_*V. Thus, the proof of Proposition 7.2 is completed.
There exists a pluri-harmonic metric h_s of (E_s, ∂̄_{E_s}, θ_s) := (E, ∂̄_E, θ)_{|X_s∖H} adapted to P_*V_s such that det(h_s) = h_{det(E)|X_s∖H}. Take another s′ ∈ Z^△_M such that P ∈ X_{s′}. There exists a pluri-harmonic metric h_{s′} of (E_{s′}, ∂̄_{E_{s′}}, θ_{s′}) adapted to P_*V_{s′} such that det(h_{s′}) = h_{det(E)|X_{s′}∖H}. Proof Suppose that X_s ∪ X_{s′} ∪ H is simple normal crossing. We set X_{s,s′} := X_s ∩ X_{s′}. It is smooth and connected. We obtain a good filtered Higgs bundle (P_*V, θ)_{|X_{s,s′}}, and h_{s|X_{s,s′}} and h_{s′|X_{s,s′}} are adapted to P_*V_{|X_{s,s′}}. Let b_{s,s′} be the automorphism of E_{|X_{s,s′}} which is self-adjoint with respect to both h_{s|X_{s,s′}} and h_{s′|X_{s,s′}}, and determined by h_{s′|X_{s,s′}} = h_{s|X_{s,s′}} · b_{s,s′}. There exists a decomposition (P_*V, θ)_{|X_{s,s′}} = ⊕ (P_*V_i, θ_i), which is orthogonal with respect to both h_{s|X_{s,s′}} and h_{s′|X_{s,s′}}, and b_{s,s′} = ⊕ a_i id_{V_i} for some positive constants a_i.
In general, there exists s_2 ∈ Z^△_M such that (i) P ∈ X_{s_2}, (ii) X_s ∪ X_{s_2} ∪ H and X_{s′} ∪ X_{s_2} ∪ H are simple normal crossing. By the above consideration, we obtain h_{s|P} = h_{s_2|P} = h_{s′|P}. Therefore, we obtain Hermitian metrics h_P of E_{|P} (P ∈ X ∖ (H ∪ W_M)). By using the argument in Lemma 7.5, we can prove that they induce a Hermitian metric h of E_{|X∖(H∪W_M)} of C¹-class. We obtain F(h) as a current. Because h_{|X_s} (s ∈ U^△_M) are pluri-harmonic metrics of (E, ∂̄_E, θ)_{|X_s∖H}, we obtain that F(h) = 0. It also implies that h is C^∞ on X ∖ (H ∪ W_M). By using the argument in the proof of Lemma 7.10, we obtain that h induces a pluri-harmonic metric of (E, ∂̄_E, θ) on X ∖ H. Then, as in the proof of Proposition 7.2, we can conclude that P^h_*(E) = P_*V. Thus, we obtain Theorem 7.1.
8 Homogeneity with respect to group actions

8.1 Preliminary
Homogeneous harmonic bundles
Let Y be a complex manifold. Let K be a compact Lie group. Let ρ : K × Y −→ Y be a K-action on Y such that ρ k : Y −→ Y is holomorphic for any k ∈ K. Let κ : K −→ S 1 be a homomorphism of Lie groups. Let (E, ∂ E , θ, h) be a harmonic bundle on Y . It is called (K, ρ, κ)-homogeneous if (E, ∂ E , h) is K-equivariant and k * θ = κ(k)θ.
Remark 8.1 According to [60], harmonic bundles are equivalent to polarized variation of pure twistor structure of weight w, for any given integer w. As studied in [48, §3], by choosing a vector v in the Lie algebra of K such that dκ(v) = 0, we obtain the integrability of the variation of pure twistor structure from the homogeneity of harmonic bundles. G 0 −→ E is K-equivariant. Hence, K is a K-equivariant holomorphic vector bundle. Similarly H 0 (X, K⊗L ⊗m2 ) is a K-representation, and G 1 is K-equivariant holomorphic vector bundle, and G 1 −→ K 2 is K-equivariant.
The K-representations on H 0 (X, E ⊗ L ⊗ m1 ) and H 0 (X, K ⊗ L ⊗m2 ) naturally induce G-representations on H 0 (X, E ⊗ L ⊗ m1 ) and H 0 (X, K ⊗ L ⊗m2 ). Hence, G i are naturally algebraic G-equivariant vector bundles on X. Moreover, the morphism G 1 −→ G 0 is G-equivariant and algebraic. Hence, E is a G-equivariant algebraic vector bundle on X.
An equivalence
8.2.1 Good filtered Higgs bundles associated to homogeneous good wild Higgs bundles Let X be a connected complex projective manifold with a simple normal crossing hypersurface H. Let G be a complex reductive group acting on (X, H). Let K be a compact real form of G. The actions of G and K on X are denoted by ρ. Let κ : G −→ C * be a character. The induced homomorphism K −→ S 1 is also denoted by κ.
Let (E, ∂ E , θ, h) be a (K, ρ, κ)-homogeneous harmonic bundle on X \ H which is good wild on (X, H). We obtain a good filtered Higgs bundle (P h * E, θ) on (X, H). Because each P h a E is naturally a K-equivariant holomorphic vector bundle on X, P h * E is naturally G-equivariant by Lemma 8.6. Because k * θ = κ(k)θ for any k ∈ K, we obtain g * θ = κ(g)θ for any g ∈ G. Therefore, (P h * E, θ) is a (G, ρ, κ)-homogeneous good filtered Higgs bundle on (X, H).
Let L be a G-equivariant ample line bundle on X.
Uniqueness
Let (E, ∂ E , θ, h) be a (K, ρ, κ)-homogeneous harmonic bundle on X \ H which is good wild on (X, H). Let h ′ be another pluri-harmonic metric of (E, ∂ E , θ) such that (i) h ′ is K-invariant, (ii) P h ′ * E = P h * E. The following is clear from Proposition 2.15.
Proposition 8.8 There exists a decomposition (E, ∂ E , θ) = m i=1 (E i , ∂ Ei , θ i ) such that (i) the decomposition is orthogonal with respect to both h and h ′ , (ii) there exist a i > 0 (i = 1, . . . , m) such that h ′ |Ei = a i h Ei , (iii) the decomposition E = E i is preserved by the K-action.
Let (E, ∂ E , θ) be the Higgs bundle on X \ H obtained as the restriction of (P * V, θ).
Theorem 8.9 Suppose that (P * V, θ) is µ L -stable with respect to the G-action. Then, there exists a K-invariant pluri-harmonic metric h of (E, ∂ E , θ) such that P h * E = P * V. If h ′ is another K-invariant pluri-harmonic metric of (E, ∂ E , θ), there exists a positive constant a such that h ′ = ah.
We obtain a Hermitian metric h := ∫_K k^*h_1 dk of E. Then, h is K-invariant. Moreover, h = h_{E_0} ⊗ ∫_K Ψ dk holds. Hence, h is a pluri-harmonic metric of (E, ∂̄_E, θ) such that P^h_*E = P_*V. Hence, we obtain the claim of the theorem. The uniqueness is clear.
Corollary 8.10 We obtain the equivalence between the isomorphism classes of the following objects.
Impact of COVID-19 on radiology education in Europe: a survey by the ESR Radiology Trainees Forum (RTF)
Background The ongoing COVID-19 pandemic has significantly affected radiology services around the globe. The impact of the crisis on radiology education in Europe has yet to be determined, in order to identify measures to achieve optimal training of radiologists during pandemics. The aim of this survey was to evaluate the impact of the pandemic on young radiologist members of the European Society of Radiology (ESR). Methods A survey consisting of 28 questions was developed and distributed using SurveyMonkey to all ESR European radiologist members in training. The survey sought to collect information on three main themes, ‘demographics’, ‘training level’ and ‘effects of COVID-19’. The responses were statistically analysed with the use of R programming using descriptive statistics. Results A total of 249 responses from 34 countries were collected. Specific training on COVID-19 was not offered to 52.2% (130) of the participants. A total of 196 participants were not redeployed to other specialities but only 46.2% of institutions allowed residents to work from home. E-learning was offered at 43% of the departments and most participants (86.2%) were not allowed to switch from clinical work to research. A minority (n = 13) were suspended with (30.8%) or without salary (38.5%) or were forced to take vacation/yearly holiday leave (7.7%) or sick leave (23%). Almost half of the participants did not have access to personal protective equipment and a minority of them had their financial status affected. Conclusions The ongoing SARS-CoV-2 outbreak has significantly affected all aspects of postgraduate radiology training across the ESR member countries. Supplementary Information The online version contains supplementary material available at 10.1186/s13244-021-01113-3.
Introduction
The global crisis caused by the ongoing COVID-19 pandemic has significantly affected health services around the globe. According to data released by the World Health Organisation in April 2021, the pandemic continues to significantly disrupt health services in the majority of countries, resulting in staff redeployment, facility closures, patient hesitance to seek medical help and postponement of elective surgery [1]. These disruptions have also affected radiology services, such as the need for additional post-examination equipment sterilization and the risk imposed by close contact of
radiologists with a potentially contagious patient, especially during ultrasound and interventional radiology procedures [2]. Health service disruption has also affected medical education at undergraduate and postgraduate levels [3–7]. Direct patient contact during the time of training is critical for the successful education of competent medical practitioners. Radiology education at the resident level has been significantly affected by changes in health service administration during the pandemic. Reports from the United States of America highlight difficulties in resident recruitment [8], interaction with patients and a multitude of training aspects, including reduced supervision, limited research possibilities and almost eliminated didactic learning [4].
Radiology training in Europe differs compared to the rest of the world with regards to the training curriculum, the administration of postgraduate medical training and accreditation requirements. In addition, variability in COVID-19 burden across European countries has led to a great variety of measures for the containment of the pandemic, which have affected radiology services and radiology resident education to a different extent among the European Society of Radiology (ESR) member countries. Therefore, the Radiology Trainees Forum (RTF) of the ESR created a survey to collect information on the impact of the pandemic on the education of radiology residents. This manuscript presents the results of the survey which reveal the impact the pandemic has had on young radiologists who are members of the ESR. The importance of the results presented herein lies in the evaluation of the extent that COVID-19 has affected various aspects of radiology training. This will allow the identification of suitable measures that can be applied to ensure seamless resident training during this and future pandemics.
Study design-participants
A survey consisting of 28 questions was developed by the Radiology Trainees Forum (RTF) to perform a comprehensive assessment of the effects of COVID-19 on radiology training in European Countries. ESR and RTF national delegates distributed the survey using SurveyMonkey to all ESR European radiologist members in training. The survey was distributed to radiology residents at all levels of training throughout the participating countries. Subspecialty fellows were also encouraged to participate, indicating their field of subspecialisation. Responses were automatically recorded. No institutional review board approval was required for this study (Fig. 1).
Questionnaire structure
Questions covered three basic themes: 'demographics' , 'training level' and 'effects of COVID-19' . The 'Demographics' category included three questions on the country of residence, the age and sex of participants, whereas the 'training level' category included four questions on the year of training, the total duration of radiology training in the respective country, whether they are in a general radiology or subspecialty program and the type of training institution (university, regional, central or rural hospital). The remaining questions (21/28) were used to assess the impact of COVID-19 on various aspects of training, the modes of training delivery, the use of e-learning material, the type of work they conducted during the pandemic, the preparation of their institution to face a pandemic, the impact on their financial and work status and the supervision process. The detailed survey is provided as Additional file 1.
Statistical analysis
Categorical responses to questionnaire questions were presented as frequencies and percentages, and continuous variables were presented as means ± standard deviations. Participant responses were mapped on the world map with the use of R programming (v4.0.3, www.R-project.org).
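A minimal sketch of this descriptive analysis in base R (the file name and the column names country and age are hypothetical, not taken from the survey instrument):

responses <- read.csv("survey_responses.csv")

# Categorical item: absolute frequencies and percentages per country
freq <- table(responses$country)
pct  <- round(100 * prop.table(freq), 1)
print(cbind(n = freq, percent = pct))

# Continuous variable: mean +/- standard deviation
cat(sprintf("Age: %.1f +/- %.1f years\n",
            mean(responses$age, na.rm = TRUE),
            sd(responses$age, na.rm = TRUE)))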
Results
A total of 249 resident responses were collected from 34 countries, with responses per country ranging between 1 and 41 (Fig. 2). Turkey and Poland provided the most responses to the survey (41 and 21 responses, respectively). Between 10 and 20 responses were received from the Netherlands, Estonia, Germany, Romania, the Czech Republic, Denmark, Portugal, France and Greece. The mean ± SD age of participants was 31.3 ± 5 years (range 23–61); 121 (48.6%) were male and 128 (51.4%) were female. Participants were uniformly spread across training years; 65.5% of them (163 residents) worked at university hospitals, most were in 5-year radiology programmes, and 91.6% (228 residents) were being trained in general radiology (Fig. 3). The rest of the participants worked at regional hospitals (16.06%), central hospitals (12.05%), local/rural hospitals (2.01%) and other types of institutions (e.g., private institutions) (4.42%).
Training on COVID-19 was not offered to 52.2% (130) of the participants. For those who received COVID-specific training, this was mainly focused on the radiological diagnosis of the disease (in 86.4% of cases) and the required safety measures (in 73.6% of cases), and less on the clinical management of the disease (in 33.6% of cases). Most of them reported that they were educated by international publications available online. A smaller number of participants (106, 46.1%) mentioned that they were educated by European online publications, whereas 90 of them also read local or national papers. Finally, national or hospital guidelines provided information about the disease to 138 (60%) of the participants (Fig. 4).
With regards to the services that the participants were asked to provide during the pandemic, 85.2% (196 participants) replied that they were not redeployed to other specialities. The limited number of radiology trainees that were redeployed were asked to fill places in COVID-19 clinics, infectious disease departments, intensive care units and emergency departments and one participant was asked to refrain from patient contact due to ongoing pregnancy. The pandemic changed trainee views on the role of radiologists in 9.1% of cases with the majority of them reporting that they recognised the vital role of radiologists in the pandemic, with a limited number of trainees (3 out of 211 who replied to question 14) mentioning that radiological services were misused during the pandemic and that they felt that they became "service providers" without retaining any significant influence on patient management.
Radiology practices changed in most institutions during the pandemic by allowing more home office hours, reducing interpersonal contacts, providing online multidisciplinary meetings and following hospital safety measures. Only 46.2% of institutions allowed residents to use their home office in providing radiology services, provided they had appropriate secure access to the private network of the hospital and sufficient hardware to meet diagnostic requirements.
Training opportunities delivered in the form of e-learning were offered at 43% of the departments during the pandemic. 72.2% of the responding trainees were more or less satisfied with the usage of online learning possibilities. 48.3% of the participants rated the general usage of online education as a positive experience. Online training was considered an acceptable part of radiology training for 54.3% of participants, the majority of respondents specified that a certificate or Continuing Medical Education (CME) credits are needed. Importantly, supervision was greatly affected in participating countries because of COVID-19 with 38.7% of the participants reporting that they received limited or delayed feedback and 17.2% no feedback for their work. The vast majority of participants (86.2%) were not provided with the opportunity to switch from clinical work to research and 13 of them reported that they had been suspended because of COVID-19 either with (30.8%) or without salary (38.5%) or had been asked to take vacation/yearly holiday leave (7.7%) or sick leave (23%). In addition, the pandemic affected the compulsory residency examinations with 49 participants having the exam date postponed and subsequently the duration of their residency extended and 26 participants had the exams changed to an online format. 62 (33.3%) responders mentioned that no examinations were allowed during the outbreak (Fig. 5).
In terms of resident wellbeing during the outbreak, 51.6% of the participants had been infected themselves or had colleagues infected by SARS-CoV-2. A small number of residents (14.1%) reported that they did not have access to the required safety equipment while dealing with COVID-19 patients, and almost half of the participants (51.4%) did not have access to mental health resources during the pandemic. Finally, 7% of radiology trainees (n = 13) had their financial status affected by the pandemic because of their suspension from service.
Discussion
This study aimed to evaluate the effect COVID-19 has had on radiology training throughout ESR member countries. The results of the survey demonstrated great variability within and between different countries and revealed that the pandemic significantly influenced supervision, teaching, examinations and the work environment of the participants. Importantly, the results showed that the outbreak has significantly impacted the quality of life of participants, affecting their health and their financial status.
Resident training was significantly affected over the course of the pandemic, with significant disruption to supervision. Almost half of our participants reported no access to online training and received limited or no feedback from their supervisors. Our results come to confirm the views of Alvin et al. [4] who foresaw that the current condition would be disadvantageous for supervision and proposed the application of remote meetings with supervisors to at least partly control the problem. A similar experience was reported from a Texas-based institution where interaction with supervising faculty has decreased either because of social distancing or because residents and fellows were deployed to other posts [3]. Our results show that the majority of trainees were not redeployed to other posts. Therefore, the effects on supervision could be mainly attributed to social distancing measures applied both for trainees and for their supervisors. Theoretical radiology training during the outbreak has been primarily administered online. In some settings, this resulted in reduced resident learning time in contact with an experienced faculty member. A number of measures to electronically mitigate this problem have been proposed in literature such as adopting virtual readouts with the use of a camera so that senior radiologists can remotely supervise trainees [9]. Such proposals are hindered by the need to install costly hardware and PACS systems at the home office of young radiologists. However, the use of video call systems to host webinars and facilitate multidisciplinary team meetings has been proposed as a viable alternative [10]. At the same time, online learning enabled trainees to access the lectures held in other clinical institutions and created additional learning possibilities. Indeed, our participants reported a significant reliance on online education during the pandemic and the ESR and other radiology focused societies have enriched their webinar programs and modified the format of major conferences to facilitate the online dissemination of information while promoting social distancing. Such measures, however, cannot replace teaching for modalities that require patient contact (e.g. ultrasound and interventional procedures) [2] and cannot replace day-to-day supervision and case discussion with supervisors.
Exam delivery was disrupted to a variable extent across the countries of the participants, including settings where exams were cancelled for the whole duration of the outbreak. These changes have affected residents who were required to extend their training program and wait for qualifying exams. The core examination of the American Board of Radiology was postponed for a significant time, delaying the process of graduation and posing hurdles for subsequent subspecialty training [4,11]. Institutions like the European Board of Radiology (EBR) and the Royal College of Radiologists (RCR) have attempted to offer online proctored examinations as an alternative. The European Diploma in Radiology (EDiR) offered by the EBR stands as a viable alternative, since it can be administered by local radiological societies and can also be delivered on-site for hospitals and institutions. In the past few months, several onsite EDiR e-examinations have been carried out, among others, in various cities in Croatia, France, Italy, Poland and countries outside Europe. Finally, the EBR has developed a software tool to conduct e-examinations in collaboration with local national professional societies, scientific institutions and other speciality sections, allowing them to organise their examinations locally for their residents.
The first e-examinations were successfully held in September and October by the Consejo Mexicano de Radiología, allowing local residents to perform their board exams.
The well-being of young radiologists has been directly impacted by the SARS-CoV-2 outbreak. Approximately half of the participants reported that either they themselves or their colleagues had been infected. In addition, some trainees were suspended, influencing their financial and psychological status, and only half of them had access to mental health services. A recent study in the UK showed that the wellbeing of 48% of radiology trainees deteriorated during the pandemic [11], and trainees with ongoing student loans, health insurance and family-related financial obligations are expected to have their income reduced not only by the reduction in working hours [4] but also by the reduced reporting workload [4,11], which could lead to delayed retirement and other financial hurdles. Potential solutions to this problem include offering paid leave to trainees who contract or become ill with COVID-19, ensuring that loan repayment periods can be extended, and making mental health services available for trainees with psychological struggles.
One limitation of our study is that the survey was not completed by residents in all ESR member countries. However, distribution of the survey was achieved through national radiological societies, and the degree of participation was the maximum achievable given the ongoing pandemic situation. The size of the participant cohort was also affected by the voluntary nature of the survey. However, the validity of our results is supported by their broad similarity to results presented from individual countries [11].
Conclusion
The ongoing SARS-CoV-2 outbreak has significantly affected all aspects of postgraduate radiology training across the ESR member countries. Identification of these affected areas will assist in the development and implementation of mitigation measures and the preparation for potential future resurgences of SARS-CoV-2 or similar situations.
Blue-light induced accumulation of reactive oxygen species is a consequence of the Drosophila cryptochrome photocycle
Cryptochromes are evolutionarily conserved blue-light absorbing flavoproteins which participate in many important cellular processes, including entrainment of the circadian clock in plants, Drosophila and humans. Drosophila melanogaster cryptochrome (DmCry) absorbs light through a flavin (FAD) cofactor that undergoes photoreduction to the anionic radical (FAD•-) redox state both in vitro and in vivo. However, recent efforts to link this photoconversion to the initiation of a biological response have remained controversial. Here, we show by kinetic modeling of the DmCry photocycle that the fluence dependence, quantum yield, and half-life of flavin redox state interconversion are consistent with the anionic radical (FAD•-) as the signaling state in vivo. We show by fluorescence detection techniques that illumination of purified DmCry results in enzymatic conversion of molecular oxygen (O2) to reactive oxygen species (ROS). We extend these observations in living cells to demonstrate transient formation of superoxide (O2•-), and accumulation of hydrogen peroxide (H2O2) in the nucleus of insect cell cultures upon DmCry illumination. These results define the kinetic parameters of the Drosophila cryptochrome photocycle and support light-driven electron transfer to the flavin in DmCry signaling. They furthermore raise the intriguing possibility that light-dependent formation of ROS as a byproduct of the cryptochrome photocycle may contribute to its signaling role.
Introduction
Cryptochromes are a family of blue-UV/A light absorbing flavoprotein receptors found throughout the biological kingdom [1][2][3]. They are implicated in the regulation of growth and development in plants, and in the entrainment of the circadian clock in animals [4,5]. A well-established biological function of Drosophila cryptochrome (DmCry) is to contribute towards setting the circadian clock in response to light [6]. It does so by binding to the Drosophila core clock protein Timeless (Tim) and the E3 ubiquitin ligase Jetlag in the presence of blue light. As a result, Tim is degraded and can no longer participate in its natural feedback loop involving interaction with the transcriptional activator complex Clock:Cycle (Clk:Cyc) [7]. In this way, the 24-hour internal oscillation is disturbed, and may even be completely stopped in constant light. DmCry has more recently been shown to have other functions independent of its role in the circadian clock, including direct light sensing in neurons [8] and response to stress [9].
The current paradigm for DmCry signaling is that DmCry undergoes a light-dependent conformational change which exposes binding sites for partner proteins such as Tim and Jetlag to initiate the signaling reaction. The crystal structure of full-length DmCry shows that a small (30 aa) C-terminal extension is folded against an N-terminal light sensing domain, in the pocket close to the flavin cofactor [10,11]. When this C-terminal extension is deleted, the partner proteins bind constitutively to the N-terminal domain of DmCry and the response therefore becomes constitutive. These observations indicate that the C-terminal domain of DmCry is released from the protein surface upon illumination, allowing partner proteins access to the N-terminal domain in a light-dependent manner [6,7]. A similar paradigm involving light-initiated conformational change appears to hold for cryptochrome responses in other systems [1-5].
The current challenge in the field is to define the nature of the photochemical reaction which gives rise to the conformational change initiating the signaling response. Like all cryptochromes, DmCry binds a light-sensing FAD cofactor within a hydrophobic pocket adjacent to the C-terminal domain of the protein [10,11]. The flavin redox state is in the oxidized form in the purified protein when maintained in the dark. Illumination results in photoreduction to the anionic FAD•- redox state [12-14]. These redox state transitions have been shown to accompany a light-induced conformational change and initiate biological signaling both in vitro [15] and in vivo [16]. Therefore, the anionic flavin semiquinone redox state was proposed as the active signaling conformation [15,16].
Controversy involving this mechanism revolved around the observation that mutants affected in electron transfer to the flavin in DmCry, which fail to photoreduce in vitro, nonetheless retained biological activity in vivo [13,17]. This was taken as evidence that formation of the anionic radical (FAD•-) redox state is not required for biological signaling. However, this apparent contradiction is resolved by the demonstration that mutants which fail to undergo photoreduction in vitro are nonetheless photoreduced in vivo [16,18,19], indicating that formation of the anionic radical redox state occurs by alternate electron transfer routes and can therefore initiate signaling. Further controversy has arisen on apparent methodological grounds [compare e.g. 15,20], and on whether redox changes could explain the light sensitivity of DmCry responses and the half-life of the signaling states in vitro [13,17]. This has prompted the suggestion of a mechanism of DmCry activation where the anionic radical (FAD•-), instead of oxidized FAD, may be the light-absorbing species, which undergoes some unspecified photoreaction [17,20].
Recently, it has been shown that plant cryptochromes produce reactive oxygen species (ROS) as a result of illumination [21,22]. This follows from observations in vitro that a consequence of the Arabidopsis cryptochrome flavin redox cycle is the formation of ROS and hydrogen peroxide (H2O2) in isolated proteins [23,24]. Specifically, Arabidopsis cry1 and cry2 both undergo flavin reduction from FADox to a mixture of radical (FADH•) and reduced (FADH-) redox states as a consequence of illumination. Upon return to darkness, these proteins are reoxidized to the FADox redox state by a process that releases superoxide and hydrogen peroxide both in vitro [23] and in vivo [21,22]. ROS accumulation occurs in the nucleus where cryptochromes are localized, in contrast to mitochondrial and cytoplasmic compartments which produce ROS via metabolic enzymes.
To help resolve existing controversies concerning DmCry activation and also to provide novel insights into the oxidative signaling mechanism of DmCry, we have here re-examined the DmCry photocycle of flavin reduction/reoxidation in detail by a kinetic modeling approach. This allowed us to calculate the quantum efficiency of photon absorption and show that this reaction has the light sensitivity to serve a signaling role. We further determined the half-life of the presumed signaling state (FAD•-) in vitro and show that it is consistent with published estimates of the lifetime of the signaling state of DmCry in vivo [17,20]. Finally, we have added a new dimension to the DmCry signaling paradigm by demonstrating the formation and accumulation of intracellular ROS as a result of illumination. These results suggest the possibility that direct enzymatic production of ROS by cryptochrome may represent an alternate oxidative signaling mechanism conserved across phylogenetic lines.
Protein samples and photoreduction experiments
These methods were used for results presented in Figs 1, 2 and 3. The full-length Drosophila cryptochrome cDNA sequence was cloned into the pAcHLT-A baculovirus transfer vector (BD Biosciences, San Jose, Ca.) in-frame with the N-terminal 6-His tag. The DmCry protein was expressed in Sf21 insect cell cultures by established methods and purified over an NTA nickel affinity column as previously described [25]. Photoreduction experiments were performed at 21˚C in a buffer of 50 mM phosphate, pH 7.5, and 10 mM β-mercaptoethanol. A control expression construct (Spa1) consisted of the full-length SPA1 cDNA [26] cloned into the pDEST10 baculovirus transfer vector (Thermo Fisher Scientific, Waltham, Ma.) and introduced for expression in Sf21 insect cells. Spa1 was chosen as it is involved in light signaling in plants but has no photoactive pigment and is not directly responsive to light [27].
Kinetic analysis
Kinetic analysis and numerical methods for determination of quantum efficiency and half-life were performed as described previously [28] using optical spectra from isolated DmCry. Details of the present analysis are included in the Supplementary Material (S1 Text).
Detection of ROS in purified protein samples
For determination of ROS, DmCry protein at a concentration of 30 μM in PBS at pH 7.4, in the absence of added reducing agent, was illuminated with blue light (3000 μmol m^-2 s^-1) for 30 minutes at 0˚C. Aliquots were taken at the indicated times and frozen in liquid nitrogen prior to ROS determination. H2O2 detection: 3 μl of protein sample was diluted into 0.3 ml of 50 mM sodium phosphate buffer at pH 7.4 and adjusted to a final concentration of 10 μM Amplex UltraRED (Invitrogen/Thermo Fisher Scientific, Waltham, Ma.) and 0.2 U of horseradish peroxidase (Sigma Aldrich, St Louis, Mo., USA). After 30 minutes of incubation in the dark, fluorescence was read in triplicate from each sample (100 μl volume per reading) in 96-well plates with a Cary Eclipse fluorescence spectrophotometer (Varian) at absorption 560 nm, emission 590 nm. Fluorescence units were converted to concentration of H2O2 by a standard curve of concentration vs. fluorescence units as described previously [21]. The H2O2 concentration displayed on the Y-axis of the graphical representation refers to the total concentration of H2O2 in the undiluted protein sample. ROS detection using dichlorofluorescein fluorescent substrate: transient formation of ROS was monitored by addition of 1 mM CM-H2DCFDA (5,6-chloromethyl-2,7-dichlorodihydrofluorescein diacetate; Molecular Probes, Life Technologies, Grand Island, NY, USA) to the protein samples immediately prior to illumination. In the text, we have abbreviated the name of this reagent to DCFH-DA. The fluorescence was read in triplicate from each sample (100 μl undiluted sample volume per reading) in 96-well plates with a Cary Eclipse fluorescence spectrophotometer (Varian) at excitation/emission of 490/530 nm.
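The standard-curve conversion from fluorescence units to H2O2 concentration can be sketched as follows. This is a minimal illustration assuming a linear calibration; the calibration points, the ~100-fold dilution factor (3 μl into 0.3 ml) and the example reading are placeholders, not the study's actual values.

```python
import numpy as np

# Hypothetical calibration points: fluorescence readings for known H2O2
# standards (the actual standard-curve values are not given in the text).
std_h2o2_uM = np.array([0.0, 0.5, 1.0, 2.0, 4.0])       # known [H2O2], uM
std_fluor = np.array([2.0, 55.0, 110.0, 216.0, 430.0])  # Amplex UltraRED units

# Fit a linear standard curve: fluorescence = slope * [H2O2] + intercept.
slope, intercept = np.polyfit(std_h2o2_uM, std_fluor, deg=1)

def fluorescence_to_h2o2(f_units, dilution_factor=100.0):
    """Convert a fluorescence reading into the H2O2 concentration (uM) of the
    undiluted protein sample (3 ul diluted into 0.3 ml ~ 100-fold dilution)."""
    conc_in_assay = (f_units - intercept) / slope
    return conc_in_assay * dilution_factor

# Example: a triplicate-averaged reading of 85 fluorescence units.
print(f"{fluorescence_to_h2o2(85.0):.0f} uM H2O2 in the undiluted sample")
```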
Detection of ROS in Sf21 insect cell culture
Insect cells expressing either DmCry or Spa1 expression constructs were harvested 72 hours post-infection and resuspended in PBS buffer (50 mM sodium phosphate, 150 mM NaCl, pH 7.4) at a final concentration of 2 × 10^5 cells/ml. DCFH-DA was then added to a final concentration of 1 mM prior to illumination at 22˚C for the indicated times and light qualities (Fig 4). Illumination was performed in 24-well microtitre plates placed directly under the light source for 10 minutes, with 1 ml of cell culture per well. Subsequent to illumination, cells were harvested, washed twice in PBS, then lysed in a final volume of 0.5 ml PBS with the addition of 0.1% Triton X-100. 80 μl aliquots of the whole cell lysate were transferred to individual wells of 96-well microtitre plates and measured at excitation/emission of 490/530 nm.

[Fig 1 legend: In the dark, the protein-bound cofactor (FADox) is shown in the oxidized redox state. Light absorption triggers flavin photoreduction [12-14] at a rate constant k. Reoxidation to the FADox state occurs spontaneously in the dark at a rate constant k_b. Possible changes in C-terminal conformation linked to redox state interconversion are diagrammed [15].]
Immunofluorescence labelling of DmCry
Sf21 cells incubated for 2 hours on glass coverslips were exposed to dark or blue light for 15 min, fixed with 2% paraformaldehyde for 10 min at room temperature (RT), permeabilized with 0.1% Triton X-100 and then incubated with an anti-DmCry rabbit polyclonal antibody [28,16] and an Alexa 488-conjugated anti-rabbit secondary antibody. Coverslips were mounted in Fluoroshield with DAPI (4',6'-diamino-2-phenylindole) and viewed with a Leica upright SP5 confocal microscope with a 63X objective. DAPI and Alexa 488 were excited at 405 and 488 nm wavelengths, respectively, and emission fluorescence intensities and DIC were detected using a photomultiplier between 498 and 561 nm and a transmission photomultiplier, respectively. Two channels were recorded sequentially at each z-step. Z series projections and merged images were produced using ImageJ software.

[Fig 2 legend: (B) The FADox concentration was obtained from the absorbance at 450 nm from panel A according to Eq. S7 (see S1 Text). The red triangles represent the experimental data, and the blue curve is the fit of the experimental data with the two-states reoxidation model (Eq. S8). From the fit, the reoxidation rate was k_b = 0.0021 s^-1 (half-life τ1/2 = 5.5 min); the goodness of the fit was excellent (R^2 ≈ 1). (C) Isolated purified DmCry protein was illuminated for 30 s at the indicated blue light fluence rates I; normalized absorption spectra are presented. (D) Calculated forward rate constant k versus photon fluence rate I (red triangles). For each I, the rate constant k was calculated by numerically solving the two-states kinetic equations (see S1 Text, Eq. S1), with the concentration of FADox obtained from panel C and the reoxidation rate k_b from panel B. k was fit as a function of I using the linear equation k = σI (blue curve). From the fit, the photoconversion cross section was σ = 9.2 × 10^-4 μmol^-1 m^2. From σ, the quantum yield ϕ was calculated according to σ = 2.3 ε_ox(450) ϕ, using the experimentally calculated extinction coefficient ε_ox(450) = 1130 mol^-1 m^2 (11300 M^-1 cm^-1), giving ϕ = 0.35. https://doi.org/10.1371/journal.pone.0171836.g002]
Intracellular localization of ROS
Sf21 living cells expressing DmCry or the control SPA1 construct were washed twice in PBS (pH 7.4) and incubated in PBS containing 58 μM DCFH-DA (Molecular Probes, Life Technologies, Grand Island, NY, USA) for 10 min in the dark, then exposed to blue light for 5 min and observed immediately, either between glass slide and coverslip with a Zeiss AxioImager.Z1/ApoTome microscope or in an observation chamber with an inverted Leica TCS SP5 microscope. Green fluorescence from DCFH-DA was excited at 488 nm. Zeiss AxioImager.Z1/ApoTome observations were made using a 10X objective. Emission fluorescence intensities were detected using the Zeiss filter set 38 Endow GFP shift free (EX BP 470/40, BS FT 495, EM BP 525/50) and differential interference contrast (DIC) with an Analy DIC TransLight. The images were digitally captured with a CCD camera (AxioCam MRm) using the Axiovision software (version 4.7.2, Carl Zeiss).
Inverted Leica TCS SP5 microscope observations were made using a 40x objective. Emission fluorescence intensities were detected using a photomultiplier between 498 and 561 nm, and DIC using a transmission photomultiplier. Z series projections were performed using ImageJ software (W. S. Rasband, ImageJ).
Kinetic modeling of the Drosophila cryptochrome photocycle
Purified preparations of DmCry protein have been shown to undergo a photoreduction reaction in vitro [12-14]. This involves transition from the FAD cofactor bound in the oxidized redox state (FADox) to the anionic radical (FAD•-). Upon return to darkness, the flavin spontaneously re-oxidizes in the presence of molecular oxygen to restore the resting (FADox) state, giving rise to a continuous photocycle under constant illumination (Fig 1). Therefore, the concentration of the FAD•- flavin radical redox state depends on the rate constants k and k_b according to the two-states kinetic model (see S1 Text in Supplementary Material). This light-dependent redox reaction has been linked to biological activity in a number of studies [15,16]. However, the kinetic parameters of the reaction (light sensitivity, lifetime of redox state intermediates, and quantum yield) have yet to be rigorously established.
We therefore first derived a two-states kinetic model of the DmCry photocycle. Photoreduction experiments were performed by illuminating purified DmCry protein samples at increasing intensities (photon fluence rates I) of blue light for 30 seconds in the presence of a mild reductant (10 mM β-mercaptoethanol) (Fig 2). Flavin reduction to the anionic radical (FAD•-) redox form could then be observed as a spectral decrease at 450 nm.
We then estimated the dark reoxidation rate k_b (dark reversion time) for reoxidation of the flavin from FAD•- back to FADox. For this experiment, DmCry protein samples were illuminated at maximum light intensity and then returned to darkness. Spectra were taken at defined dark intervals t_d (Fig 2A). From these data it was possible to estimate the reoxidation half-life (see [28] and S1 Text). Fig 2B reports the normalized FADox concentration calculated from panel (A) as a function of the dark time t_d (red triangles), together with the fit of the data with the two-states dark reoxidation model equation reported in the legend (Eq. S7) (blue curve). The dark reoxidation rate was k_b = 0.0021 s^-1, which gives a half-life of τ1/2 = 5.5 minutes.
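As a rough illustration of this two-states analysis, the dark-reversion fit can be reproduced in a few lines of Python. This is a sketch only: the data points below are synthetic values chosen to be consistent with the reported rate constant, not the measured spectra.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative dark intervals (s) and normalized FADox fractions recovered
# from A450 -- placeholder values, not the study's measurements.
t_dark = np.array([0, 60, 120, 240, 480, 900, 1800], dtype=float)
fad_ox = np.array([0.10, 0.21, 0.30, 0.46, 0.66, 0.87, 0.98])

def reoxidation(t, kb, f0):
    """Two-states dark reoxidation: FADox(t) = 1 - (1 - f0) * exp(-kb * t),
    where f0 is the FADox fraction remaining at the end of illumination."""
    return 1.0 - (1.0 - f0) * np.exp(-kb * t)

(kb, f0), _ = curve_fit(reoxidation, t_dark, fad_ox, p0=[0.002, 0.1])
print(f"k_b = {kb:.4f} s^-1, half-life = {np.log(2) / kb / 60:.1f} min")
# The paper reports k_b = 0.0021 s^-1, i.e. a half-life of ~5.5 min.
```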
Calculation of the quantum yield for flavin reduction was performed using the data obtained from the spectra reported in Fig 2A and 2B. A two-states kinetic model was used as described previously for Arabidopsis cry [28]; see also the detailed description of the methods in S1 Text. Fig 2D shows the forward rate constant k as a function of the blue light intensity (fluence rate) I used in panel (C) (red triangles), together with the linear fit k = σI of the data (blue curve). From the linear fit we estimated a photoconversion cross section of σ = 9.2 × 10^-4 μmol^-1 m^2, which allowed us to calculate a quantum yield of ϕ = 0.35 using an estimated molar extinction coefficient of ε_ox(450) = 11300 M^-1 cm^-1 [29] (for details of the calculations see [28] and S1 Text).
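The cross-section and quantum-yield arithmetic can be made explicit with a short sketch. The (I, k) pairs below are synthetic placeholders consistent with the reported σ; the only subtlety is the unit conversion from μmol^-1 m^2 to mol^-1 m^2.

```python
import numpy as np

# Illustrative fluence rates I (umol m^-2 s^-1) and forward rate constants
# k (s^-1) from the two-states model -- placeholders consistent with the
# reported cross section sigma = 9.2e-4 umol^-1 m^2.
I = np.array([10.0, 20.0, 40.0, 80.0, 160.0])
k = np.array([0.009, 0.019, 0.036, 0.074, 0.148])

# Least-squares linear fit through the origin: k = sigma * I.
sigma = np.sum(I * k) / np.sum(I * I)          # umol^-1 m^2

# Quantum yield from sigma = 2.3 * eps_ox(450) * phi, with
# eps_ox(450) = 1130 mol^-1 m^2 (equivalent to 11300 M^-1 cm^-1).
eps_ox = 1130.0                                 # mol^-1 m^2
phi = sigma * 1e6 / (2.3 * eps_ox)              # 1e6 converts umol^-1 to mol^-1
print(f"sigma = {sigma:.2e} umol^-1 m^2, phi = {phi:.2f}")  # phi ~ 0.35
```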
The quantum yield for flavin reduction of 0.35 is well within the range for biological signaling molecules and comparable with that reported for Arabidopsis cry2 of 0.19 [28]. The half-life of the FAD•- redox state is also comparable to that of Arabidopsis cry1 and cry2, where photoreduction leads to formation of the neutral radical (FADH•) redox state [28]. DmCry, by contrast, forms the charged anionic radical (FAD•-) both in vitro and in vivo [14,16]. These data suggest a significant stabilizing effect of the intraprotein environment on the flavin radical in DmCry.

DmCry illumination induces the formation of ROS

To test whether illumination of DmCry leads to hydrogen peroxide (H2O2) or other radical formation [23], aliquots of illuminated DmCry were removed at the indicated times and the concentration of H2O2 determined (Fig 3A). The concentration of H2O2 increased in a linear fashion over a time period of 30 min (Fig 3A).

To directly assay for short-lived ROS such as O2•- or other intermediate ROS, we further analysed the protein samples with a general indicator of ROS formation, the fluorescent probe DCFH-DA [30]. In this assay, the fluorescent substrate DCFH-DA was added to the protein immediately prior to illumination (Fig 3B). Aliquots were analysed at the given time points for fluorescence resulting from the formation of ROS. Illumination indeed caused a linear increase in signal over time for DmCry (Fig 3B), whereas control protein samples at the same concentration, such as BSA, showed no increase (not shown). We conclude that the signal is due largely to the production of ROS as a result of DmCry flavin reoxidation.
DmCry illumination induces the formation of ROS in living cells
To determine whether DmCry illumination also leads to the induction of ROS in living cells, we analysed Sf21 insect cell cultures expressing recombinant DmCry from baculovirus expression constructs. For this assay, the fluorescent substrate DCFH-DA was added to the cell incubation medium prior to illumination (see Methods). As a negative control, we used Sf21 insect cells expressing a different construct lacking photoactive pigments, namely SPA1 (Suppressor of Phy A), a plant protein that is implicated in light responsivity in plants but is not photochemically active [26,27]. This control was used to correct for possible non-specific effects on ROS induction, since viral infection and recombinant protein expression could themselves initiate a stress response unrelated to DmCry activation. The results showed that illumination of DmCry-expressing cell cultures induced a clear increase in ROS production compared to control cell cultures (Fig 4A). This effect was observable at blue light fluence rates as low as 40 μmol m^-2 s^-1 and increased with increasing illumination (Fig 4A). At very high light intensities (160 μmol m^-2 s^-1), ROS formation also increased modestly in the negative control (SPA1-expressing) cell cultures, indicating non-specific effects of blue light illumination on cellular stress. Induction of ROS was not observed in darkness or red light (Fig 4B), consistent with a requirement for activation of cryptochrome.
DmCry illumination induces formation of ROS in the nucleus
Information on the localization of ROS was obtained through staining of DmCry-expressing cell cultures with DCFH-DA during illumination (see Methods). After 10 minutes of blue light illumination, DmCry-expressing cells showed a significant increase in fluorescence as compared to control cells (expressing the SPA1 construct) in blue light, but not in darkness (Fig 5A). This validates our biochemical data (Fig 4) and confirms that rapid ROS induction is a consequence of DmCry illumination in vivo. To obtain details of intracellular localization, confocal microscopy was used after blue light exposure (Fig 5B). Diffuse fluorescence could be seen throughout the cell but was also localized within the nuclear compartment. Particularly pronounced vesicular structures are likely endoplasmic reticulum surrounding the nucleus, and may arise as a consequence of the cell's attempts to remove excess ROS by secretion into the extracellular medium.
To obtain information concerning the localization of DmCry, we performed immunostaining with anti-DmCry antibody. In dark-adapted cells, staining can be seen primarily in the cytosol and is largely absent in the nucleus (Fig 5C), consistent with DmCry localization in the cytosol. After 15 min illumination with 80 μmol m^-2 s^-1 blue light, immunostaining of these same cell cultures showed a significant increase in cryptochrome localization in the nucleus. These data indicate that DmCry is localized primarily in the cytoplasmic compartment in the dark, but moves into the nuclear compartment upon activation. Similar results were previously obtained in the case of Arabidopsis cry2 [22].

[Fig 5 legend (partial): …in the insect cells exposed to blue light. Living Sf21 cells stably expressing DmCRY were treated with DCFH-DA, exposed to dark or blue light and viewed by (A) a Zeiss AxioImager.Z1/ApoTome using a 10x objective (bar 100 μm) or (B) an inverted Leica TCS SP5 microscope. Images show a single confocal z-section crossing the nucleus. Diffuse fluorescent ROS staining can be seen in nucleus and cytoplasm; punctate and intense fluorescent ROS staining also colocalizes with the ER (endoplasmic reticulum) surrounding the nucleus. Scale bar: 10 μm. (C) Sf21 cells stably expressing DmCry were fixed with paraformaldehyde, permeabilized with Triton X-100, incubated with an anti-DmCry1 rabbit polyclonal antibody and an Alexa 488-conjugated anti-rabbit secondary antibody; DNA was stained with 4',6'-diamino-2-phenylindole (DAPI). Cells were observed with a Leica TCS SP5 confocal microscope. Images show projections of optical sections crossing the nucleus. Scale bar, 10 μm. https://doi.org/10.1371/journal.pone.0171836.g005]
In sum, these data show colocalization of ROS biosynthesis with DmCry protein expression, and are thereby consistent with primary synthesis of ROS by cryptochromes in living cells.
Discussion
One of the key questions in DmCry activation concerns the nature of the photoreactions that are implicated in biological activity. Numerous studies have linked redox state interconversion to signaling in DmCry, including action spectroscopy showing oxidized flavin as the photosensor in the dark-adapted state [16,31], the observation that flavin photoreduction accompanies light activation [12,14], and the demonstration that redox change of the flavin in vitro can induce conformational change and productive interaction with substrate proteins even in the absence of light [15].
However, there has also been a great deal of controversy concerning this mechanism. Much of the confusion has resulted from the observation that mutants of DmCry that do not undergo flavin reduction in vitro [13,17,20] have been reported to retain biological activity in vivo, thereby disputing that formation of the flavin radical is required for biological function. This observation can be explained by the fact that cryptochrome mutants defective in electron transfer in vitro are nevertheless photoreduced in vivo by alternate routes [16,18,19]. Therefore, they indeed form the flavin radical redox state in response to light in vivo, and their observed biological activity is expected. Confusion concerning the role of flavin reduction has been further exacerbated by studies that incorrectly analysed cryptochrome mutant phenotypes, either by scoring constitutive dark phenotypes as 'light activated' or by performing experiments far above saturating light intensities such that differential responsivity was missed (for a full discussion of the recent literature, see [15]). Nonetheless, a signaling mechanism based on activation by redox state interconversion has not been conclusively demonstrated, and one of the weaknesses has been that the DmCry photocycle has not been rigorously characterized and correlated to DmCry signaling events.
For this reason, one of the main goals of the present study was to ascertain whether a photocycle based on flavin reduction (Fig 1) is indeed compatible with known characteristics of in vivo signaling by DmCry. It should be cautioned in this context that there can be considerable variation in reported light sensitivity of DmCry dependent phenotypes in vivo, which is not necessarily linked to the actual light sensitivity of the receptor. For instance, light-induced proteolysis of DmCry and signaling in neuronal firing [8,17] require much higher apparent irradiance than phase shifting of the circadian clock [32]. Such variability is a classic feature of biological signaling reactions, which result from signal amplification events that are far downstream of the receptor [33]. Nonetheless, the quantum yield obtained for flavin reduction of 0.35 is well within the range for biological signaling molecules and is comparable with the quantum efficiency of both Arabidopsis cry2 [28] and phot1 [34], which are both sensitive plant flavoprotein receptors operating in a low fluence range. The fact that flavin reduction is efficient and occurs in response to relatively low light intensity (Fig 2) is consistent with a signaling role in vivo.
Another kinetic feature of the DmCry photocycle consistent with a biological signaling role is the relatively long half-life (5.5 min) of the anionic radical (FAD•-) redox state. This indicates that, if the anionic radical redox state is indeed the activated signaling state of DmCry, then biological activity is predicted to persist for several minutes after the end of illumination. Indeed, proteolytic experiments have shown that even a single flash of light of less than 1 msec is sufficient to induce biological activation of DmCry, which however is only apparent after a delay of several minutes in the ensuing dark interval [13]. Other studies suggest a half-life of up to 15 minutes for the signaling state of DmCry in vivo, as estimated by the lifetime of the activated conformational state [17], consistent with the extended lifetime of the anionic radical (FAD•-) redox state shown in this study.
The second finding presented here is the demonstration that ROS are formed upon photoactivation of DmCry. A possible mechanism for the one-electron reduction of O2 by FAD•- and the subsequent production of H2O2 is shown in the reaction scheme (S1 Fig) included in the Supporting Information. The dismutation of superoxide would be facilitated at pH 7.4 by the presence of small amounts of hydroperoxy radical, leading to H2O2 and O2 as final products.
In this reaction scheme (S1 Fig), every time the flavin in DmCry becomes reduced by light, one molecule of H2O2 is formed during the subsequent reoxidation step. The rate of this reoxidation (k_b; Fig 1 of the main text) is not dependent on light and so proceeds at a constant rate during illumination. Thus, illumination of DmCry should result in synthesis of ROS in proportion to the concentration of the protein, the extent of flavin reduction (dependent on the light intensity), and the overall illumination time. In keeping with this expectation, the concentration of H2O2 formed by DmCry in vitro increased linearly over time and was indeed proportional to the protein concentration (see Fig 3A, which reports the concentration of H2O2 formed over a 30 minute time period by a protein sample at a 30 μM concentration). Furthermore, the DCFH-DA fluorescent substrate, which detects O2•- in addition to other ROS products, likewise showed a linear increase in ROS subsequent to illumination of the isolated protein (Fig 3B). Significantly, light-induced formation of ROS could also be detected in living cells, and colocalized with DmCry in both cellular and nuclear compartments of Sf21 insect cells (Fig 5). Since ROS production is a direct enzymatic property of DmCry illumination, irrespective of cellular partner proteins and cofactors (see Fig 3), it should occur even at the lower DmCry protein concentrations in the natural cellular environment. These results, taken together with prior in-cell EPR spectroscopy [16], show that cycles of flavin reduction/reoxidation must occur in vivo in response to continuous DmCry illumination under physiological conditions. The DmCry flavin must furthermore be in the oxidized (FADox) redox state in the dark for this to occur, which contradicts the alternate suggestion that the anionic radical redox state may represent the resting, dark-adapted state of DmCry [13,20] and undergoes some unspecified photoreaction.
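This proportionality can be made concrete with a minimal simulation of the photocycle under constant light, assuming one H2O2 released per reoxidation event; the rate constants are those derived above, while the fluence rate and protein concentration chosen here are illustrative.

```python
import numpy as np

# Euler integration of the two-states photocycle under constant blue light.
sigma, kb = 9.2e-4, 0.0021    # cross section (umol^-1 m^2), reoxidation rate (s^-1)
I = 80.0                      # illustrative fluence rate (umol m^-2 s^-1)
k = sigma * I                 # forward (photoreduction) rate constant (s^-1)
cry_total = 30.0              # total DmCry, uM (as in the in vitro assay)

dt, t_end = 1.0, 1800.0       # 1 s steps over 30 min
fad_ox, h2o2 = cry_total, 0.0
for _ in range(int(t_end / dt)):
    radical = cry_total - fad_ox      # FAD anion radical pool
    d_red = k * fad_ox * dt           # light-driven photoreduction
    d_reox = kb * radical * dt        # dark reoxidation, assumed to release H2O2
    fad_ox += d_reox - d_red
    h2o2 += d_reox                    # one H2O2 per reoxidation event

print(f"steady-state FADox = {fad_ox:.2f} uM, H2O2 after 30 min = {h2o2:.0f} uM")
```

After a brief transient, the radical pool reaches a steady state and H2O2 accumulates linearly at a rate proportional to both the protein concentration and the degree of flavin reduction, matching the behaviour seen in Fig 3A.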
Finally, our results present the intriguing possibility that enzymatic biosynthesis of ROS by DmCry may contribute to its signaling role. It should be emphasized that the conformational change in DmCry triggered by flavin reduction occurs well before the reverse (reoxidation) reaction that generates ROS, and therefore it may be difficult to determine which signaling effects are due solely to ROS formation. Nonetheless, ROS in and of itself is an important regulator of cellular stress and ageing across phylogenetic lines [35]. A recent report in Drosophila indicates that restoring normal levels of CRY in ageing flies restores normal rhythmicity and improves longevity [36], whereas novel DmCry responses have been linked to regulation of genes implicated in stress response and ROS signaling [37]. It is not excluded that some of these effects may be due to activation of redox-sensitive transcription factors by ROS synthesized by DmCry. Alternatively, a recent intriguing report has shown modulation of redox-activated potassium channels of the plasma membrane by Drosophila cryptochromes. This suggests a possible ROS signaling role of DmCry that may involve ROS-dependent activation of a cytosolic redox-sensitive substrate [38].
Inter-rater reliability of seven neurolaryngologists in laryngeal EMG signal interpretation
Purpose Laryngeal electromyography (LEMG) has been considered the gold standard in diagnostics of vocal fold movement impairment, but is still not commonly implemented in clinical routine. Since the interpretation of LEMG signals (LEMGs) is often a subjective and semi-quantitative matter, the goal of this study was to evaluate the inter-rater reliability of neurolaryngologists on LEMGs of volitional muscle activity. Methods For this study, 52 representative LEMGs of 371 LEMG datasets were selected from a multicenter registry for a blinded evaluation by 7 experienced members of the neurolaryngology working group of the European Laryngological Society (ELS). For the measurement of the observer agreement between two raters, Cohen's Kappa statistic was calculated. For the interpretation of agreements of diagnoses among the seven examiners, we used the Fleiss' Kappa statistic. Results When focusing on the categories "no activity", "single fiber pattern", and "strongly decreased recruitment pattern", the inter-rater agreement varied from Cohen's Kappa values between 0.48 and 0.84, indicating moderate to near-perfect agreement between the rater pairs. Calculating with Fleiss' Kappa, a value of 0.61 showed good agreement among the seven raters. For the rating categories, the Fleiss' Kappa value ranged from 0.52 to 0.74, which also indicates good agreement. Conclusion A good inter-rater agreement between the participating neurolaryngologists was achieved in the interpretation of LEMGs. More instructional courses should be offered to broadly implement LEMG as a reliable diagnostic tool in evaluating vocal fold movement disorders in clinical routine and to develop future algorithms for therapy and computer-assisted examination.
To date, LEMG is still not commonly used in clinical routine in most ENT or phoniatric departments. Internationally, clinicians still mainly use laryngoscopy and stroboscopy for diagnosing vocal fold paralysis/paresis. According to Wu et al. [9], only 1.7% of otolaryngologists in the US reported using LEMG for the diagnosis of vocal fold paresis (VFP). A similar result was presented for Europe by Volk et al. [10], with only 3.6% of respondents indicating that LEMG is the most important tool for diagnosing VFP. This may be due to the lack of agreement on methodology, interpretation, validity, and clinical application of LEMG [11][12][13][14][15][16].
To minimize these problems, guidelines for using LEMG have been developed and complemented with workshops on LEMG. Partnerships with neurological departments have also been established to share knowledge and join efforts to promote implementation into clinical routine. A working group on neurolaryngology of the European Laryngological Society (ELS) is evaluating existing guidelines for LEMG performance and identifying issues requiring further clarification [12]. The primary assignment of the working group was to teach the key techniques of LEMG. The group published a proposal for a set of recommendations for LEMG and initiated a registry with the aim of collecting LEMG data recorded according to these published recommendations [12,17]. Meetings and workshops have been organized for participants of the registry and other professionals interested in LEMG and neurolaryngology with the aim of providing a sufficient level of standardization and data quality.
Critical clinical information on the electrophysiologic status of the larynx can be reliably obtained by LEMG [18]. Beside the diagnosis of most neuromuscular diseases of the larynx, some clinicians consider LEMG as the required diagnostic tool for certain neuromuscular disorders of the larynx, such as VFP [18].
The initial diagnosis of respiratory immobility of the vocal fold is made when a reduction or an absence of abduction or adduction of the true vocal fold is seen during laryngeal examination. Laryngeal paralysis is the most frequent cause of vocal fold immobility. For its diagnosis, LEMG is an important diagnostic tool, particularly when performed 10-14 days after the onset of vocal fold immobility [13]. A diagnosis of arytenoid fixation is based on normal electrical activity patterns in the LEMG [19], while abnormal electrical activity patterns, including patterns of denervation or reinnervation, support the diagnosis of vocal fold paralysis [20].
Interpretation of LEMG signals comprises the recognition and evaluation of patterns such as insertion activity, spontaneous activity, fibrillations, positive sharp waves, polyphasic action potentials and motor unit potential (MUP) recruitment. The appearance and interpretation of these signals also depend on the grade of volitional agonistic and antagonistic muscle activation during the evaluation. The absence of spontaneous activity, fibrillations or positive sharp waves, together with the presence of good motor recruitment, with or without polyphasic action potentials, in LEMGs are signs of an excellent prognosis.
To fully encompass the cause of vocal fold disorders using LEMG, signal recordings of the thyroarytenoid (TA), cricothyroid (CT) and posterior cricoarytenoid (PCA) muscles are recommended for evaluation, the TA and PCA being innervated by the recurrent laryngeal nerve (RLN).
In the case of RLN injury, the larynx is rarely totally denervated or paralyzed. Notably, adductor and abductor axons, as well as sensory and autonomic fibers, run interwoven within the common trunk of the RLN [21]. When the laryngeal nerve is injured, regeneration of these nerve components takes place to various degrees. Improper axonal redirection of nerve fibers into an inappropriate muscle is possible and may occur in nerve trunks that supply multiple muscles [22]. This abnormal reinnervation is called synkinesis [23]. Indeed, electromyography (EMG) typically reveals evidence of muscle activity despite the functional finding of immobility [24,25]. Crumley has extensively discussed the imperfect regenerative ability of the RLN [21,23,26,27].
The neurolaryngologist can draw conclusions on the functionality of the axons and neuromuscular junctions by interpreting the LEMG signals acquired by needle electrodes placed in each target muscle tested. However, in the absence of reliable computer-assisted signal quantification methods, the interpretation of LEMGs remains based on subjective recognition of descriptive characteristics by each individual examiner. Thus, since the interpretation of LEMG is partly a subjective matter and likely depends on the training and experience level of the rater, the inter-rater agreement on diagnosing LEMGs is of particular interest and is the objective of the present study.
In the first evaluation step, the examiners analyzed and classified the selected LEMGs. In the second step, the classifications of the examiners were first tested against each other using Cohen's Kappa. Then, the results of the examiners' evaluations were analyzed using Fleiss' Kappa.
Historically, percent agreement (number of agreements on a rating/total number of ratings) was used to determine inter-rater reliability [28]. However, chance agreement due to raters guessing is possible. To take this element of chance into account, in 1960 Jacob Cohen proposed the kappa statistic to provide a more accurate measurement of the reliability between two raters making decisions about how a particular unit of analysis should be categorized. Cohen's Kappa measures the percentage of agreement between two raters and calculates the degree to which agreement can be attributed to chance [29]. For assessing the observer agreement between more than two raters, Joseph Fleiss proposed a generalization of unweighted kappa [30]. Of note, Fleiss' Kappa, one of the most common indices for quantifying multiple-rater agreement [31], is an extension of William Scott's π index [32,33].
Methods
From May 2012 to March 2014, laryngologists from 14 different European clinical departments with special interest in neurolaryngology joined a multicenter registry to collect LEMG datasets, and to learn more about the indications for performing LEMG and the interpretation of the results. The local ethics committees gave approval in all participating hospitals (Ethical Committee of the University Department of Jena, No. 5145-04/17). The departments had the possibility to send staff experts to perform LEMG together [34].
For this study, seven experienced neurolaryngologists from Germany and Austria (five otolaryngologists and two phoniatricians) were selected to evaluate pre-recorded LEMG data according to the guidelines of the European Laryngological Society. Only signal recordings of maximum volitional activity of single muscles, which had been acquired during agonistic maneuvers, were included in this study, while evaluation of possible synkinetic reinnervation of several muscles was not part of the study.
From a multicenter LEMG registry consisting of 371 LEMG datasets, 52 representative LEMGs were selected, as not all LEMGs in the registry were usable for the study due to insufficient length and shape of the recordings. Of the 52 selected LEMGs, 26 were recordings of the thyroarytenoid muscle (TA), 21 of the posterior cricoarytenoid muscle (PCA) and 5 of the cricothyroid muscle (CT). The evaluation of the selected LEMGs was blinded, since the examiners had no knowledge of the original classification of the selected LEMGs.
In this evaluation study, Cohen's Kappa was used to measure the agreement between two raters who each classify N LEMG samples into C mutually exclusive categories. Cohen's Kappa statistic measures inter-rater reliability, i.e., the degree to which different raters assign the same score to the same data item.
The kappa statistic varies from 0 to 1. The kappa results could be interpreted as shown in Table 1.
To calculate Cohen's Kappa, the following formula was used:

$$\kappa = \frac{P_a - P_e}{1 - P_e}$$

where $P_a$ represents the actual observed proportion of agreement and $P_e$ the proportion of agreement expected by chance. $P_a$ is calculated as the number of LEMG diagnoses in agreement divided by the total number of subjects (LEMG samples). Since the kappa is based on the chi-square table, the value of $P_e$ can be calculated from the marginal category proportions of the two raters [35]:

$$P_e = \sum_{j} p_{1j}\, p_{2j}$$

where $p_{1j}$ and $p_{2j}$ denote the proportions of samples assigned to category $j$ by the first and the second rater, respectively. Since Cohen's Kappa is only suitable for evaluating the inter-rater reliability between two raters, we further used Fleiss' Kappa [30] to obtain values for interpreting the agreement of diagnoses among the 7 examiners for their expert opinions on the selected 52 LEMG samples.
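For illustration, Cohen's Kappa as defined above can be computed directly in a few lines of Python; the toy ratings below are hypothetical and not taken from the registry.

```python
import numpy as np

def cohens_kappa(rater_a, rater_b, categories):
    """Cohen's kappa for two raters: k = (Pa - Pe) / (1 - Pe), where Pa is the
    observed proportion of agreement and Pe the chance agreement computed
    from each rater's marginal category proportions."""
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    p_a = np.mean(a == b)
    p_e = sum(np.mean(a == c) * np.mean(b == c) for c in categories)
    return (p_a - p_e) / (1.0 - p_e)

# Toy example with the study's four categories (0 = no activity/electric
# silence, 1 = single fiber activity, 2 = strongly decreased recruitment,
# 3 = mildly decreased recruitment):
r1 = [0, 1, 2, 2, 3, 0, 1, 3, 2, 0]
r2 = [0, 1, 2, 3, 3, 0, 2, 3, 2, 0]
print(round(cohens_kappa(r1, r2, categories=range(4)), 2))  # ~0.73
```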
In our study, we had N LEMG samples and r ratings per subject. All raters had to assign each LEMG sample to one of the C mutually exclusive categories. The LEMG samples are represented by the subscript i, where i = 1, …, 52, and the categories of the scale by the subscript j, where j = 1, …, 4.
The number of raters who assigned the ith LEMG sample to the jth category was defined as $r_{ij}$. The proportion of all assignments to the jth category was defined as $p_j$, which according to Scott [32] and Fleiss [30] is

$$p_j = \frac{1}{N r} \sum_{i=1}^{N} r_{ij}$$

The proportion of pairs of raters agreeing on the ith subject was defined as $P_i$, which is

$$P_i = \frac{1}{r(r-1)} \left( \sum_{j=1}^{C} r_{ij}^2 - r \right)$$

The overall extent of agreement, measured by the mean of the $P_i$ as proposed by Fleiss [30] and Fleiss et al. [31], is therefore

$$\bar{P} = \frac{1}{N} \sum_{i=1}^{N} P_i$$
The mean proportion of agreement expected by chance, as proposed by Scott [32] and Fleiss [30],

$$\bar{P}_e = \sum_{j=1}^{C} p_j^2$$

measures the degree of agreement based on chance. As suggested by Fleiss [30], we obtained the kappa statistic by correcting the overall extent of agreement for the mean proportion of agreement based on chance and normalizing:

$$\kappa = \frac{\bar{P} - \bar{P}_e}{1 - \bar{P}_e}$$

To measure the extent of agreement beyond chance in assignment to category j, as proposed by Fleiss [30], the following formula was used:

$$\kappa_j = 1 - \frac{\sum_{i=1}^{N} r_{ij}\,(r - r_{ij})}{N\, r\, (r-1)\, p_j\, (1 - p_j)}$$

For the interpretation of the kappa coefficient, Fleiss proposed the categories: poor agreement (k_Fleiss < 0.40), good agreement (k_Fleiss between 0.40 and 0.75) and excellent agreement (k_Fleiss > 0.75) [30].
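A compact implementation of this computation is sketched below; the count matrix is a hypothetical toy example, not the study's 52 × 4 rating table.

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for an (N subjects x C categories) matrix of rating
    counts, with r raters per subject (each row must sum to r)."""
    counts = np.asarray(counts, dtype=float)
    n_subjects = counts.shape[0]
    r = counts[0].sum()
    p_j = counts.sum(axis=0) / (n_subjects * r)            # category proportions
    p_i = (np.sum(counts**2, axis=1) - r) / (r * (r - 1))  # per-subject agreement
    p_bar = p_i.mean()                                     # overall agreement
    p_e = np.sum(p_j**2)                                   # chance agreement
    return (p_bar - p_e) / (1.0 - p_e)

# Toy example: 5 LEMG samples, 7 raters, 4 categories (counts per category).
counts = [[7, 0, 0, 0],
          [0, 5, 2, 0],
          [1, 1, 5, 0],
          [0, 0, 2, 5],
          [6, 1, 0, 0]]
print(round(fleiss_kappa(counts), 2))  # ~0.51
```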
Results
To interpret the inter-rater reliability between two raters, we calculated Cohen's Kappa. The kappa results for the seven examiners compared against each other can be found in Table 2.
As shown in Table 2, when comparing the results of the raters against each other, the Cohen's Kappa value ranges from 0.48 to 0.84, i.e., from moderate to near-perfect agreement. When calculating Cohen's Kappa for all rater pairs, 42.86% achieved moderate agreement and 52.38% substantial agreement, whereas 4.76% of the rater pairs reached near-perfect agreement.
The inter-rater agreement of rater pairs for each category is presented in Table 3 ("Category reliability between rater pairs").
For the category "no activity/electric silence", the kappa value ranges from 0.55 to 0.96, i.e., from moderate to near-perfect agreement. The kappa values for the categories "single fiber activity" and "strongly decreased recruitment pattern" range from 0.30 to 0.87 and from 0.39 to 0.89, respectively, corresponding for both categories to a range from fair to near-perfect agreement. As for the category "mildly decreased recruitment pattern", the kappa value lies between 0.42 and 1, i.e., between moderate and perfect agreement.
In other words, assessing the inter-rater agreement of rater pairs regarding the given categories using Cohen's Kappa, in the category "no activity/electric silence" 14.29% of rater pairs achieved moderate agreement, 38.09% substantial agreement and 47.62% near-perfect agreement. In the category "single fiber activity", fair agreement was achieved by 23.81%, moderate agreement by 47.62%, substantial agreement by 19.05% and near-perfect agreement by 9.52% of rater pairs. For the category "strongly decreased recruitment pattern", 14.29% of rater pairs achieved fair agreement, 57.14% moderate agreement, 23.81% substantial agreement and 4.76% near-perfect agreement. And in the category "mildly decreased recruitment pattern", 23.81% of rater pairs achieved moderate agreement, 52.38% substantial agreement, 19.05% near-perfect agreement and 4.76% perfect agreement.
As seen in Table 4, we had 52 LEMG samples in our study, with seven ratings for each LEMG sample. Thus, for the evaluation of overall agreement among the seven examiners, we used the formula described by Fleiss to calculate the kappa value and obtained a result of 0.61, indicating good agreement among the seven raters in their expert opinions on the selected LEMGs.
We also measured the extent of agreement beyond chance in assignment to category j, as proposed by Fleiss [30]. In all rating categories, the seven examiners likewise achieved good agreement, with Fleiss' Kappa values ranging from 0.52 to 0.74. In the rating category "no activity/electric silence", the seven examiners achieved almost excellent agreement. These values are detailed in Table 5. When analyzing the differences between "strongly decreased recruitment pattern", "mildly decreased recruitment pattern", and "normal/dense recruitment pattern", the inter-rater reliability was much worse.
Discussion
VFP accounts for an important part of the clinical workload in an ENT department. In a university department in Germany, Austria and Switzerland, VFP was diagnosed eight times per month, which emphasizes the magnitude of the problem in clinical routine [10]. Wu and Sulica reported the exact same prevalence for US American laryngology experts [9]. In the daily clinical examination of voice disorders, laryngoscopic or videostroboscopic examination is still the most frequently used diagnostic method, although LEMG has been recognized as a valuable diagnostic tool for more than 60 years, especially in differentiating neurogenic from structural causes of vocal fold immobility. Although LEMG is the best tool for diagnosing laryngeal paresis objectively and possesses a high predictive value for an outcome of VFP with poor prognosis, many laryngologists still do not routinely use it. The causes might be the lack of agreement on methodology, interpretation, validity and clinical application of LEMG [10].
In this study, we achieved a reasonable inter-rater reliability among rater pairs and among all seven raters in general, even though, using Cohen's Kappa, 23.81% and 14.29% of rater pairs achieved only fair agreement in the categories "single fiber activity" and "strongly decreased recruitment pattern", respectively. Assessing the inter-rater reliability among the 7 examiners using Fleiss' Kappa showed good agreement among the raters. The best agreement among the raters was observed in the rating category "no activity/electric silence", while the least agreement was observed in the rating category "single fiber activity". This might be attributed to the more difficult differentiation of the latter signal pattern or to an insufficiently precise definition. The training level of the raters could also explain differences in interpretation. Though the inter-rater agreement was acceptable, this indicates that the way LEMG is interpreted is still imperfect. To date, most classifications describing spontaneous and voluntary EMG characteristics are of a descriptive or semi-quantitative nature [17]. Automated signal pattern recognition by validated software algorithms is requested, but not yet well established for LEMG [36]. Thus, the interpretation of LEMG findings is still considered subjective, which might account for the different interpretations of the LEMG findings among the raters [37]. LEMG is a valuable diagnostic tool for investigating the causes of vocal fold immobility and estimating the degree of laryngeal nerve damage in laryngeal paralysis. The laryngologist can use this information to make more rational decisions regarding the type and timing of phonosurgery in patients with laryngeal paralysis [37]. Furthermore, patient counseling on novel therapy options can be performed in a more sophisticated manner if the prognosis for nerve regeneration and restoration of vocal fold function can be estimated by the examiner/physician [38].
For example, the detection of laryngeal synkinesis is of importance for alternative therapy concepts like botulinum toxin injection, electrical laryngeal stimulation or laryngeal pacing [39]. Botulinum toxin might weaken the M. thyroarytaenoideus in episodic dyspnoea attacks. Electrical stimulation may promote the specificity of reinnervation of denervated laryngeal muscles. This is an important finding, since 70% of patients with bilateral VFP become synkinetically paralyzed despite successful reinnervation [40].
LEMG is also the only method to show if a subtle vocal fold motion asymmetry is due to a neurologic insult and affords information regarding the sidedness of the abnormality, which may not be obvious from laryngoscopic examination [41].
In summary, a good inter-rater agreement between the participating neurolaryngologists was achieved in the interpretation of LEMGs. For further improvement, the provision of refined definitions of the LEMG rating categories is recommended. We believe that precise interpretation of LEMG signals provides key understanding of the spectrum of neurogenic causes of vocal fold movement impairment and consequently plays a major part in decisions on current and emerging therapeutic approaches. Expert agreement on signal interpretation is also needed to establish a solid basis for the development of software-based pattern recognition algorithms that might simplify and, therefore, encourage and spread the clinical application of this valuable diagnostic tool. However, more opportunities for training and workshops should be offered, and work on objective quantification methods should be encouraged. A comprehensive network of applied clinical LEMG diagnostic routine should be striven for in ENT and phoniatric departments to provide the best basis for therapeutic decisions.
Conclusion
Overall, LEMG is a useful tool for the advanced diagnostic evaluation of laryngeal pareses and can play a key role in identifying the optimum therapeutic spectrum for each individual patient. Due to the preceding training of the raters, an acceptable inter-rater agreement could be achieved. Yet, to further improve inter-rater agreement among neurolaryngologists and encourage a broader use of LEMG as a diagnostic tool, more instructional and hands-on courses should be offered. Also, as LEMG interpretation is still semi-quantitative and subjective, more experience is needed to achieve better inter-rater agreement, refine the method, establish clearer definitions of the rating categories and provide diagnostic criteria for future diagnostic algorithms for computer-assisted examinations. In the near future, LEMG should become one of the gold-standard diagnostic tools for the examination of vocal fold movement disorders.
First descriptions of copepodid stages, sexual dimorphism and intraspecific variability of Mesocletodes Sars, 1909 (Copepoda, Harpacticoida, Argestidae), including the description of a new species with broad abyssal distribution
Abstract Mesocletodes Sars, 1909a encompasses 37 species to date. Initial evidence on intraspecific variability and sexual dimorphism has been verified for 77 specimens of Mesocletodes elmari sp. n. from various deep-sea regions, and ontogenetic development has been traced for the first time. Apomorphies are a strong spinule-like pinna on the mx seta that is fused to the basis, P2–P4 exp3 proximal outer seta lost, P1–P4 enp2 extremely elongated, furcal rami elongated, female body of prickly appearance, female P2–P4 enp2 proximal inner seta lost. Intraspecific variability involves spinulation, ornamentation and size of the body and setation and spinulation of pereiopods. Sexually dimorphic modifications of adult females include prickly appearance of the body, P1 enp exceeds exp in length, P1 coxa externally broadened, seta of basis arising from prominent protrusion, hyaline frills of body somites ornate. Sexual dimorphism in adult males is expressed in smaller body size, haplocer A1, 2 inner setae on P2–P4 enp2 and on P5 exp, P5 basendopodal lobe with 2 setae. Some modifications allow sexing of copepodid stages. The female A1 is fully developed in CV, the male A1 undergoes extensive modifications at the last molt. P1–P4 are fully developed in CV. Mesocletodes faroerensis and Mesocletodes thielei lack apomorphies of Mesocletodes and are excluded.
Introduction
Expeditions to the Southeast Atlantic (DIVA-1 [Balzer et al. 2006], DIVA-2 [Türkay and Pätzold 2009] and part of ANDEEP III [Fahrbach 2006]), the Southern Ocean (ANDEEP I and II [Fütterer et al. 2003]), the South Indian Ocean (CROZEX [Pollard and Sanders 2006]), the central Pacific (NODINAUT [Galéron and Fabri 2004]) and the North Atlantic (Porcupine Abyssal Plain, PAP [see Kalogeropoulou et al. 2010 for summary] and the Great Meteor Bank [Pfannkuche et al. 2000]) (Fig. 1) provided numerous specimens of the genus Mesocletodes Sars, 1909a. Belonging to the family Argestidae Por, 1986a, Mesocletodes is considered to be a typical and primarily deep-water dwelling taxon (compare overview in George 2004 and George 2008). The total number of Mesocletodes in deep-sea samples amounts to almost 50% of all Argestidae Por, which in turn form one of the most abundant taxa of harpacticoid copepods therein. Due to the high frequency in deep-sea samples and conspicuous morphological characters, Mesocletodes is informative for chorological, faunistic and biogeographic research. The number of specimens as well as species diversity are substantial, but species are well discernible.
Mesocletodes nowadays comprises 36 species (Menzel and George 2009; Wells 2007). All allied species show characteristic morphological features that allow rapid recognition in metazoan meiofauna samples: body of cylindrical shape, A1 segment 2 with conspicuous protrusion bearing a strong seta, md gnathobase with broad grinding face, P1 exp2 without inner seta, P1 exp3 without proximal outer spine, spines of this segment with subterminal tubular extensions, P2-P4 exp1 without inner seta, P2-P4 enps at most 2-segmented, telson square in dorsal and ventral view, and furcal rami long and slender (cf. Menzel and George 2009).
The sex ratio of harpacticoid copepods in the deep sea is strongly biased towards females (e.g. Shimanaga et al. 2009; Shimanaga and Shirayama 2003; Thistle and Eckman 1990) and it is very difficult or nearly impossible to connect males and females of some species (e.g. Menzel and George 2009; Seifried and Veit-Köhler 2010; Vasconcelos et al. 2009; Willen 2006; Willen 2009; Willen and Dittmar 2009), indicating extremely poecilandric populations (Por 1986b). Concerning Argestidae, males could be connected to females for Eurycletodes Sars, 1909b, Argestes Sars, 1910, and Hypalocletodes Por, 1967. Since the establishment of Mesocletodes early in the 20th century (Sars, 1909a), this has been possible only for two species plus the herein described species. For 32 species of this genus only females are known, while exclusively males are known for two species.
Most of the species descriptions of Mesocletodes are based on few adult specimens (29 descriptions contain one to five type specimens, three descriptions are based on six to ten specimens, four descriptions are based on 11 to 16 specimens). Thus, neither intraspecific variability nor the process of ontogenetic development is reported for any species of Mesocletodes. Expeditions during the DIVA and ANDEEP campaigns yielded 54 out of 66 adults of Mesocletodes elmari sp. n. (more than 80%). The comparatively high frequency of specimens is probably explicable by the greater sampling effort in contrast to the CROZEX, NODINAUT, OASIS expeditions and sampling at the PAP as well as during previous campaigns. Repeated multicorer sampling of the same station (Martínez Rose et al. 2005) greatly enhances, for the first time, the opportunity of finding the same species again in one station or region. This implies that more specimens of one species are available, making investigations on intraspecific variability, specification of sexually dimorphic modifications and retracing of the ontogenetic development possible for the first time (cf. George 2008).
The aim of this publication is to convey an initial impression of the extent of sexually dimorphic modifications, ontogeny and intraspecific variability for the genus Mesocletodes, using Mesocletodes elmari sp. n. as an example.
Material and methods
Sediment samples were taken with a multicorer (Barnett et al. 1984) in different oceanic regions: Southeast Atlantic (DIVA-1, DIVA-2 and part of ANDEEP III), Southern Ocean (ANDEEP I and II), South Indian Ocean (CROZEX), central Pacific (NODINAUT), North Atlantic (PAP and Great Meteor Bank) (Fig. 1, Table 1). Adult Harpacticoida were extracted from all samples, whereas copepodid stages are only available from the campaigns DIVA-1, DIVA-2 and ANDEEP.
Altogether 77 specimens (56 adult females, 10 adult males, 2 CV females, 3 CV males, 5 CIV males and 1 CIII) were found. The type material of Mesocletodes elmari sp. n. consists of 7 specimens (2 females plus 1 each of the other discovered stages). The type material was deposited in the collection of the Senckenberg Forschungsinstitut und Naturmuseum Frankfurt (Germany). The remaining 70 specimens are mounted on slides and kept in the collection of the DZMB in Wilhelmshaven (Germany).
The material was mounted on separate slides using glycerol as the embedding medium. Identification at the species level and drawings were carried out using a Leica microscope DM2500 equipped with a camera lucida and interference contrast with a maximum magnification of 1600x.
The CLSM photograph of a Congo-red stained female was taken with a Leica TCS SP5 mounted in a Leica DM5000. Preparations and settings were made according to Michels and Büntzow (2010).
P6 integrated into GF (Fig. 8 B), reduced to a fused opercular plate, armed with 1 short spine on each side (see asterisk in Fig. 8 B). GF with single aperture, accompanied by 1 row of spinules on each side. FR (Fig. 3 G) long and slender, ornate, ventral spinules between setae VII and III. Approximately 13 times as long as broad (measured at base). Close to base ventrolaterally with 1 notch-like pore at external side (Fig. 3 G, C). Extremely elongated between setae VII and III. Seta I close to seta II. Seta VII triarticulate. Seta III located on dorsal side subterminally. Setae IV-VI located terminally. FR laterally with subterminal tube pore (see arrow in Fig. 3 G).
Description of adult male paratype (Allotype) (Figs 8-11)

The adult male corresponds to the adult female in all morphological characters unless deviations are mentioned below.
Morphological variability (cf. Table 1). The body length including FR is variable: for adult females between 0.57 and 1.06 mm (the majority measured 0.7 to 0.9 mm), for adult males between 0.4 and 0.7 mm, for CV females between 0.5 and 0.75 mm, for CV males between 0.5 and 0.59 mm, for CIV males between 0.4 and 0.64 mm.
The spinulation also seems to be highly variable: the row of spinules ventrally at the telson ranges from numerous, long and slender to few, short and stout. In total, 16 specimens show setular tufts in the FR: six adult females, one CV male and the five CIV males bear setular tufts close to the telson, four adult females close to seta VII. The amount of spinules in A1 segment 3 varies. Four out of 56 adult females, all adult males and copepodid stages possess a non-ornate hyaline frill. A very rare feature (in two adult females, all CIV males) is also the presence of outer setae in P2-P4 enp2 or just in P2 enp2 (one adult female). The number of eggs (2-20) is variable, too.
Allocation of Mesocletodes elmari sp. n. to Mesocletodes and its position within this genus
Allocation of M. elmari sp. n. to the taxon Mesocletodes is indisputable since all specimens show the apomorphies recognized by Menzel and George (2009): 1) second A1 segment with a strong protrusion bearing 1 strong, bipinnate seta, 2) proximal outer spine of P1 exp3 reduced, 3) spines of P1 exp3 equipped with STE and 4) blades of md gnathobase forming a strong, grinding tooth.
The phylogenetic relationships within Mesocletodes are still under discussion. However, a first approach is possible: M. elmari sp. n. is considered to belong to the "Mesocletodes inermis group" as it lacks the characteristic cuticular processes on cephalothorax and telson that are regarded to be autapomorphic to the M. abyssicola-group (Menzel and George 2009). The extreme elongation of the FR is assumed to be convergent in the new species and the M. abyssicola-group because several recently observed, but as yet undescribed species of Mesocletodes without cuticular processes on cephalothorax and telson also show elongated FR (personal observation). Future investigations, however, will have to prove the phylogenetic relevance of the elongated FR for the M. abyssicola-group.
M. elmari sp. n. shows a distinct mxl exopodal segment, and the enp is incorporated into the basis. By contrast, a distinct endopodal segment is described for the mxl of M. bodini (Soyer 1964;Soyer 1975) and M. irrasus (T. and A. Scott 1894), whereas the exp is considered to be absent. According to Huys and Boxshall (1991) and Seifried (2003), however, the distinct segments of M. elmari sp. n., M. bodini and M. irrasus are homologous to the exp of other Harpacticoida. The description for M. irrasus and M. bodini is therefore erroneous because they show an articulated exp instead of an articulated enp.
Justification of Mesocletodes elmari sp. n. as a new species
From a morphological point of view M. elmari sp. n. is similar to M. bodini and M. parabodini, as these three are the only species of Mesocletodes with elongated P1-P4 enp2. M. elmari sp. n., however, shows clear autapomorphies [plesiomorphic states in brackets] that justify it as a new species:

1) mx seta that is fused to the basis bears a conspicuously strong spinule-like pinna [seta without spinule-like pinna]

2) P2-P4 exp3 proximal outer seta lost [seta present]

3) P1-P4 enp2 extremely elongated [not elongated]

4) FR strongly elongated between setae III and VII [not elongated]

5) female body of a prickly appearance created by setules that are widened at their bases [no prickly appearance]

6) female P2-P4 enp2 proximal inner seta lost [seta present]

Character 1): The mx seta that is fused to the basis carries a conspicuously strong spinule-like pinna in M. elmari sp. n. The corresponding seta in other species of Mesocletodes is usually bipinnate, with the pinnae of equal size. The loss of all pinnae except one at the anterior side, plus the modification of this pinna towards a spinule-like appearance, is not recorded for any other species of Mesocletodes or Argestidae and is therefore regarded here as derived. This modification is thus considered to be autapomorphic to M. elmari sp. n.

Character 2): M. elmari sp. n. lacks the proximal outer seta on P2-P4 exp3. The reduction of outer pereiopodal ornamentation is considered to be derived according to the rule of oligomerization (Huys and Boxshall, 1991), but various harpacticoid taxa, including species of Mesocletodes, lack this seta convergently. The loss of the proximal outer seta on P2-P4 exp3 is thus considered to be species-specific and therefore autapomorphic to M. elmari sp. n.

Character 3): Endopodal segments of species of Mesocletodes are very short and there are never more than two of them in this genus; many species even have only one single segment. The extreme elongations in P1-P4 enp2 are unique for M. elmari sp. n. and are considered to be the result of lengthening of the distal endopodal segment. Ontogenetic stages of males do not show a suture that might indicate a fusion of the distal segment with the preceding one. Extreme elongations of P1-P4 enp2 are therefore considered here to be autapomorphic to M. elmari sp. n. A less extreme elongation of these segments, however, occurs also in M. bodini and M. parabodini.

Character 4): The FR of Mesocletodes are longer than wide, with setae IV, V and VI located terminally, whereas setae I, II, III and VII are located closer to or in the proximal part of the ramus. An extreme elongation between setae III and VII has been discussed as an apomorphy for the Mesocletodes abyssicola-group (Menzel and George, 2009). However, lacking cuticular processes on cephalothorax and/or telson, M. elmari sp. n. does not show the other two apomorphies of the Mesocletodes abyssicola-group. The extreme elongation of the FR is thus considered here to occur convergently in M. elmari sp. n. and species belonging to the M. abyssicola-group.

Character 5): Females of M. elmari sp. n. are characterized by the prickly appearance of the body somites dorsally and laterally. Such coverage is absent in other species of Mesocletodes and is therefore regarded here as derived, i.e. an autapomorphic character for M. elmari sp. n.
Character 6): Endopodal segments do not seem to be fused in M. elmari sp. n. (see character 3). The proximal inner seta on P2-P4 enp2 in males is considered to be reduced in females. The lack of the proximal inner seta on P2-P4 enp2 is therefore considered here to be autapomorphic to females of M. elmari sp. n.
Intraspecific variability in Mesocletodes elmari sp. n.
Intraspecific variability in deep-sea harpacticoids has recently been revealed to be extremely high. For instance, George (2008), Seifried and Martínez Arbizu (2008) as well as Gheerardyn and Veit-Köhler (2009) were able to show that neither setation, nor segmentation, nor total length of appendages is necessarily a reliable character for species discrimination in deep-sea Harpacticoida. Variability in Argestidae has only been recorded for the pereiopodal chaetotaxy of Argestes angolaensis George, 2008 (George 2008 and personal observations), and for the shape and number of ventral spinules on the telson in the argestid genus Eurycletodes Sars, 1909b (Menzel in press).
Although clear apomorphies were recognized for M. elmari sp. n., careful morphological examination of the 77 specimens revealed high intraspecific variability (cf. Table 1). The total length of the FR, the number and shape of spinules in various parts of the body, the ornamentation of the hyaline frill and the setation of P2-P4 enp2 are variable. Moreover, a few specimens bear setular tufts in various positions on the FR. Setular tufts on the FR near seta VII have only been recorded for M. bodini (Soyer 1975) and M. parabodini (Schriever 1983), but corresponding structures near the basis seem to be unique to M. elmari sp. n. Although setular tufts on the FR seem to be species-specific for M. bodini and M. parabodini, the importance of these cuticular structures for species discrimination or even for unraveling phylogenetic relationships remains unclear.
Sexual dimorphism in Mesocletodes
Many morphological characters of species belonging to Mesocletodes are entirely different in the two genders. Nevertheless, the identification keys for Mesocletodes are exclusively based on the morphology of females (e.g. Wells 2007), possibly due to the fact that merely two males have been described to date. With the aid of these keys, it is nearly impossible to connect a male of Mesocletodes to the corresponding female. Consequently, the number of species in any deep-sea sample is overestimated, which means that faunistic and ecological analyses at the species level are subject to a strong bias. It therefore appears urgent to quantify the sexually dimorphic modifications in Mesocletodes.
Sexual dimorphism in adults. The descriptions of Mesocletodes contain only females, with the exception of four species: exclusively the male is described for M. angolaensis and M. fladensis (the latter description is poorly detailed). Both genders are described for M. faroerensis and M. thielei. However, these two species bear a proximal outer spine in P1 exp3 and 3 inner setae on P3 exp3. Moreover, M. faroerensis bears an inner seta on P1 exp2 and 3 inner setae on P3 exp3, and the md gnathobase of M. thielei does not form a strong grinding face. Consequently, both species lack autapomorphies of Mesocletodes (cf. Menzel and George 2009). Even though the descriptions are poorly detailed and the type material of both species is not available any more, the characters in question are not to be misinterpreted. Thus, M. faroerensis and M. thielei have to be excluded from Mesocletodes. Future investigations will have to unveil their generic attribution within Argestidae. Consequently, M. elmari sp. n. is the only known species with matching males and females and therefore convenient for investigations on sexually dimorphic modifications in Mesocletodes.
Sexually dimorphic modifications in males of basal Argestidae, such as Argestes (George 2008) and Bodinia George, 2004 (George 2004), include the A1, P5, P6 and the body size, whereas males of Mesocletodes show many more affected characters. The modifications in M. elmari sp. n. males are comparable to the ones observed in M. angolaensis and numerous undescribed males from deep-sea samples (personal observation) and are therefore considered to be a good representation of male sexual dimorphism in Mesocletodes. 1) The body tapers distally and the setation, especially in P1-P4, is very rich and strongly developed in comparison to females. These morphological characters are likely adaptations that help males to stay in the bottom currents once resuspended (cf. characteristics of "typical emergers" [Thistle and Sedlacek 2004; Thistle et al. 2007]) and thus would allow them to explore the sediments for mates.
2) The gut of adult males of Mesocletodes is generally empty (personal observation), but the body is filled with several spermatophores instead of food, as is reported for several Harpacticoida (cf. Menzel and George 2009; Shimanaga et al. 2009; Wells 1965; Willen 2005). Since the gut of CIV males and CV males of M. elmari sp. n. is well filled with sediment or detritus, feeding seems to be abandoned at the last molt. It has not been investigated yet whether the gut and digestive tissue are present in adult males. However, the abandonment of feeding and the production of extremely large and numerous spermatophores might be an adaptation to the sparsely populated and oligotrophic deep-sea environments and is therefore considered to represent a derived character state. 3) Mouthparts are either absent, strongly reduced or complete, but apparently not utilized for feeding. Along with the complete reduction of mouthparts, the cephalothorax of M. angolaensis is slightly depressed in lateral view and lacks the part that encloses the mouthparts in females. Although the mouthparts of the male of M. elmari sp. n. do not differ from the female, the ventral edge of the male cephalothorax is less rounded than in the female, but less reduced than in M. angolaensis.
However, not only the empty gut or the reduction of mouthparts indicates the abandonment of feeding in adult males, but also the A1: most setae on the A1 of the adult male of M. elmari sp. n. are smooth, merely some in the grasping region of the A1 (segments 3-6) are bipinnate (Fig. 10 A). However, all setae that are smooth in the adult male are strongly pinnate in the two preceding copepodid stages (Figs. 10 B, 14 A). Thus, the loss of pinnae is regarded as another sexually dimorphic modification in adult males since the regression or poorer development of setal elements is typical of non-feeding male copepods (Boxshall and Huys 1998).
Females are generally considered to show the whole character set of a species, while the modifications in males are considered to be due to sexual dimorphism (but see George 1998; George 2006a for Ancorabolidae). It is likely, however, that adult females, too, show characters that are connected to their gender, because the CV females of M. elmari sp. n. do not show characters that are typical of adult females: prickly appearance of the body created by setules that are widened at their bases, coxa of P1 externally widened and basal inner seta arising from a prominent protrusion, strongly bent outwards and overlying the enp, P1 enp exceeding exp in length, all extremities bearing conspicuously numerous and strong spinules, and hyaline frill of body somites ornate.
Sexual dimorphisms in juveniles. Sexually dimorphic modifications expressed in the copepodid stages of M. elmari sp. n. allow sexing during ontogenetic stages, at least from CV onwards; whether sexing of CIV is possible remains only partially resolved for this species, because all discovered CIV seem to be of the same gender. A similar constraint applies to the single individual of CIII. This copepodid stage, however, is assumed not to show sexual dimorphism (e.g. Dahms 1990) and is therefore not discussed here.
Sexing of CV. The male CV and the female CV of M. elmari sp. n. are distinguishable from the adults by virtue of the overall smaller body size, the lack of the penultimate urosomite and the non-articulated P5 exp. Moreover, the female CV lacks the GF, the male CV lacks the spermatophores and shows strong differences from the adult male in the A1 (Fig. 15 B, C): only six out of nine A1 segments are articulated and several setae are lacking. The position and number of developed setae in these segments, however, resemble the adult male A1 more than the adult female A1 (compare Figs 4 A, 10 A, B, 15 A-C).
Sexing of CIV. Careful examination of the A1 and the P5 suggests sexing of the discovered CIV as males.
The five inner setae on the third segment of the CIV A1 (Figs 14 A, 15 D) are almost evenly distributed, as is the case in the CV male (Figs 10 B, 15 C). The CV female A1 (cf. Fig. 4 A) has the aes on the fourth segment, while it is on the third in the CV male (Figs 10 B, 15 C). Accordingly, if the CIV were females, a separation of the aes-bearing segment from the third segment should happen at the next molt. This does not seem plausible, however, because four setae on female segment 3 (Figs 4 A, 15 A) are close to each other in the middle of the segment, and the fifth seta inserts distally. An elongation proximally and distally of the evenly distributed four setae in CIV segment 3, plus a shortening of the distances between these setae, is not likely. However, an addition of three inner setae at the molt from CIV (Figs 14 A, 15 D) to CV (Figs 10 B, 15 C) in the distal part of this segment (see solid squares in Figs 10 B, 15 C) and maintenance of the distances between the five setae addressed above appear likely. The A1 of the CIV is therefore considered herein to show male characteristics.
The P5 endopodal lobe of the four CIV (Fig. 8 F) has one short, outer seta, one long medial seta and one inner cuticular protrusion, and is therefore in accordance with the CV male (Fig. 8 E). The setation of P5 exp, however, resembles the CV female. Nevertheless, the small depression on the proximal inner edge of the exp (see asterisk in Fig. 8 F) might indicate the emergence of a seta at the next molt, which is only present in males. It is unclear, however, whether harpacticoid CIV show sexually dimorphic modifications in P5 exp. It seems that the CIV of M. elmari sp. n. do, whereas the opposite is reported for the CIV of an undescribed species of Orthopsyllus Brady and Robertson, 1873 (Huys 1990).
P2-P4 enp2 of the discovered CIV bear one inner seta, which is in accordance with female adults and CV. The male adult and CV bear two inner setae in these segments, with the distal seta being homologous to the single seta in the adult female. However, previous studies suggest that endopodal setation is not complete in harpacticoid CIV (Dahms 1990; Dahms 1993; Huys 1990). Thus, the addition of the proximal inner seta at the molt to CV is considered to be likely.
Ontogenetic development of Mesocletodes elmari sp. n.
Although copepodid stages amount to between 30% and more than 50% of the total deep-sea harpacticoid assemblage, they are excluded from faunistic analyses because confident specific allocation is not possible for many families. For investigations on phylogeny, however, juveniles may be the key to plausible theories (e.g. Ferrari 1988; Fiers 1998; Huys and Boxshall 1991).
Many species descriptions contain short remarks on the relationships of Mesocletodes with other genera and between species within the genus. Phylogenetic investigations have been the subject of only one study to date (Menzel and George 2009), whereas ontogenetic studies on Mesocletodes are pending. However, not all copepodid stages of M. elmari sp. n. are available, and a comparison with juvenile stages of other species of Mesocletodes is impossible due to the lack of knowledge. The ontogeny of M. elmari sp. n. is therefore presented here in a rather descriptive way, but with the purpose of serving as a background for future studies.
A2, mouthparts and FR of Harpacticoida are complete with respect to segmentation and setation from CI onwards (cf. Dahms 1990; Dahms 1992; Dahms 1993). A1 and pereiopods, by contrast, develop gradually with every molt, which is also the case for the habitus: at each molt from CI to adult, one body somite is added anterior to the telson. CV thus shows seven free trunk segments, CIV shows six, and CIII shows five free trunk segments between cephalothorax and telson. Reproductive organs (GF in females and spermatophores in males) are developed at the molt to adult.
A1. The female A1 of M. elmari sp. n. is complete at least at CV, whereas the male A1, which is available from CIV onwards, undergoes extensive modifications at each molt. Segments 3 to 5 of the adult male are part of the third compound segment in CIV males; three setae (marked by solid squares in Figs 10 B, 15 C) are added to this compound segment at the molt to CV. The strongest modifications appear at the molt to adult: the third compound segment is simultaneously separated into segments 3, 4 and 5. Segment 6 of the adult male is distinct at least from CIV onwards, but the proximal seta is added at the molt to CV. Segment 7, directly preceding the geniculation, is not present prior to the molt to adult male.
The characteristic Mesocletodes seta (strong, bipinnate, arising from a conspicuous protrusion, see Menzel and George 2009) and a subterminal seta occur at CV in males (compare setae marked by asterisks in Figs 4 A, 10 A, B, 15 A-C). This is likely the case for females, too, as the second A1 segment does not show sexually dimorphic modification regarding the number and position of setae.
Although sexing of the single discovered CIII was impossible, its A1 provides valuable ontogenetic information for M. elmari sp. n. with respect to the first and the last two A1 segments. These segments, moreover, are not sexually dimorphically modified in CIV or later stages.
Segment 1 lacks a seta at least from CIII onwards (Figs 4 A, 10 A, B, 14 A, F). The presence of a seta on this segment in CI and CII, but its loss at the molt to CIII, is discussed to be the case for some harpacticoid species (cf. Boxshall and Huys 1998; Dahms, 1989). This, however, could not be followed for M. elmari sp. n. due to the lack of stages earlier than CIII. A similar constraint applies to the development of the last two segments, which are complete at least at CIII (see schematics in Fig. 15), but should also be so since CI, as is the case in many harpacticoids (cf. Dahms 1989 and references therein).

P1-P5. Copepodid development from CI to CV implies extensive changes in P2-P5 with respect to segmentation and setation at each molt. P1 exopodal setation, however, is complete from CI, endopodal setation from CII (Dahms 1993). Changes from the last copepodid stage to adults are restricted to the increase in size (e.g. Dahms 1993; Ferrari 1988). Although stages earlier than CIII have not been found, the investigations on M. elmari sp. n. are considered to provide an adequate insight into the postnaupliar development of P2-P4 in Mesocletodes, since the progress of the P4 in CIII is comparable to the P2 in CI (Dahms 1993).
Outer elements on the pereiopods of M. elmari sp. n. occur earlier during ontogeny than inner setae; exps and enps are affected alike (see P2-P4 of CIII and CIV, Figs 13 B-D, 14 C-E) (cf. Dahms 1993; Ferrari 1988; George 2001). The development of setae in M. elmari sp. n. is complete at the latest in CIII for P1 (however, it should already be complete in CI, see above), or in CIV for P2-P4, respectively. The separation of the second and third exopodal segments of P1-P4, however, occurs at the molt to CV. P1-P3 endopodal segmentation is complete at the latest in CIII of M. elmari sp. n., whereas P4 still shows a 1-segmented enp at this stage.
In CIV males the P5 endopodal lobe corresponds to the one in CV and adult, whereas the P5 exp lacks the proximal inner seta (Fig. 8 D, E, F) (see section Sexual dimorphisms in juveniles).
On the basis of adult specimens, Menzel and George (2009) recognized four apomorphies for Mesocletodes (see above). The above addressed ontogenetic development of M. elmari sp. n. shows that none of them is characteristic of adults only, but rather appear already during juvenile development.
The characteristic Mesocletodes seta on the second A1 segment is developed from CV onwards in both genders. This segment does not show sexually dimorphic modification, except that the setae of females are bipinnate, whereas males bear bare setae. All investigated stages of M. elmari sp. n. lack the proximal outer spine on P1 exp3. According to Ferrari (1988), this is caused by suppression and further indicates pedomorphosis for this character, i.e. the maintenance of juvenile characters in adults. Considering the harpacticoid pattern of leg development (Dahms 1993; Ferrari 1988), the distal part of the single P1 segment in CI or the second segment in CII-CIV is homologous to the third segment in CV and adult. These parts are fully equipped with all elements characteristic of the third segment. STEs arising from spines on P1 exp3 are only traced from CIII on for M. elmari sp. n. However, it seems likely that these extensions exist from CI, as the setae they are associated with do so. The same applies to the strong grinding tooth at the md gnathobase. This is developed at least at CIII of M. elmari sp. n., but according to Dahms (1990), for example, this should be the case from CI onwards.

[Figure legend (A1 schematics): crosshatched segments are considered to be missing or not formed; solid triangles = sexually dimorphically modified setae; solid squares = setae added at the molt to CV male; solid asterisks = characteristic Mesocletodes seta and the subterminal seta in segment 2 in CV and adults; arrow marks the geniculation.]

Distribution

Various taxa of benthic harpacticoid copepods show distribution ranges at the species level that extend over thousands of kilometers across Atlantic, Southern Ocean and Pacific abyssal plains: Ancorabolidae Sars, 1909a (George 2006b; Gheerardyn and George 2010), Argestidae (Menzel and George 2009; Menzel in press), Canthocamptidae Sars, 1906 (Mahatma 2009), Ectinosomatidae Sars, 1903 (Seifried and Martínez Arbizu 2008), Paramesochridae Lang, 1944 (Gheerardyn and Veit-Köhler 2009; Plum and George 2009).
The record of M. elmari sp. n. in the North and South Atlantic Ocean, the Southern Ocean, the Pacific Ocean and the South Indian Ocean extends the knowledge on the distribution of Mesocletodes and points to a worldwide distribution at the species level. Future studies will have to deal with the means of dispersal as well as the ecological and biological needs of species belonging to Mesocletodes to help explain the distributional patterns.
Acknowledgements

kindly provided by Francis Dov Por and Ariel Chipman from the Hebrew University of Jerusalem (Israel). Thanks go to Nechama Ben-Eliahu for her hospitality during my stay in Jerusalem. The type material of M. parabodini was kindly provided by Dirk Brandis from the Zoologisches Museum Kiel (Germany). Thanks are also due to Kai Horst George, who helped to improve this manuscript. I am very grateful for the valuable and constructive criticism of two reviewers. Thanks go to Brigitte Ebbe for proofreading the English. This study was carried out within the CeDAMar project. Financial support was obtained from the DFG (GE 1086/6-1 and GE 1086/11-1).
Clinical guideline SEOM: hepatocellular carcinoma
Hepatocellular carcinoma (HCC) represents the second leading cause of cancer-related death worldwide. Surveillance with abdominal ultrasound every 6 months should be offered to patients with a high risk of developing HCC: Child-Pugh A–B cirrhotic patients, all cirrhotic patients on the waiting list for liver transplantation, high-risk HBV chronic hepatitis patients (higher viral load, viral genotype or Asian or African ancestry) and patients with chronic hepatitis C and bridging fibrosis. Accurate diagnosis, staging and functional hepatic reserve are crucial for the optimal therapeutic approach. Characteristic findings on dynamic CT/MR of arterial hyperenhancement with “washout” in the portal venous or delayed phase are highly specific and sensitive for a diagnosis of HCC in patients with previous cirrhosis, but a confirmed histopathologic diagnosis should be done in patients without previous evidence of chronic hepatic disease. BCLC classification is the most common staging system used in Western countries. Surgical procedures, local therapies and systemic treatments should be discussed and planned for each patient by a multidisciplinary team according to the stage, performance status, liver function and comorbidities. Surgical interventions remain as the only curative procedures but both local and systemic approaches may increase survival and should be offered to patients without contraindications.
Introduction
Hepatocellular carcinoma (HCC) represents the fifth most common cancer in men and the ninth in women (7.5 and 3.4 % of all cancers, respectively). This is the second leading cause of cancer-related death worldwide, with approximately 745,500 deaths during the year 2012. The incidence varies widely according to geographic location so, while in the EU it is approximately 8.6/100,000 people, in certain regions of Asia and Africa this rate reaches up to 120/100,000 people. This is mainly related to the different level of exposure to specific risk factors. HCC frequency is 4-8 times higher in men. The median age for diagnosis is 60 in low-incidence areas. The incidence in Spain is around 17/100,000 for men and 6.5/100,000 for women. With a 9.7/100,000 mortality rate, HCC is the eighth cause of cancer-related death in Spain [1].
HCC is usually diagnosed in cirrhotic patients (60-80 %). Patients with cirrhosis due to chronic hepatitis B virus (HBV) infection have a 100 times increased risk of developing HCC, HBV thus being the main etiology in high-incidence countries. The risk of HCC in patients with cirrhosis secondary to the hepatitis C virus (HCV) is 1-2 % per year, causing most of the new cases in Europe. Both HBV co-infection and alcohol consumption increase the risk. As viral load and active viral replication are associated with a higher likelihood of developing HCC, antiviral therapies can potentially reduce the risk in patients with this kind of chronic hepatitis. Nonalcoholic fatty liver disease (NAFLD) represents an increasingly frequent underlying liver disease in patients with HCC, especially in developed countries [2]. Other causes of chronic hepatitis, such as hemochromatosis and aflatoxin exposure, are less common etiologies for HCC.
Surveillance
Surveillance should be offered to patients with a high risk of developing HCC: Child-Pugh A-B cirrhotic patients, all cirrhotic patients on the waiting list for liver transplantation, high-risk HBV chronic hepatitis patients (higher viral load, viral genotype or Asian or African ancestry) and patients with chronic hepatitis C and bridging fibrosis. Despite the increasing incidence of nonalcoholic fatty liver disease in developed countries, surveillance of these patients, although endorsed by some guidelines [3], remains, at the present time, controversial.
An abdominal ultrasound (US) every 6 months is the method of choice, as it has been shown to be superior to three- and twelve-monthly intervals [4,5]. There is no role for AFP or other oncomarkers in HCC screening [6]. There are no data to support the use of multidetector computed tomography (CT) or dynamic magnetic resonance imaging (MRI) for surveillance. Appropriate recall procedures should be in place in case a nodule is found on a screening US (new nodules that measure more than 1 cm, or nodules that enlarge over a time interval).
Diagnosis of lesions <1 cm
Pathology studies have shown that the majority of nodules smaller than 1 cm that can be detected in a cirrhotic liver are not HCCs. In these cases, a tighter follow-up with three-monthly US should be performed. If the size does not change, surveillance every 3 months should be continued; if the diameter changes, the nodule should be diagnosed according to its size. After 2 years of this tighter follow-up, if there are no changes, the 6-month surveillance interval can be resumed.
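The recall procedure just described is essentially a small decision algorithm; the following is a hedged Python sketch of it (the function name and return strings are ours, not guideline wording).

```python
def recall_action(nodule_mm: float, grew: bool, stable_months: int) -> str:
    """Follow-up action for a nodule found on surveillance US in a
    cirrhotic liver, per the recall rules summarized above."""
    if nodule_mm >= 10:
        # Lesions >=1 cm enter the diagnostic workup (dynamic CT/MR).
        return "diagnostic workup (dynamic CT/MR)"
    if grew:
        # A growing sub-centimeter nodule is diagnosed according to its size.
        return "diagnose according to current size"
    if stable_months >= 24:
        # Stable for 2 years: resume the standard 6-month surveillance.
        return "resume 6-month surveillance"
    return "tightened follow-up: repeat US in 3 months"

print(recall_action(nodule_mm=8, grew=False, stable_months=6))
```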
Diagnosis of lesions ≥1 cm
If the diameter is ≥1 cm, the characteristic findings on dynamic CT/MR of arterial hyperenhancement with "washout" in the portal venous or delayed phase are highly specific and sensitive for a diagnosis of HCC. However, these criteria should not be used in patients with no baseline hepatic disease. On the other hand, a lesion that displays these findings on contrast US may also be a cholangiocarcinoma, making this technique less suitable for the noninvasive diagnosis of HCC. It is not useful for tumor staging either.
Several studies have shown that dynamic MRI has a slightly better performance than CT for the diagnosis of HCC, although there were limitations to these studies [7]. Therefore, one should utilize the locally available expertise, whether MRI or CT. In all cases, they should be performed using standardized technical specifications.
Alpha-fetoprotein should not be used as a diagnostic test due to the possibility of elevated levels in patients with non-HCC malignancies and nonmalignant diseases.
In those who do not have these characteristic features, a directed biopsy of the mass may be needed in order to confirm a diagnosis of HCC. However, there is no indication for biopsy of a focal lesion in a cirrhotic liver when the patient is a candidate for resection, or in patients with poor performance status or multiple comorbidities.
Pathological diagnostic criteria for HCC and the differential diagnosis with dysplastic lesions have been proposed [8]. Stromal invasion or tumor cell invasion into the portal tracts or fibrous septa defines HCC and is not present in dysplastic lesions.
Staging
Both the extent of disease at diagnosis and the underlying hepatocellular injury determine the prognosis of hepatocellular carcinoma (HCC).
In addition to giving prognostic information, staging should allow guiding treatment options, defining their impact and facilitating the exchange of information in a standardized way [9].
Staging systems for HCC assign scores based on clinical parameters related to tumor characteristics, liver function and the general state of health of the affected patient (Fig. 1). In each system, the stages correlate with prognosis, and some classifications provide therapeutic guidance together with prognostic information after treatment [10,11].
There is no consensus as to which classification better predicts survival in patients with HCC. The Barcelona Clinic Liver Cancer (BCLC) staging system has been validated externally and is endorsed by the European Association for the Study of the Liver (EASL) and the American Association for the Study of Liver Disease (AASLD) [11]; it is the standard in Western countries [12] (Fig. 2).
Recommendation
The BCLC staging system has been validated externally, and it collects information on the situation of the tumor, liver functionality and the general condition of the patient; it also establishes therapeutic recommendations with prognostic information after treatment, and it is on the basis of these factors that we make our recommendation (level of evidence 2A; level of recommendation 1B level).
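Since treatment allocation throughout this guideline hinges on liver function graded by Child-Pugh class, below is a minimal, hedged Python sketch of the conventional Child-Pugh scoring; the cut-offs follow the standard published definition rather than anything specific to this guideline, and the function name is ours.

```python
def child_pugh(bilirubin_mg_dl: float, albumin_g_dl: float, inr: float,
               ascites: str, encephalopathy: str):
    """Child-Pugh score and class from the five standard parameters.
    `ascites` and `encephalopathy` take "none", "mild" or "severe"."""
    def band(value, low, high):
        # 1, 2 or 3 points depending on which band the value falls into.
        return 1 if value < low else 2 if value <= high else 3

    score = (
        band(bilirubin_mg_dl, 2.0, 3.0)                       # <2 / 2-3 / >3
        + (1 if albumin_g_dl > 3.5 else 2 if albumin_g_dl >= 2.8 else 3)
        + band(inr, 1.7, 2.3)                                 # <1.7 / 1.7-2.3 / >2.3
        + {"none": 1, "mild": 2, "severe": 3}[ascites]
        + {"none": 1, "mild": 2, "severe": 3}[encephalopathy]
    )
    cls = "A" if score <= 6 else "B" if score <= 9 else "C"   # 5-6/7-9/10-15
    return score, cls

print(child_pugh(1.5, 3.8, 1.2, "none", "none"))  # -> (5, 'A')
```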
Management of local disease: liver resection (LR) and liver transplantation (LT)
In general, LR is preferred in early-stage HCC patients who have no cirrhosis or well-preserved liver function, whereas LT is recommended for those patients with a compromised liver function.
LR should be offered to patients with solitary or limited multifocal HCC (stage BCLC-A), with no major vascular invasion or extrahepatic spread, no portal hypertension (defined as hepatic venous pressure gradient <11 mmHg or platelet count >100,000), adequate liver reserve (Child-Pugh class A and highly selected Child-Pugh class B7) and an anticipated liver remnant of at least 30-40 % in patients with cirrhosis and at least 20 % in noncirrhotic patients (evidence 2A; recommendation 1B) [13].
Anatomical resections are recommended (evidence 3A; recommendation 2C). Expected perioperative mortality rate of LR in cirrhotic patients is in the range of 2-3 %.
Adjuvant therapies after LR (e.g., sorafenib) have not been shown to improve outcome, and observation is the standard of care (evidence 1A; recommendation 1A) [14]. Portal vein embolization (PVE) resulted in an increase of 8-27 % in future liver remnant volume with a morbidity rate of 2.2 % and no mortality (evidence 3A; recommendation 2C) [15]. Another hypertrophy-inducing strategy is the associating liver partition with portal vein ligation for staged hepatectomy (ALPPS) approach. However, this procedure is associated with a morbidity rate of 68 % and a mortality rate of 12 % [16].
The first randomized controlled trial to investigate whether LR (partial hepatectomy) or transcatheter arterial chemoembolization (TACE) yields better outcomes in patients with resectable multiple HCC, conducted on 173 Asian patients, found a survival advantage for LR over TACE (41 vs. 14 months) (evidence 2A; recommendation 2C) [17].
Patients within the Milan criteria (MC) (single HCC nodule <5 cm, or up to 3 nodules <3 cm each, with no macrovascular involvement and no extrahepatic disease) could be considered for LT (from either a dead or living donor) (evidence 2A; recommendation 1A), achieving a 5-year overall survival of more than 70 % and a 5-year recurrence rate of <10 % [18]. Perioperative mortality and 1-year mortality are expected to be approximately 3 % and <10 %, respectively.
Bridge or downstaging strategies could be considered in selected cases if the waiting list for LT exceeds 6 months (evidence 2D; recommendation 2B). Nonetheless, in cases exceeding the MC, neoadjuvant treatments or "bridging therapies" to downstage tumors to within the MC for LT are not recommended (evidence 2D; recommendation 2C) [19].
Patients with tumor characteristics slightly beyond MC and without microvascular invasion may be considered for LT. However, this indication requires prospective validation (evidence 2B; recommendation 2B). In the absence of molecular markers, both tumor size and number are important factors of post-LT recurrence that should be taken into account whenever selecting HCC patients beyond MC for LT.
Management of local disease: local ablative treatment
Local ablation is considered the first-line treatment option for early-stage patients who are not suitable for liver transplantation or surgery, or a therapeutic option to prevent tumor progression while awaiting liver transplantation (evidence 2A; recommendation 1B).
These therapies are based on the injection of substances in the tumor (ethanol, acetic acid), or on changes in temperature [radiofrequency ablation (RFA), microwave, laser, cryotherapy].
The most widely used are percutaneous ethanol injection (PEI) and RFA. Other ablative techniques such as microwave and cryoablation are still under investigation [20].
Both RFA and PEI have excellent results in tumors ≤2 cm (90-100 % complete necrosis), but for bigger tumors the probability of achieving complete necrosis is greater with RFA (evidence 1A; recommendation 1C). Five randomized controlled trials and two large meta-analyses showed that RFA achieves better survival in early HCC, especially for tumors >2 cm [20,21]. Currently, RFA stands as the best ablative treatment for tumors <5 cm, but it has some limitations in cases where it is not technically feasible (tumors located close to other organs or large vessels). In these situations (10-15 %), PEI is recommended [20,22] (evidence 1D; recommendation 1A).
The recurrence rate after percutaneous treatment is as high as for surgical resection, and it may achieve 80 % at 5 years [22].
Management of locally advanced disease
The management of locally advanced disease includes transarterial chemoembolization (TACE), radioembolization and radiotherapy. These strategies can also be used in patients with early-stage HCC and with contraindications for radical therapies, and prior to liver transplants in patients who are estimated to have a long waiting time for their operation.
Transarterial chemoembolization (TACE)
Indications TACE is indicated for those patients with large or multifocal HCCs that are not amenable to resection or local ablation, with well-preserved hepatic function (i.e., Child-Pugh A or B cirrhosis), a good performance status and no vascular invasion, main portal vein thrombosis, extrahepatic disease spread, encephalopathy or biliary obstruction.
Methodology TACE consists of the injection of a chemotherapeutic agent into the hepatic artery with or without lipiodol, and with or without a procoagulant material. TACE is currently available in some centers using drug-eluting beads (DEBs) [23].
Efficacy TACE improves overall survival; 2-year survival rates of around 31-63 % have been reported in randomized trials. TACE induced a partial or complete response in 15-55 % of patients [24-26]. DEB-TACE induced similar rates of objective response and disease control compared with conventional TACE and has also been associated with improved tolerability, with a significant reduction in serious liver toxicity and a significantly lower rate of doxorubicin-related side effects.
Repeated TACE TACE should be limited to the minimum number of procedures necessary to control the tumor.
Combination therapy The potential additive effect of combined therapy (sorafenib + TACE) over TACE alone has been directly addressed in two randomized phase II trials and a single randomized phase III trial, none of which suggest a clear benefit [27,28].
Summary TACE recommendations: TACE is recommended for patients with asymptomatic large or multifocal HCC (BCLC stage B) with normal hepatic function and without vascular invasion or extrahepatic spread (evidence 2A; recommendation 1A).
Radioembolization
Radioembolization using intraarterial injection of labeled microspheres induces extensive tumor necrosis (occluding small vessels combined with the emission of radiation in the tumor bed) with an acceptable safety profile.
Indications It could be considered as an alternative to TACE for patients with advanced HCC who are candidates for TACE, but who have macrovascular invasion such as a branch or lobar portal vein thrombosis [29].
Summary Radioembolization with Y-90 spheres is an alternative to TACE in cases of macrovascular invasion, excellent liver function and the absence of extrahepatic spread (evidence 3C; recommendation 3C).
Radiotherapy

Indications 3D-CRT is a reasonable option for patients who have failed other local modalities and have no extrahepatic disease, limited tumor burden and relatively well-preserved liver function. SBRT could also be recommended for patients with relatively small HCCs who either are inoperable or refuse surgery and other local ablation techniques (evidence 3C; recommendation 3C).
Recommendations
Locoregional therapy (transarterial chemoembolization [TACE], radioembolization and radiotherapy) is the preferred treatment approach for patients who are not amenable to surgery or liver transplantation.
The choice of nonsurgical treatment modality is empiric and influenced by local expertise and institutional practice. Few trials have directly compared any of the available therapies with one another, and there is little consensus as to when one modality should be chosen over another.
Patients with disease spread outside the liver, and patients with major portal vein thrombosis should be considered for systemic therapy rather than liver-directed therapies.
Treatment of metastatic disease
The standard treatment for patients with tumors invading the portal vein, nodal involvement or distant disease, with ECOG PS 1-2 and Child-Pugh A liver function, is sorafenib (400 mg/12 h), an oral multikinase inhibitor whose clinical benefit has been demonstrated in two clinical trials. In the SHARP trial (NCT00105443) [31], 602 patients with advanced HCC were randomized to receive either sorafenib, 400 mg twice daily, or placebo. Overall survival was significantly longer in the sorafenib group (10.7 vs. 7.9 months in the placebo group; HR 0.69; 95 % CI 0.55-0.87, p < .001). A similar trial with the same design was performed in Asian countries with 226 patients [32]: the median overall survival was 6.5 months for the sorafenib group versus 4.2 months for the placebo group (HR 0.68; 95 % CI 0.50-0.93, p = .014). The most common sorafenib-related adverse events were hand-foot skin reaction and diarrhea.
The efficacy of sorafenib for patients with Child-Pugh class B or C liver function remains unclear. On the other hand, trials are also ongoing to evaluate the benefit of sorafenib combined with either TACE or chemotherapy [33].
HCC is resistant to chemotherapy. The drugs used (doxorubicin, cisplatin…) achieve response rates of 10 % with no impact on survival.
Ramucirumab is a recombinant human IgG1 monoclonal antibody against VEGFR-2 that prevents binding of the ligands VEGF-A, VEGF-C and VEGF-D. To explore new second-line options, the phase 3 REACH trial (NCT01140347) [34] randomly assigned patients with stage C disease, or stage B disease refractory or not amenable to locoregional therapy, who had previously received sorafenib, to intravenous ramucirumab (8 mg/kg) or placebo every 2 weeks. The investigators concluded that second-line treatment with ramucirumab did not significantly improve survival over placebo in this setting. However, a subgroup analysis suggested a potential benefit for patients with alpha-fetoprotein values >400 ng/ml, which is currently being assessed in a new clinical trial focusing on this specific population.
Tivantinib and other c-Met inhibitors (INC280, foretinib, MSC2156119J and golvatinib) are currently being evaluated in the second line after sorafenib [35,36]; their benefit must be elucidated in further clinical trials.
In patients with ECOG PS of 3-4 and/or poor liver function (Child-Pugh C), cancer therapy is not indicated, and only palliative care is recommended.
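Pulling the treatment sections together, the BCLC stage-to-treatment mapping of Fig. 2 can be sketched as a simple lookup; this is an illustrative simplification (function name ours), not a substitute for multidisciplinary assessment of performance status, liver function and comorbidities.

```python
def bclc_first_line(stage: str) -> str:
    """Very simplified first-line mapping from BCLC stage to the
    treatment families discussed in this guideline (cf. Fig. 2)."""
    options = {
        "0": "resection / ablation",
        "A": "resection, liver transplantation or ablation",
        "B": "TACE",
        "C": "sorafenib",
        "D": "best supportive care",
    }
    return options[stage.upper()]

print(bclc_first_line("b"))  # -> TACE
```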
Monitoring
There is no evidence to guide the optimal post-treatment surveillance strategy in patients undergoing locoregional therapy for HCC. Recommendations are based on the consensus that earlier identification of disease recurrence may facilitate patient eligibility for investigational studies, or other forms of treatment.
Patients who undergo a complete resection are at risk of disease recurrence and second primary HCCs. Most patients who experience recurrence after resection have recurrent disease confined to the liver. The main goal of post-treatment surveillance is early identification of disease that might be amenable to subsequent local therapy. Determination of AFP, if it was initially elevated, is recommended every 3 months for 2 years and then every 6 months [37], together with imaging (CT or MRI) every 3-6 months for 2 years and then every 6-12 months.
Re-evaluation according to the initial workup should be considered in the event of disease recurrence.
Case report: Acute HHV6B encephalitis/myelitis post CAR-T cell therapy in patients with relapsed/refractory aggressive B-cell lymphoma
Background The development of chimeric antigen receptor (CAR)-T cell therapy has revolutionized treatment outcomes in patients with lymphoid malignancies. However, several studies have reported a relatively high rate of infection in adult patients following CD19-targeting CAR T-cell therapy, particularly in the first 28 days. Notably, acute human herpesvirus 6B (HHV6B) reactivation occurs in up to two-thirds of allogeneic hematopoietic stem cell transplantation patients. Case presentations Herein, we report HHV6B encephalitis/myelitis in three patients with relapsed/refractory diffuse large B-cell lymphoma after CAR T-cell therapy. All three patients had received multiple lines of prior treatment (range: 2-9 lines). All patients presented with fever that persisted for at least 2 weeks after CAR-T cell infusion (CTI). Both the onset time and duration were similar to those of cytokine release syndrome (CRS); nevertheless, the CRS grades of the patients were low (grade 1 or 2). Delirium and memory loss after CTI were the earliest notable mental presentations. Neurological manifestations progressed rapidly, with patients experiencing varying degrees of impaired consciousness, seizures and coma. Back pain, lumbago, lower limb weakness and uroschesis were also observed in Patient 3, indicating myelitis. High HHV6B loads were detected in all cerebrospinal fluid (CSF) samples using metagenomic next-generation sequencing (mNGS). Only the one patient who received high-activity antivirals and intravenous IgG pulse treatment finally recovered, whereas the other two patients died of HHV6B encephalitis. Conclusion Considering its fatal potential, HHV6B encephalitis/myelitis should be diagnosed urgently after CAR-T cell-based therapy. Furthermore, hematologists should differentiate these conditions from CRS or other immunotherapy-related neurotoxicities as early as possible. The results of this study demonstrate the potential of mNGS for the early diagnosis of HHV6B infection, particularly when the organism is difficult to culture.
Introduction
The introduction of chimeric antigen receptor (CAR)-T cell therapy has rapidly transformed the treatment landscape for lymphoid malignancies. In patients with relapsed or refractory disease who previously had limited treatment options, CAR-T cell therapy has shown impressive responses, including complete responses (CRs) in approximately 80% of patients with acute lymphoblastic leukemia (ALL) and 40-60% of those with aggressive lymphomas (1-4). The results of our serial clinical trials indicate that sequential infusion of CAR19/22 T cells was safe and efficacious in treating patients with R/R B-cell lymphomas (5, 6).
Importantly, several studies have reported that, because of extensive prior antitumor therapies, the use of lymphodepleting chemotherapy, and severe therapy-associated toxicities such as cytokine release syndrome (CRS) and B-cell aplasia, adult patients receiving CD19-targeting CAR-T cell therapy exhibit a high rate of infection, particularly in the first 28 days (7-9). Most early infections are bacterial, occurring in approximately 16.5% of patients, whereas respiratory viral infections predominate at later time points (7). Reactivations of Epstein-Barr virus (EBV) and cytomegalovirus (CMV) are the most frequent (7). To date, however, only two cases of human herpesvirus 6 B (HHV6B) encephalitis (10) and one case of HHV6B myelitis after CAR T-cell therapy (11) have been reported.
HHV6 is a member of the β-herpesvirus subfamily and comprises two species, HHV6A and HHV6B (12). HHV6 infects approximately 90% of the population by 2 years of age, manifesting as roseola infantum, and then remains dormant in the host with possible reactivation during immunosuppression (12). HHV6 reactivates in up to two-thirds of allogeneic hematopoietic stem cell transplant (HSCT) patients, typically within 3 weeks post-transplantation, and encephalitis develops in only a small proportion of patients experiencing HHV6 reactivation (13).
Human herpesvirus 6 (HHV-6), particularly the HHV-6B strain, is a major cause of encephalitis and other complications following allogeneic hematopoietic stem cell transplantation (HSCT) (14). Clinically, HHV-6 encephalitis typically presents 2-6 weeks after HSCT with symptoms of confusion, memory loss, seizures, and insomnia. Diagnosis involves the detection of HHV-6B DNA in the cerebrospinal fluid, where mild protein elevation and lymphocytic pleocytosis are also noted (see below for notes on HHV6 viral load detection in CSF as it relates to encephalitis). Brain magnetic resonance imaging (MRI) may initially appear normal but may later show hyperintense lesions and abnormalities in the limbic region, particularly in the medial temporal lobes, indicating inflammation. Diagnosis is based on patient history, PCR for HHV-6 DNA, and other tests, with chromosomally integrated HHV-6 (CIHHV-6) complicating the diagnosis due to the presence of inherited HHV-6 DNA (15). Risk factors include HHV-6 reactivation and poor T-cell function, with prognosis ranging from complete recovery to persistent neurological problems or death (16). Treatment includes antiviral drugs such as ganciclovir and foscarnet, and new strategies such as adoptive immunotherapy; treatment is usually given for 3 weeks or until HHV-6 DNA is cleared. Prevention strategies are not well established, and HHV-6B is also associated with other post-HSCT conditions such as myelosuppression, although evidence is limited (15). Overall, HHV-6B is a significant cause of infectious encephalitis after HSCT, and improved diagnosis and treatment are needed, although there are few reports of HHV-6B encephalitis following CAR-T treatment (10, 11).
Herein, we report three cases of relapsed/refractory (R/R) diffuse large B-cell lymphoma with early HHV6 reactivation involving the central nervous system after CD19/CD22 CAR-T cell cocktail therapy at our institution between 2019 and 2020.
Case report
As shown in Table 1 and Figure 1, Patient 1 was a 47-year-old male who experienced relapse after six lines of therapy. The patient received CD19/CD22 CAR-T cell cocktail therapy following autologous stem cell transplantation (ASCT). Prophylactic antiviral therapy with ganciclovir was administered 2 weeks before stem cell infusion. Treatment was complicated by grade 2 CRS and immune effector cell-associated neurotoxicity syndrome (ICANS). Engraftment was delayed until death on the 30th day after CTI, even though he was continuously administered the low-activity antiviral acyclovir (0.4 g po bid) after transplantation (13). Given the clinical course and subsequent evidence, it remains uncertain whether the death was due to ICANS or possible HHV6 encephalitis.
Patient 2 was a 31-year-old female who experienced relapse after nine lines of therapy (Table 1; Figure 1), including ASCT (July 2018) and autologous CD22/CD19 CAR-T cell cocktail therapy (August 2019). She received salvage allogeneic CD22/19 CTI in the 3rd month after autologous CAR-T cell therapy. No prophylactic antiviral therapy was administered before or after allogeneic CD22/19 CTI. The CTI was complicated by grade 2 CRS and ICANS. Intravenous IgG pulse therapy (0.4 g/kg for 3 days) in the late course failed to improve the outcome (Figure 2A). Similar to Patient 1, the exact cause of death, whether ICANS or HHV6 encephalitis, remains unclear due to overlapping clinical presentations.
Patient 3 was a 50-year-old female who experienced relapse after two lines of therapy (Table 1; Figure 1). She was enrolled in a study of sequential infusion of CD22 and CD19 CAR-T cells following ASCT. She regularly received prophylactic antiviral therapy with intravenous ganciclovir for 2 weeks before autologous stem cell infusion. Subsequent therapy was complicated by grade 2 CRS and grade 4 ICANS. However, the patient's clinical course also raised significant concerns about HHV6 infection. The patient required high-activity antiviral therapy (13), combining ganciclovir (5 mg/kg, iv, q12h) and foscarnet (60 mg/kg, iv, q12h) until clearance of HHV6B DNA in the blood, together with IgG infusions for 7 days at acute onset (Figure 2A). Finally, the patient achieved CR after 7 months of follow-up, without HHV-6 reactivation.
Some typical and notable clinical details of these three patients can be summarized as follows: All three patients presented with a fever lasting at least 2 weeks that was resistant to low-activity antiviral and other antimicrobial regimens (Figure 2A) within 1 week after CTI. All patients developed low-grade CRS (grade 1/2). Delirium and memory loss that appeared 2 weeks after CTI were the earliest mental presentations. Neurological manifestations progressed rapidly, with patients experiencing varying degrees of impaired consciousness, seizures, and coma. Back pain, lumbago, lower limb weakness, and uroschesis were observed in Patient 3; these symptoms may be consistent with myelitis, though they are not specific to this condition alone. Brain CT scans were performed in Patient 1 and Patient 3 when they presented with the aforementioned manifestations, and no obvious abnormalities were observed. CSF specimens were collected for pathogen detection (17). Adverse events were graded according to the Common Terminology Criteria for Adverse Events version 5.0 (CTCAE v5.0), as published by the U.S. Department of Health and Human Services, National Institutes of Health, National Cancer Institute. Neutrophil engraftment was defined as an absolute neutrophil count (ANC) > 500 cells/μL without medication support for 5 consecutive days. Platelet engraftment was defined as a platelet count > 20 × 10⁹/L without transfusion for 7 consecutive days. Lymphopenia was defined as an absolute lymphocyte count (ALC) < 300 cells/μL. The clinical characteristics that influenced grading for each patient were as follows: Patient 1 presented with altered mental status, incoherent speech, personality changes, involuntary limb tremors, and shallow coma. Patient 2 presented with initial loss of long-term memory and lucidity, followed by disorientation, dissociation, apathy, inability to recognize relatives, anterograde amnesia, and status epilepticus. Patient 3 presented with back pain and urinary retention, possibly indicating spinal cord involvement, and later developed altered consciousness, sudden seizures, and status epilepticus.
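The engraftment and lymphopenia definitions above are threshold-and-run rules. As a minimal illustration (not from the source; the daily value series and units are assumed as in the text), they could be applied to laboratory data as follows:

```python
def consecutive_days(values, predicate, days_needed):
    """True once `predicate` holds for `days_needed` consecutive daily values."""
    run = 0
    for v in values:
        run = run + 1 if predicate(v) else 0
        if run >= days_needed:
            return True
    return False

def neutrophil_engrafted(anc_per_ul):
    # ANC > 500 cells/uL for 5 consecutive days (assumed off medication support).
    return consecutive_days(anc_per_ul, lambda v: v > 500, 5)

def platelet_engrafted(plt_e9_per_l):
    # Platelets > 20 x 10^9/L for 7 consecutive days (assumed without transfusion).
    return consecutive_days(plt_e9_per_l, lambda v: v > 20, 7)

def lymphopenic(alc_per_ul):
    # ALC < 300 cells/uL on a given day.
    return alc_per_ul < 300
```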
High loads of HHV6B were detected using mNGS in all CSF samples collected when mental dysfunction occurred (Figure 2B). The identified sequence read numbers corresponding to HHV6B were 9,605, 41,626, and 126,904, with genomic coverages of 72.55%, 83.32%, and 87.5%, respectively. The most recent developments in molecular techniques, namely mNGS, have the potential to provide novel opportunities for the accurate investigation of infections. The viral loads were then confirmed using droplet digital polymerase chain reaction (ddPCR), a highly accurate and generally accepted method (18, 19). The sequences of the primers and probes used in ddPCR are listed in Supplementary Table S1. Patient 1 had an HHV-6 copy number ratio of 251.78 in blood and 503.85 in CSF. Patient 3 had 1,172.41 in blood and 4,694.16 in CSF. Patient 2, tested in CSF only, had a copy number ratio of 1,714.69 copies/μg cell-free DNA (cfDNA). Based on HHV-6B DNA in the CSF coinciding with acute-onset altered mental status, short-term memory loss, confusion, and seizures, the diagnosis of HHV-6 encephalitis/myelitis was confirmed (15). As shown in Figure 2C, serum cytokine levels were also analyzed for correlation with those in the CSF, and a positive correlation was observed. Moreover, the concentrations of IL-6 and IL-8 were significantly higher in the CSF than in the serum, with the degree of IL-8 enrichment being particularly pronounced. Consistent with a previous study on HHV6B, CSF cell counts were often unremarkable (15), whereas glucose, lactate, and protein levels were markedly elevated relative to the lower reference values (Figure 2D; the actual values and reference ranges for each CSF component are given in Supplementary Table S2). Notably, we also observed concomitant severe hyponatremia (Na+ < 130 mmol/L) and/or hypernatremia (Na+ > 150 mmol/L), suggesting abnormal central nervous system regulation of blood sodium.
Discussion
The increasing use of CAR T-cell therapy for the treatment of malignancies has led to improvements in the survival of patients with R/R aggressive B-cell lymphoma. However, this therapy also poses risks of infection, particularly during the early period after CAR-T cell therapy. In our series, we identified three patients who developed acute HHV6B reactivation with encephalitis/myelitis after receiving CAR-T cell therapy, two of whom developed fatal encephalitis.
Therefore, it is crucial to urgently recognize HHV-6B encephalitis/myelitis following CAR T-cell therapy due to its potential for fatality. Hematologists should recognize and distinguish it from overlapping CRS or other immunotherapy-related neurotoxicities as early as possible. However, similar clinical symptoms and onset times may present a crucial challenge in accurately diagnosing CNS dysfunction in these patients. A case report by Rebechi et al. also demonstrates a significant overlap in the clinical signs and symptoms associated with CAR-T-associated neurotoxicity and HHV-6 encephalitis (10). Handley et al. presented a scenario where the attribution of neurological symptoms and signs to ICANS could potentially lead to CNS infections being overlooked (11). The traditional diagnostic approach is particularly challenging for patients who receive CAR-T cell therapy owing to the overlapping clinical manifestations of infectious and noninfectious causes. HHV6B-associated encephalitis/myelitis may also be misattributed to CAR-T therapy-related neurotoxicity due to a lack of knowledge. ICANS as a clinical entity has no diagnostic gold standard (17). While ICANS does have some distinctive clinical features, such as apraxia or aphasia out of proportion with overall mental status, there is no definitive rule-in test to establish a diagnosis. Biomarkers such as electroencephalography (EEG) and MRI are often normal or nonspecific in ICANS, although, when positive, they can support a diagnosis. Transcranial Doppler sonography (TCD) can be a helpful biomarker in this condition based on limited data. Seizure is very rare in ICANS, and while systemic inflammatory marker elevation in the form of CRS can occur preceding or concurrently with ICANS, ICANS can also occur in isolation without inflammatory markers. Importantly, timing is a critical factor in considering the pre-test probability of ICANS. The late onset of neurotoxic symptoms in the patients here, with lumbar puncture (LP) performed 17-27 days after transplantation, was the primary clinical factor driving diagnostics for a secondary cause of altered mental status (20, 21).
Given the limitations of traditional diagnostic methods, Rebechi et al. skillfully employed PCR technology to detect HHV6B in CSF and peripheral blood. However, they identified certain shortcomings in this approach. While HHV6B activation could be detected in the peripheral blood, it did not necessarily indicate progression to encephalitis. Additionally, they explored the potential of MRI for diagnosing HHV6B encephalitis; nevertheless, the typical radiological abnormalities associated with HHV6B encephalitis were confirmed in only one case (10). Handley et al. reported the case of a patient with refractory DLBCL treated with CAR-T therapy. The patient developed CRS and ICANS on day 5 post-CTI, followed by acute bilateral lower limb weakness progressing to paralysis on day 10. Upper limb reflexes were normal but with increased tone. Based on the MRI findings, a diagnosis of ICANS-related myelitis was considered. Treatment with corticosteroids did not improve the condition, and a lumbar puncture on day 14 showed elevated protein levels. PCR tests for CMV, HSV, VZV, and enterovirus on CSF were all negative. On day 16, the HHV-6 PCR on CSF was positive, and treatment with foscarnet was started. However, the patient's condition continued to deteriorate and eventually led to death. Conventional PCR testing is limited by its inability to test for multiple viruses simultaneously and its relatively slow turnaround time, which is a challenge in urgent clinical scenarios (11). Therefore, in this case report, we propose mNGS as a novel molecular technique that offers new possibilities for the early diagnosis of HHV6B. This method is not only accurate and rapid but also provides new opportunities for investigating infections, especially in cases where the pathogen is difficult to culture, showcasing unique advantages and application prospects. It should be noted that mNGS typically offers a shorter turnaround time (within 48 h) but incurs higher costs compared with dedicated PCR; these factors, as well as the cost-benefit analysis, should be weighed when choosing a diagnostic strategy. To date, prophylactic or preemptive anti-HHV-6 therapy is not recommended for preventing HHV6B reactivation or encephalitis after HSCT (27-29), based on the following considerations: First, antiviral drug selection should consider side effects, including the nephrotoxic (foscarnet) and myelosuppressive (ganciclovir) properties of the available agents. There is moderate evidence of a causal relationship between HHV6B and myelosuppression and allograft failure (30-32); once such a side effect occurs, it can lead to a vicious cycle. Second, previous data have demonstrated that preemptive ganciclovir or foscarnet did not significantly reduce the risk of HHV-6B encephalitis (33, 34), although the incidence and titer of HHV-6B DNA in the plasma were dramatically lower in patients receiving prophylactic antivirals (28, 29). In transplant recipients with clinical presentations of encephalitis/myelitis without other evident causes, empirical treatment for HHV6B should be considered (35). This is in accordance with the outcome of Patient 3 in our study. For HHV6B encephalitis/myelitis, at least 3 weeks of antiviral therapy should be administered until clearance of HHV-6 DNA from the blood and, if feasible, the CSF (35).
Our report aims to remind clinicians of two critical points: First, do not overlook the possibility of fatal HHV-6 CNS infection following CAR-T therapy, especially in the presence of delirium or sleep deprivation. Second, mNGS for early and rapid pathogen screening, combined with ddPCR for confirmation and clinical correlation, is critical for early diagnosis and intervention, potentially saving lives. This is exemplified in our cases, where Patients 1 and 2 had worse outcomes due to delayed detection, while Patient 3, benefiting from our accumulated experience, had a better prognosis with early detection and treatment.
Although our study has important implications, it has some limitations. Notably, the small sample size did not allow us to analyze risk factors or optimal therapeutic strategies. Future studies are needed to elucidate the potential risk factors for HHV-6 reactivation among the complications of CAR-T cell-based therapy, especially those involving the central nervous system, and the development of effective and safe strategies to mitigate HHV-6 reactivation warrants continued attention.
Statement
The Common Terminology Criteria for Adverse Events, version 5.0 (CTCAE v5.0), created by the National Cancer Institute, is a versatile tool in clinical research, providing a comprehensive framework for grading a wide range of adverse events, including neurological conditions such as encephalitis. In contrast, the ASTCT (American Society for Transplantation and Cellular Therapy) criteria, while highly specialized, are specifically tailored for CRS and neurotoxicity in immune effector cell therapies, offering a more focused approach compared with the broader applicability of the CTCAE. Because the initial diagnosis of suspected ICANS was revised to encephalitis after mNGS evaluation, we graded the neurological symptoms using CTCAE v5.0 instead of the ASTCT criteria, ensuring a more appropriate and comprehensive grading system for the patients' diagnosed condition.
FIGURE 1
FIGURE 1 Timeline from the diagnosis of lymphoma, through the course of treatment, to the diagnosis of HHV6B encephalitis via lumbar puncture with mNGS. ASCT, autologous stem cell transplantation; auto, autologous; allo, allogeneic; LP, lumbar puncture; mos., months; d, day(s).
TABLE 1
Clinical characteristics and outcomes.
An innovative lab-scale production for a novel therapeutic DNA vaccine candidate against rheumatoid arthritis
Background Recent therapeutic-plasmid DNA vaccine strategies for rheumatoid arthritis (RA) have significantly improved. Our pcDNA-CCOL2A1 vaccine is the most prominent and the first antigen-specific tolerising DNA vaccine, with potent therapeutic and prophylactic effects compared with methotrexate (MTX), the current "gold standard" treatment for collagen-induced arthritis (CIA). This study developed a highly efficient, cost-effective, and easy-to-operate system for the lab-scale production of endotoxin-free supercoiled plasmids with high quality and high yield. Based on an optimised fermentation culture, we obtained a high yield of the pcDNA-CCOL2A1 vaccine by PEG/MgCl2 precipitation and Triton X-114 treatment. We then established a method for quality control of the pcDNA-CCOL2A1 vaccine. Collagen-induced arthritis (CIA) model rats were subjected to intramuscular injection of the pcDNA-CCOL2A1 vaccine (300 μg/kg) to test its biological activity. Results An average yield of 11.81 ± 1.03 mg purified supercoiled plasmid was obtained from 1 L of fermentation broth at 670.6 ± 57.42 mg/L, which was significantly higher than that obtained using anion exchange column chromatography and a commercial purification kit. Our supercoiled plasmid had high purity, biological activity, and yield, conforming to the international guidelines for DNA vaccines. Conclusion The proposed innovative downstream process for the pcDNA-CCOL2A1 vaccine can not only provide large-scale, high-quality supercoiled plasmid DNA for preclinical research but also facilitate further pilot-scale and even industrial-scale production of the pcDNA-CCOL2A1 vaccine.
Background
In recent years, substantial progress has been made in the development of tolerising DNA vaccines as a novel strategy for the treatment of rheumatoid arthritis (RA) [1-4]. Among these major advances, our therapeutic pcDNA-CCOL2A1 vaccine is the most prominent because it is the first antigen-specific tolerising DNA vaccine encoding chicken type II collagen with a 4837 bp full-length cDNA [5]. Notably, a series of recent studies have demonstrated that the pcDNA-CCOL2A1 vaccine has potent therapeutic and prophylactic effects compared with those of methotrexate (MTX), the current "gold standard" treatment for collagen-induced arthritis (CIA) in rats [6,7]. Furthermore, intramuscular vaccination with the pcDNA-CCOL2A1 vaccine can induce better specific humoral and cellular immune responses against CIA than subcutaneous or intravenous vaccination. A single subcutaneous or intramuscular injection of the pcDNA-CCOL2A1 vaccine can maintain the curative effect for over a month, greatly improving drug compliance [8]. In addition, the pcDNA-CCOL2A1 vaccine was confirmed to be safe, non-immunogenic, and well-tolerated, with no detectable adverse clinical events [9]. More importantly, no exogenous CCOL2A1 gene was integrated into the host genome after inoculation [10]. These results strongly indicate the high druggability of the pcDNA-CCOL2A1 vaccine, which inspired us to further develop efficient downstream process technologies for the DNA vaccine for preclinical research and clinical applications.
Recently, an increasing number of DNA vaccines and gene therapies have entered the preclinical research and clinical application stages, respectively [11,12]. However, for DNA vaccines or gene therapies to be formally approved for preclinical research and clinical applications, obtaining sufficient high-purity and high-quality supercoiled plasmid DNA is essential. Hence, the US Food and Drug Administration (FDA, USA), the European Medicines Evaluation Agency (EMEA, Europe), and the National Medical Products Administration (NMPA, China) have issued regulatory documents related to the preparation of pharmaceutical-grade plasmid DNA. These documents describe the entire production process, including the selection of cell lines, raw materials, purification, identification, and final production and marketing [13-15]. In addition, to ensure safety, residual linear and denatured plasmids, genomes, endotoxins, and other impurities must be removed to the maximum extent from the purified final products of supercoiled plasmid DNA. Thus, establishing simple, efficient, economical, easily controlled, high-quality, widely applicable, and easy-to-scale separation and purification methods for sufficient supercoiled plasmid DNA has become one of the biggest challenges in this field today, although classical, commonly used, commercially available purification kits can meet the needs of small-scale academic research in the laboratory.
Moreover, owing to differences in the target genes, expression vectors, and host bacteria of each genetic engineering product, their downstream process technologies also differ. No mature, standardised, and universal downstream process technologies have been developed for therapeutic DNA vaccines. This has become a major obstacle restricting the clinical application of DNA vaccines [16]. In this study, we attempted to develop and optimise a high-efficiency, cost-effective, easy-to-operate system for lab-scale separation and purification of the pcDNA-CCOL2A1 vaccine, which will not only provide a large-scale yield of high-quality vaccine products for preclinical research but also facilitate further pilot-scale and industrial-scale production of the pcDNA-CCOL2A1 vaccine.
Fermentation
The fermentation processes were carried out according to our optimised conditions, as previously described [18]. The final product, pcDNA-CCOL2A1, was quantified using a Synergy HT Multi-Mode microplate reader (BioTek Instruments, Inc., Winooski, VT, USA).
Alkaline lysis
The bacterial pellet obtained from a 100-mL flask of fermentation broth was suspended in 10 mL suspension buffer (25 mM Tris/HCl, 10 mM EDTA, 50 mM glucose, pH 8.0). The bacterial suspension was then mixed with 20 mL lysis solution (0.2 N NaOH, 1.0% SDS) and incubated at 25 °C for 7 min, as described by Sambrook and Russell [19]. The resulting lysate was neutralised with 20 mL neutralisation buffer (3 M KAc, pH 5.5) and incubated on ice for 10 min. The turbid lysate was centrifuged at 4 °C for 30 min at 12,000 × g. Cellular debris, genomic DNA, and most of the host proteins were removed from the bacterial lysis fluid via centrifugation, and the supernatant was immediately transferred to a fresh vessel for further extraction and purification [20].
Plasmid DNA purification
LiCl is widely used to precipitate high-molecular-weight RNA and proteins [19]. An equal volume of precooled LiCl (4 M) was added to the cleared lysates collected from the alkaline lysis process. After removal of the precipitate, the supernatant was precipitated using isopropyl alcohol, and the pellet was washed with 70% ethanol to remove residual LiCl. The RNase [recombinant RNase, Sangon Biotech (Shanghai) Co., Ltd., China] and protein fragments produced during cell lysis were removed by performing phenol extraction twice. The standard ethanol precipitation method was subsequently used to remove the organic reagents introduced during extraction. Plasmid DNA was then precipitated using polyethylene glycol (PEG)/MgCl2 (20% PEG-8000, 15 mM MgCl2), leaving small DNA and RNA fragments in the supernatant (Fig. 1).
Endotoxin removal
Triton X-114 phase separation is a simple and cost-effective strategy for eliminating endotoxins from plasmid DNA [21,22]. Repeated validation experiments showed that the endotoxin residues met the requirements set forth by the NMPA, EMEA, and FDA (≤ 10 EU/mg as per the NMPA and EMEA; ≤ 40 EU/mg as per the US FDA) [13-15] (Fig. 2).
Anion exchange chromatography (AEC)
Before AEC, the endotoxins in the plasmid were removed using Triton X-114. The endotoxin-free bacterial lysate was processed using a Qiagen Anion-Exchange Resin packed in a chromatography column to a final bed volume of 50 mL and equilibrated with equilibration buffer (750 mM NaCl; 50 mM MOPS, pH 7.0; 15% isopropanol; 0.15% Triton X-100) at a flow rate of 15-20 mL/min. The plasmid samples were then loaded at a flow rate of 15-20 mL/min. The chromatography column was washed with wash buffer (40 mL; 1.0 M NaCl; 50 mM MOPS, pH 7.0; 15% isopropanol) at a flow rate of 20-30 mL/min, followed by elution of the plasmid DNA with elution buffer (18 mL; 1.6 M NaCl; 50 mM MOPS, pH 7.0; 15% isopropanol) at a flow rate of 8-10 mL/min. The eluted plasmid DNA solution was immediately transferred to a new vessel and mixed with isopropanol (0.7 volumes). The resulting DNA pellet was washed with endotoxin-free 70% ethanol, air-dried for approximately 20 min, and re-dissolved in endotoxin-free water (2 mL).
Purification using a commercial purification kit
An Endo-Free Plasmid Mega kit (Qiagen, Valencia, CA, USA), designed for the purification of endotoxin-free plasmid DNA, was used as the control. Purification was conducted following the manufacturer's plasmid purification protocols.
Concentration and purity analysis
The final plasmid DNA was quantified using a multi-mode microplate reader at a wavelength of 260 nm. The total DNA concentration was calculated using the formula: A260 × dilution factor × 50 μg/mL.
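As a quick illustration of the stated formula, a small Python helper follows; the example absorbance and dilution values are hypothetical, not measurements from the paper.

```python
def dsdna_ug_per_ml(a260: float, dilution_factor: float = 1.0) -> float:
    """Total dsDNA concentration: A260 x dilution factor x 50 ug/mL."""
    return a260 * dilution_factor * 50.0

# e.g. A260 = 0.671 on a 20-fold dilution -> 671.0 ug/mL (~671 mg/L)
print(dsdna_ug_per_ml(0.671, 20))
```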
Endotoxin analysis
Endotoxin contamination was assessed using a Limulus Amebocyte Lysate (LAL) Assay kit (Xiamen Bio-endo Technology, Co., Ltd., China) according to the manufacturer's instructions. The A405 of the samples was detected using the multi-mode microplate reader. The detection level of this assay kit was 0.1 EU/mL.
Detection of E. coli proteins
The concentrations of E. coli proteins in the final plasmid product were determined using a Micro E. coli DH5α Protein Assay Reagent kit (WENZHOU Kemiao Biotechnology, Co., Ltd., China), according to the manufacturer's protocol. The absorbance of the reaction mixture was measured at 450 nm using the multi-mode microplate reader. The detection range of this assay kit was 0.8-24 ng/L.
Detection of E. coli genomic DNA
E. coli genomic DNA was detected using quantitative real-time polymerase chain reaction (qPCR) under the following parameters: each 20-μL reaction contained 10 μL of 2× T5 Fast qPCR Mix (SYBR Green I), 1 μL of 10 μM forward/reverse primers, and 1 μL gDNA template. Thermal cycling was performed using an FQD-96A Sequence Detection System (BIOER Technology, Hangzhou, China) with the following thermocycling parameters: initial denaturation at 95 °C for 5 min, then 40 cycles of 95 °C for 15 s, 56 °C for 15 s, and 72 °C for 20 s, with single-point fluorescence detection. The sequences of the E. coli 16S rRNA primers used in this study are as follows: forward, 5′-CCG TTT CTC ACC GAT GAA CA-3′; reverse, 5′-GCT GTC GAT GAC AGG TTG TT-3′.
Detection of residual RNA
Residual RNA was detected using horizontal electrophoresis on 0.8% agarose gels (5 V/cm, 25 min). The gels were analysed using ImageJ (Laboratory for Optical and Computational Instrumentation, LOCI, University of Wisconsin).
Supercoiled DNA analysis
Superhelical structure evaluation of the plasmid DNA was performed using high-performance liquid chromatography (HPLC; Alliance 2795; Waters, USA) on an Ultimate AQ-C18 column (Waters, USA), with mobile phase A as 0.1 M TEAA (pH 7.0) and mobile phase B as acetonitrile. The column was first equilibrated with 2 CV of phase A. Following sample injection (300 μL), linear gradient elution was performed over 25 min, with the ratio of phase A to phase B changing linearly from 100%:0% to 2%:98%.
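Because the gradient is linear, the mobile-phase composition at any time point follows directly by interpolation; below is a minimal sketch (illustrative only) assuming the 25-min ramp described above.

```python
def percent_b(t_min: float, ramp_min: float = 25.0,
              b_start: float = 0.0, b_end: float = 98.0) -> float:
    """Percentage of mobile phase B at time t during the linear gradient."""
    t = min(max(t_min, 0.0), ramp_min)  # clamp to the ramp duration
    return b_start + (b_end - b_start) * t / ramp_min

print(percent_b(12.5))  # midpoint of the ramp -> 49.0% B
```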
Sequencing analysis
The final plasmid DNA purified using the three purification processes was subjected to sequencing and restriction digestion analyses. Sequencing was conducted, and the results were compared with the reported CCOL2A1 sequence using DNAMAN v5.2.2 software (Lynnon Biosoft Corp., USA). The purified plasmid DNA was linearised with EcoRI and HindIII (TaKaRa Bio, Inc., Shiga, Japan).
Detection of biological activity
Six-week-old inbred female Wistar rats (Animal Breeding Centre of the Academy of Military Medical Sciences, Beijing, China) were randomly divided into six groups (10 animals per group). The inoculation groups were injected intramuscularly with the pcDNA-CCOL2A1 vaccine (300 μg/kg) in the left hind limb. The positive control group received an intramuscular injection of MTX (0.75 mg/kg) once a week for 4 weeks, while the negative control group received a single intramuscular injection of normal saline (NS) [7]. Fourteen days after vaccination with the DNA plasmid, a CIA model was induced and evaluated as described previously [7,8]. All animal experiments were carried out in accordance with the National Research Council's Guide for the Care and Use of Laboratory Animals.
Statistical analysis
For descriptive analyses, data are presented as means ± SD. Analysis of variance was performed to evaluate the significance level between the experimental and control groups using SPSS 13.0 software. If the test of homogeneity of variance showed homogeneity of variance between groups (P > 0.05), the least significant difference test was used for multiple comparisons between groups; otherwise, Tamhane's analysis was performed.
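A rough equivalent of this decision flow in Python using SciPy (an assumption; the authors used SPSS) is sketched below: Levene's test selects between pooled-variance pairwise t-tests (an LSD-style stand-in) and Welch t-tests (a Tamhane-style stand-in), since SciPy does not implement the exact LSD or Tamhane T2 procedures.

```python
from scipy import stats

def anova_with_posthoc(*groups):
    """One-way ANOVA, then pick a post-hoc family from Levene's test.

    Homogeneous variances (Levene P > 0.05) -> equal-variance pairwise
    t-tests; otherwise Welch t-tests. Illustrative stand-ins only.
    """
    _, p_anova = stats.f_oneway(*groups)
    _, p_levene = stats.levene(*groups)
    equal_var = p_levene > 0.05
    pairwise = {
        (i, j): stats.ttest_ind(groups[i], groups[j], equal_var=equal_var).pvalue
        for i in range(len(groups)) for j in range(i + 1, len(groups))
    }
    return p_anova, equal_var, pairwise
```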
Yield of the therapeutic pcDNA-CCOL2A1 vaccine
An overview of the plasmid DNA yield is presented in Table 1. The plasmid yield was described using three parameters: concentration, volumetric plasmid yield (the weight of plasmid DNA per litre of fermentation broth), and specific plasmid yield (the weight of plasmid DNA per gram wet cell weight). The volumetric plasmid yield obtained from the combined procedure of PEG/MgCl2 precipitation and Triton X-114 was 11.81 ± 1.03 mg/L, significantly higher than that obtained using AEC and a commercial kit (P = 0.008 and 0.032, respectively). The volumetric plasmid yield obtained using AEC was 6.62 ± 0.79 mg/L, which was not significantly different from that obtained using a commercial kit (6.52 ± 0.15 mg/L; P = 0.997). The specific plasmid yield obtained using the combined procedure was 1.78 ± 0.30 mg/g, almost twice that obtained using AEC (0.83 ± 0.05 mg/g, P = 0.001) and the commercial extraction kit (0.77 ± 0.11 mg/g, P = 0.001). The concentrations of the plasmid DNA purified using the combined procedure were also significantly higher than those from the other two processes used in this study (Table 1). The final plasmid concentration in the PEG/MgCl2 group was 670.60 ± 57.42 mg/L, which was also twice that obtained using AEC and a commercial purification kit, at 330.81 ± 39.61 and 325.75 ± 7.67 mg/L, respectively.
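The three yield parameters are simple ratios; for clarity, a hypothetical sketch of how they relate follows (the input names and the final-volume denominator for concentration are assumptions, not taken from the paper).

```python
def yield_parameters(plasmid_mg: float, broth_volume_l: float,
                     wet_cell_mass_g: float, final_volume_ml: float) -> dict:
    """Volumetric yield (mg plasmid / L broth), specific yield
    (mg plasmid / g wet cell weight), and final concentration (mg/L)."""
    return {
        "volumetric_mg_per_L": plasmid_mg / broth_volume_l,
        "specific_mg_per_g": plasmid_mg / wet_cell_mass_g,
        "concentration_mg_per_L": plasmid_mg / (final_volume_ml / 1000.0),
    }
```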
Quality and purity of the therapeutic pcDNA-CCOL2A1 vaccine
Purifying plasmid DNA vaccines for therapeutic applications and animal trials essentially aims to eliminate host residues, such as endotoxins, proteins, genomic DNA, and RNA. To reduce side reactions and ensure the reproducibility of plasmid DNA vaccine activity, the impurities in the plasmid DNA vaccine should meet certain acceptance criteria [13-15]. Table 2 provides information on the impurities, analytical methods, testing results, and acceptance criteria. The LAL assay showed that the amount of endotoxin residues complied with the FDA, NMPA, and EMEA acceptance criteria, regardless of the method used. Contaminating E. coli proteins were not detected using enzyme-linked immunosorbent assay (ELISA), meaning their content was far below the detection limit of 1 μg/mg [13-15]. The amounts of E. coli genomic DNA in the plasmid DNA purified using the PEG/MgCl2 precipitation protocol and AEC were 2.09 ± 0.18 μg/mg and 3.45 ± 0.57 μg/mg, respectively, both significantly lower than the 7.49 ± 0.07 μg/mg obtained using the commercial kit (P = 0.000 and 0.018, respectively) [13-15]. All residues of E. coli genomic DNA in the final products obtained using these three methods complied with the EMEA guidelines [14].
A gel imaging scanning analyser was used to quantify the RNA residues in the plasmid product. The plasmid DNA obtained using PEG/MgCl2 precipitation and the commercial kit had no visible RNA bands, whereas the plasmid obtained using the AEC method had obvious RNA bands, at a percentage of 10.11%. This is because both RNA and plasmid DNA are negatively charged nucleic acid molecules and could not be effectively separated using AEC alone. Therefore, a combination of tangential flow filtration and multi-step chromatography or selective precipitation is commonly used to remove RNA contaminants from plasmid DNA [20,23,24]. Our results confirmed that PEG/MgCl2 precipitation could effectively remove small-molecule RNA from the pcDNA-CCOL2A1 vaccine.
Homogeneity analysis of the therapeutic pcDNA-CCOL2A1 vaccine
Supercoiled DNA is the active isoform of plasmid DNA for expression, translation, and eliciting immune responses in vivo; it directly affects the biological activity of gene products. Other isoforms of plasmid DNA, such as open circular, linear, and denatured plasmid DNA, are commonly generated during fermentation and purification. Furthermore, the content of supercoiled DNA in the final product is related to the host strain and plasmid [25]. The FDA recommends a supercoiled plasmid content of ≥ 80%, whereas the NMPA and EMEA recommend more than 90% [13-15]. In this study, supercoiled DNA analysis of the final product was performed via agarose gel electrophoresis and HPLC. The results demonstrated a high percentage of supercoiled plasmid purified using PEG/MgCl2 precipitation and the commercial kit, with supercoiled DNA percentages of 94.98% and 93.67%, respectively (Fig. 3). However, the proportion of supercoiled plasmid obtained using AEC was 71.24%, indicating that the different isomers of the pcDNA-CCOL2A1 vaccine could not be effectively separated using AEC alone.
Molecular characterisation of the therapeutic pcDNA-CCOL2A1 vaccine
Regardless of the purification process, all the obtained plasmid samples were suitable for downstream applications, such as sequencing and enzyme digestion (Fig. 4).
Biological activity of the therapeutic pcDNA-CCOL2A1 vaccine
The bioactivity of the pcDNA-CCOL2A1 plasmid produced from the three different purification processes was evaluated, as reported previously [7,8]. The results showed that the plasmid products obtained through the different purification methods could significantly reduce the incidence and severity of CIA in rat models, consistent with our previous investigation [7]. Additionally, the physical characteristics of the experimental rats remained normal during the entire observation period. A detailed demonstration of the biological activity of the plasmid is shown in Fig. 5.
Discussion
In this study, we innovatively developed a high-efficiency, cost-effective, and easy-to-operate system for the lab-scale separation and purification of the pcDNA-CCOL2A1 vaccine. Furthermore, we confirmed that the residual E. coli protein, genome, RNA, and endotoxins in the final supercoiled pcDNA-CCOL2A1 vaccine product completely conformed to the international criteria for DNA vaccines [13-15]. Our results will not only provide a sufficient high-quality and high-yield pcDNA-CCOL2A1 vaccine for preclinical research but also promote further pilot-scale and even industrial-scale production of the pcDNA-CCOL2A1 vaccine. Notably, some important advances worthy of in-depth discussion emerged from this study. The production of genetic engineering products requires a complex biological engineering system that includes upstream and downstream process technologies. After the upstream process technologies have been successfully optimised, the downstream process technologies become more critical to the industrialisation of genetic engineering products. Downstream process technologies in genetic engineering typically include large-scale cultures of engineered bacteria (cells) and the separation, purification, and identification of expression products that meet clinical use standards [12]. In recovering genetically engineered products, we should not only pay attention to the use of highly selective separation and purification methods but also consider the factors that affect the biological activity of the final product and the low utilisation rate of the culture medium during fermentation. Thus, several separation and purification steps are necessary, making downstream process technologies more complex than upstream process technologies. Because a large amount of the pcDNA-CCOL2A1 vaccine is required for efficacy studies, safety analyses, pharmacokinetic research, and vaccine stability investigations in preclinical trials, we have successfully established a three-tier cell bank and demonstrated the genetic stability of the engineered E. coli DH5α carrying the pcDNA-CCOL2A1 plasmid to produce this DNA vaccine with high potential [26]. Furthermore, we have systematically optimised the fermentation process for the engineered
E. coli strain and greatly increased the yield of plasmid DNA by 51.9%, with the plasmid DNA yield per unit of bacterial culture reaching 16.97 mg/L [18]. Statistically, the protein content of cell lysates obtained after fermentation is the largest fraction per unit dry weight, accounting for approximately 55% of the total weight, followed by RNA, which accounts for approximately 21%, and other impurities, such as endotoxins and genomic DNA, which account for approximately 21%; the plasmid DNA accounts for only approximately 3% [27]. Therefore, maximising the yield of plasmid DNA is the first goal of the purification process. In addition, plasmids larger than 10 kb increase the difficulty of the purification process because larger plasmids are easily affected by shear force; obtaining a relatively high proportion of supercoiled plasmid DNA is then difficult, and the yield and purity of the plasmid may also be reduced. Therefore, to facilitate subsequent downstream purification, the pcDNA3.1(+) expression vector, which has a length of only 5.428 kb, was used in our therapeutic pcDNA-CCOL2A1 vaccine [5]. These advances provide a solid foundation for further large-scale separation and purification of the pcDNA-CCOL2A1 vaccine with high quality and yield. Downstream separation and purification technologies in genetic engineering should meet several requirements. First, the technical conditions should be mild to maintain the biological activity of the target product. Second, the approach should exhibit good selectivity and effectively separate the target product from the complex mixture to achieve a high purification ratio. Additionally, the yield should be high, and consecutive steps should connect directly without the need to process or adjust the materials, which can reduce the number of process steps. Finally, the entire separation and purification process should be fast, meeting the requirements of high productivity. Thus, different separation and purification strategies and technical routes are usually formulated for the target plasmid DNA depending on the application stage, such as the lab-scale, pilot-scale, and industrial-scale stages. For example, at the lab-scale stage, purifying plasmid DNA usually involves using a commercial purification kit or the hexadecyltrimethylammonium precipitation method. However, these two methods have numerous shortcomings. For example, the quality of the final product is not controllable: plasmid DNA obtained by different technicians in different batches showed instability and poor reproducibility regarding yield and residual impurities, and to purify different plasmid DNAs, it is necessary to repeatedly explore the best purification conditions [28]. Additionally, the purification cost is high. The endotoxin removal solution is a patented component of the kit product and cannot be recycled and reused, significantly increasing the purification cost. The chromatography column used in purification is also non-renewable; thus, the chromatography step greatly increases the purification cost [29,30]. The approach is also time-consuming, and scaling the reaction up is difficult. The use and residual presence of some solvents may also cause safety hazards. Therefore, chromatography techniques are undoubtedly among the best methods for large-scale plasmid DNA purification, with the advantages of high resolution and high separation efficiency. Commonly used methods include affinity chromatography (AC), size-exclusion chromatography (SEC), and AEC.
AC is more sensitive in terms of specificity and selectivity and has therefore become an essential step in the separation of plasmid DNA isomers. SEC is more suitable for the purification of plasmid DNA as part of downstream purification in combination with other purification methods, because the existing media cannot effectively separate the isomers of plasmid DNA. Owing to its versatile functions, AEC can remove a wide range of impurities. However, it is limited by sample volume and quality; therefore, it is suitable for use in the last purification step to achieve the final removal of residual impurities that have not been completely eliminated in previous purification steps [23,24]. Overall, the purification of plasmid DNA with different quality requirements can be achieved via chromatography alone or in combination with other methods. Hence, it is imperative to select different chromatographic processes to obtain plasmid DNA that meets the international quality standards. However, two or more chromatography steps will increase costs while decreasing the recovery of plasmid DNA [31].
In our study, following long-term screening and comparison experiments, we optimised and combined the best separation and purification methods: PEG/MgCl2 precipitation and Triton X-114. We initially chose a commercial purification kit alone or a chromatography purification method alone, both of which yielded unsatisfactory outcomes. AEC alone not only fails to effectively purify the plasmid DNA but also requires a combination of multi-step chromatography and ultrafiltration, in addition to needing a series of high-end instruments. Besides the high cost, the yield of plasmid DNA obtained using a commercial kit alone also failed to meet the requirements of some preclinical experiments. The cost of producing 1 mg supercoiled pcDNA-CCOL2A1 plasmid DNA using the Qiagen Endo-Free Plasmid Mega kit was US$31.67; however, using a combination of PEG/MgCl2 precipitation and Triton X-114, the cost was only US$1.13. Moreover, by combining PEG/MgCl2 precipitation with Triton X-114, an average yield of 11.81 ± 1.03 mg supercoiled plasmid DNA can be purified from 1 L fermentation broth, with a concentration of 670.6 ± 57.42 mg/L, which was significantly higher than that obtained using AEC or the commercial purification kit alone. The plasmid DNA isolated and purified from 1 L fermentation broth could meet the demand of 100 CIA rat models for in vivo efficacy studies, pharmacology assays, and toxicity experiments [8]. In particular, the supercoiled plasmid DNA separated and purified by combining PEG/MgCl2 precipitation with Triton X-114 had high purity and the same biological activity as the plasmid obtained from an internationally used commercial kit. These results indicate that our method is highly efficient, cost-effective, and easy to operate for the lab-scale separation and purification of the pcDNA-CCOL2A1 vaccine.
The clinical applications of therapeutic DNA vaccines have broad prospects; however, their safety must be guaranteed before they can be applied to humans. First, quality control mainly considers the purity and consistency of the plasmid DNA vaccine as well as the presence of residual E. coli proteins, genome, and endotoxins. Second, the final product must reach a certain purity. The establishment and verification of the quality standard of the purified supercoiled plasmid DNA are essential steps in the overall purification process evaluation and are also the most critical issues for ensuring the safety and effectiveness of subsequent use [11,12,16]. Different countries and regions have formulated their own quality standards for the quality control of gene products used for therapeutic purposes. Both the FDA and EMEA have issued a series of strict evaluation principles and quality standards, as follows: host RNA cannot be detected by 0.8% agarose gel electrophoresis, protein ≤ 1 ng/µg plasmid, genomic DNA ≤ 0.002 µg/µg plasmid, endotoxin ≤ 10 EU/mg plasmid, and supercoiled plasmid DNA ≥ 90%, with a total purity of A260/A280 ≥ 1.75 [11-15]. The identification test was performed in accordance with the restriction map after restriction enzyme digestion and electrophoresis (Fig. 4).
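The quoted FDA/EMEA criteria amount to a release checklist; a minimal, illustrative lot-release check follows (the field names are assumptions introduced for the example, not from the guidelines).

```python
def release_check(lot: dict) -> dict:
    """Evaluate a plasmid lot against the criteria listed above."""
    checks = {
        "host_rna_undetected": not lot["rna_visible_on_gel"],
        "protein_ok": lot["protein_ng_per_ug"] <= 1.0,        # <= 1 ng/ug plasmid
        "gdna_ok": lot["gdna_ug_per_ug"] <= 0.002,            # <= 0.002 ug/ug plasmid
        "endotoxin_ok": lot["endotoxin_eu_per_mg"] <= 10.0,   # <= 10 EU/mg plasmid
        "supercoiled_ok": lot["supercoiled_percent"] >= 90.0, # >= 90% supercoiled
        "purity_ok": lot["a260_a280"] >= 1.75,                # A260/A280 ratio
    }
    checks["release"] = all(checks.values())
    return checks
```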
One of the strengths of this study is that the combined purification method of PEG/MgCl2 and Triton X-114 effectively eliminated residual impurities from E. coli in the final supercoiled plasmid DNA product, thereby conforming to the international guidance for DNA vaccines [11-15]. The final purity of the supercoiled plasmid DNA obtained by combining PEG/MgCl2 precipitation with Triton X-114 was 94.98%, far higher than the standards of the FDA and EMEA (90%). Notably, the efficiency of combining PEG/MgCl2 precipitation and Triton X-114 in removing endotoxins was higher than that of AEC and the commercial purification kit. Removal of endotoxin is always one of the bottlenecks in the purification of plasmid DNA. This is because an endotoxin has a sac-like structure, and its molecular weight, charge, and hydrophobicity are similar to those of plasmid DNA. Moreover, the molecular weight of an endotoxin ranges from hundreds of thousands to tens of millions, with a large number of negative charges. Therefore, reducing the content of endotoxin to a safe level using the molecular sieve and AC technologies currently used for the large-scale preparation of pharmaceutical plasmid DNA is challenging. In particular, E. coli, the engineered bacterium used to prepare DNA vaccines, contains a large amount of endotoxin. Usually, a suspension of 10% wet bacteria can produce tens of thousands of EU/mL of lipopolysaccharide (LPS), which is beyond the tolerance range of the human body. Because LPS is a potent pyrogen, a small amount can cause fever, blood circulation disorders, and even death from septic shock. Moreover, the presence of endotoxins can significantly affect the transfection efficiency of cells [32,33]. Therefore, it is essential to remove endotoxin from the supercoiled plasmid DNA and ensure conformity to the relevant safety standards for the clinical application of plasmid DNA vaccines.
Conclusions
To the best of our knowledge, this is the first demonstration of an innovative downstream process system, using only a combination of PEG/MgCl2 precipitation with Triton X-114, successfully developed for the large-scale production of the pcDNA-CCOL2A1 vaccine with high quality and yield. However, it remains uncertain whether this novel purification strategy and technical method can be extended to the separation and purification of other DNA vaccines. Studies are currently underway to validate our methodology using a series of DNA vaccines for the treatment of type I diabetes, transplantation rejection, psoriasis, and other diseases.
Fig. 1
Fig. 1 Process illustration and overview of the PEG/MgCl2 precipitation protocol. The full arrow represents the pellet fraction, and the dashed arrow represents the supernatant fraction after centrifugation at 12,000 × g. All precipitations were performed at 25 °C
Fig. 2
Fig. 2 Schematic outline of the Triton X-114 method in eliminating endotoxins from the pcDNA-CCOL2A1 vaccine
Fig. 3
Fig. 3 Purity analysis of the final plasmid products. a Analysis of the final plasmid using 0.8% agarose gel electrophoresis (5 V/cm, 35 min). Lane 1: plasmid purified via anion exchange column chromatography (AEC). Lane 2: plasmid purified using a commercial kit. Lane 3: plasmid purified using a combination of PEG/MgCl2 precipitation and Triton X-114. OC: open circle; SC: supercoiled circle. b The amplification curves include gene-positive samples assessed with three duplicates per sample using real-time quantitative PCR (RT-qPCR). c Analysis of plasmid forms after purifying the pcDNA-CCOL2A1 vaccine via high-performance liquid chromatography (HPLC)
Fig. 5
Fig. 5 Assessment of the biological efficacy of the plasmid DNA vaccine in collagen-induced arthritis (CIA) rats. a The incidence of arthritis in the experimental rats during the experimental period. b The arthritis scores of the experimental rats during the experimental period. The plasmid DNA vaccine was administered on day 14 before the onset of arthritis, and arthritis progression was monitored over 28 days using a macroscopic scoring system. The data represent the average gross clinical scores for rats treated with 300 μg/kg of vaccine, normal saline (NS), and methotrexate (MTX) (n = 10 for each group). Error bars have been omitted for clarity. These data are representative of three independent experiments, which yielded similar results
Table 1
Yield of the pcDNA-CCOL2A1 vaccine following different purification procedures. Data are expressed as the means ± standard deviations of three independent experiments. *P < 0.05, **P < 0.001. Abbreviations: AEC, anion exchange column chromatography; PEG/MgCl2, polyethylene glycol/magnesium chloride
Table 2
Quality analysis of the pcDNA-CCOL2A1 vaccine via different purification procedures. Data are expressed as the mean ± standard deviation (SD) of three independent batch experiments. *P < 0.05, **P < 0.001. Abbreviations: AGE, agarose gel electrophoresis; ELISA, enzyme-linked immunosorbent assay; EU/mg, endotoxin units per milligram; gDNA, genomic DNA; HPLC, high-performance liquid chromatography; LAL assay, Limulus amoebocyte lysate assay; Q-PCR, quantitative polymerase chain reaction. a NMPA guidelines
The local wisdom and land use of paddy field in Sukarame Village, Cisolok Sub-district, Sukabumi Regency
Local wisdom is an understanding of a culture that has been inherited in a place from generation to generation by word of mouth. Indonesia, which remains largely agrarian, always involves local wisdom in the use of its lands. Land use is a visualization of the earth's surface cover, which takes various forms, both natural and human-made. In the Village of Sukarame, there is a representative of a kasepuhan from the Banten Kidul indigenous community, namely Kasepuhan Ciptagelar, which still adheres to traditional farming methods to this day. The purpose of this study was to determine the relationship between land use and the local wisdom of the indigenous population living in the Village of Sukarame. The methodology of this study was a qualitative descriptive method, carried out through interviews, field observation, and documentation. The data used in this study include High-Resolution Satellite Imagery from BIG (2018) and questionnaires. The analysis was also carried out in a qualitative descriptive manner. The results of the research indicate that there is a relationship between paddy fields and the tradition of Ciptagelar in the Village of Sukarame. The distribution of paddy fields tied to the culture of Ciptagelar is characterized by the type of paddy, namely local varieties harvested once a year. Those paddy fields are located only in the Hamlet of Lebak Lengsir and the Hamlet of Pamokoan, whose communities still adhere to the tradition of Ciptagelar.
INTRODUCTION
Local wisdom is an understanding of a culture that has been inherited in a place from generation to generation by word of mouth. Etymologically, local wisdom consists of the words 'wisdom': good value, and 'local': area/object. In general, local wisdom can be understood as local ideas that are wise, full of knowledge, and of good value, which are imprinted on and followed by members of the community. Local wisdom has many functions: 1) conservation and preservation of natural resources; 2) human resources development; 3) development of culture and science; 4) advice, belief, literature, and taboo; 5) ethical and moral guidance; and 6) political meaning. Every phenomenon or cultural expression has always been based on 1) ideas, propositions, values, and norms; 2) patterns of activities or actions of the people in the society; and 3) artifacts, and local wisdom follows the same analogy. Generally, local wisdom emerges through an internal process and is passed down over a long time as a result of the interaction between humans and their environment. Therefore, the local wisdom that formed in Sukarame Village is based on adjusted ideas that have been inherited from generation to generation. Local knowledge is an entity that is crucial for human dignity in the community.
An agrarian country such as Indonesia has land use that has been influenced by local wisdom that is still in effect today. Land use is a visualization of the surface cover of the earth, including the formations made by humans. Land can also be interpreted as the place where people settle, for example, a place or area of residence and shared space, where they can use the environment to sustain themselves and continue to develop. There is also a statement that the physical environment, which includes climate, relief, soil, hydrology, and plants, will, to some extent, affect the capability of land use. Land use is defined as any form of human intervention on land to meet material and spiritual needs. As a form of human intervention on earth to meet material or spiritual needs, the types of land use can be grouped as follows: 1) settlements; 2) mixed annual plants, dry land, not intensive; 3) mixed annual plants, dry land, intensive; 4) paddy fields; 5) smallholder plantations; 6) large plantations; 7) production forests; 8) natural forests; 9) grazing fields; 10) protection forests; 11) nature reserves.
One example of land use that is influenced by local wisdom can be found in Sukarame Village in Cisolok Subdistrict, Sukabumi District. The village government, in managing its household, has sources of funds and wealth sought by the village government itself to supply the funds for government administration and village development. Most of the income of the people of Sukarame Village comes from agriculture, as many of them work as farmers, a farmer being a person whose livelihood is farming. Sukarame Village, which is an expansion of Karangpapak Village, consists of four hamlets, namely Langkob, Lebak Lengsir, Sukarame, and Pamokoan. The majority of their residents work as farmers, whether they own their land, cultivate other people's land, or work on government-owned land. Of the four hamlets in Sukarame Village, there are two whose communities have a unique tradition in processing their agricultural products, namely the Lebak Lengsir and Pamokoan Hamlets.
In Sukarame Village, precisely in Lebak Lengsir Hamlet and Pamokoan Hamlet, there are Kasepuhan representatives from the Banten Kidul indigenous people, namely Kasepuhan Ciptagelar, which still upholds traditional farming methods up to now. Kasepuhan itself comes from the word sepuh, which means old, and from that comes the word sesepuh, which means an elder or an older person who is usually the leader of an organization. A Kasepuhan can also be understood as an association of many heads of families and of small and large villages bound together by customs and culture. One of the Kasepuhan in West Java is Kasepuhan Ciptagelar, a group of indigenous people who still carry on the tradition of their ancestors (Karuhun) based on rice culture. The Ciptagelar community has local wisdom that is valuable in fostering harmony with the natural environment so that environmental sustainability is maintained. There are customary land tenure arrangements in Kasepuhan Ciptagelar, beyond the technical aspects of land use, especially in the determination of the physical boundaries of Kasepuhan Ciptagelar. This makes the connection between land use and traditions in Kasepuhan Ciptagelar interesting to study more deeply.
Several earlier studies relate to this research. A 2017 study on the correlation between local wisdom and the management of agricultural land resources in the Sileng Purba river valley in Borobudur District indicated that local wisdom in agriculture is adapted to the watering needs of plants using the spring water sources in the research area. A 2017 study on the local knowledge of the Dayak Ngaju people in Central Kalimantan in conducting land preparation also produced results related to this research: local wisdom is still used in preparing agricultural land, from land preparation techniques to traditional practices such as thinning and repairs, which previously need to be discussed together. Research on the wisdom of local communities in forest management in Rano Village, Balaesang Tanjung District, Donggala District, likewise found that residents in Rano Village still uphold the traditions they have known from the past, as seen in the land selection, land clearing, and agricultural processes listed in the topomaradia traditional institution, which contains a set of rules reflecting the attitude of the ethnic To'Balaesan community in the research area. The present research examines the land and how it is managed in Sukarame Village among residents who still carry out the tradition of the Kasepuhan Ciptagelar community. The research conducted can help identify the differences between the cultures adopted in Sukarame Village based on the number of paddy harvests within a year.
The benefit of this research is to provide up-to-date information for the village government related to the different types of rice fields in Sukarame Village so that it can better focus on managing, planning, and supervising the land. This is because this research obtains precise information on the locations of rice fields managed by Kasepuhan Ciptagelar and those operated by other communities. This research is an essential topic because agricultural land, such as paddy fields, is critical to providing food and fibre for the people and is a vital source of employment in Sukarame Village. Hence, it is necessary to provide an appropriate spatial description of land uses for future planning and for sustaining agricultural land while still respecting the local wisdom that has existed for a long time. Therefore, this research needs to be done to find out whether or not there is a difference between the culture that exists in each hamlet and the number of paddy harvests within a year.

This research is categorized as cultural geography, and the research method used is a descriptive qualitative method. The researcher collects data from the field and then analyzes it with a descriptive approach to draw conclusions based on the correlation of the data found in the field. The descriptive research method aims to describe or recount the situation or events in an area based on facts obtained in the field, in the form of direct information (primary data) or indirect information (secondary data). This study uses several variables that are divided into physical variables and social variables. The physical variables used include altitude, slope, and land use. The social variables used include information about local wisdom, habits, tools, methods, and human activities that affect land use. The physical data parameters used are the area of paddy fields and the planting period. The social data parameters are the planting method, harvest tools, and the harvesting method. Data were collected by several methods: in-depth information about the local wisdom of 'Ciptagelar' was obtained by collecting secondary data, conducting interviews with the people of Sukarame Village, making direct observations in the Sukarame Village area, and gathering documentation related to the local wisdom of 'Ciptagelar'. After collecting the data, the next step is data processing, which is performed using ArcGIS 10.1. The data obtained are processed to produce an output in the form of a verified land use map of Sukarame Village. The results of data processing, which consist of physical variables and social variables, were analyzed using qualitative descriptive methods.
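For readers who prefer a scriptable alternative to the ArcGIS workflow above, the following R sketch (using the sf package) tabulates land-use areas from a polygon layer; the file name sukarame_landuse.shp and the attribute fields landuse and hamlet are hypothetical placeholders, not data released by this study.

# Minimal sketch of the area tabulation performed in ArcGIS:
# read a land-use polygon layer, compute polygon areas in hectares,
# and summarize by land-use class and by hamlet.
library(sf)

lu <- st_read("sukarame_landuse.shp")          # hypothetical layer name
lu$area_ha <- as.numeric(st_area(lu)) / 1e4    # m^2 -> hectares (projected CRS assumed)

# Total area and percentage share per land-use class
area_by_class <- aggregate(area_ha ~ landuse, data = lu, FUN = sum)
area_by_class$percent <- 100 * area_by_class$area_ha / sum(area_by_class$area_ha)
print(area_by_class)

# Paddy-field area per hamlet (once-a-year vs twice-a-year classes)
paddy <- subset(lu, grepl("paddy", landuse, ignore.case = TRUE))
print(aggregate(area_ha ~ hamlet + landuse, data = paddy, FUN = sum))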
Physical Condition of Sukarame Village
Altitude is the vertical location of a point above a reference plane, where the reference used is the average vertical distance from sea level. Sukarame Village lies at altitudes ranging from <200 to 1500 meters above sea level. Altitudes of <200 meters above sea level are found in the southern part of Sukarame Village, precisely in Sukarame Hamlet. Altitudes in the interval 201-500 meters above sea level are found in the middle of Sukarame Village, precisely in Sukarame Hamlet, and in the southwest, in Langkob Hamlet. Altitudes in the interval 501-1500 meters above sea level are found in the western part, precisely in Lebak Lengsir Hamlet, and in the east of Sukarame Village, precisely in Pamokoan Hamlet. The shape and contours of Sukarame Village follow the Ci Sukarame river, which runs through the middle of the village. Altitude can affect the type and condition of vegetation that grows in a place, so it is often a requirement for growing specific plants. In this study, altitude can affect rice yields.
A slope is a plane that connects higher ground surfaces with lower ground surfaces [13]; slope is also the magnitude of the angle formed by that plane. Sukarame Village has slopes covering six of the seven classes of the Van Zuidam slope classification. The six categories are flat (0-2%) and slightly tilted (3-7%), scattered in the Langkob, Sukarame, and Pamokoan hamlets along the Ci Sukarame river; sloping (8-15%), spread across Langkob Hamlet; rather steep (16-30%), which dominates Sukarame Hamlet; steep (31-70%), which dominates Pamokoan Hamlet; and very steep (71-140%), which dominates Lebak Lengsir Hamlet and part of Sukarame Hamlet.
Based on the research results (Figure 1), Sukarame Village has an area of 1256.78 hectares, dominated by mixed farms at 60.70%. Paddy land use in Sukarame Village covers 15.69% of the area in total and is divided into two types: non-technical irrigated paddy fields planted once a year and those planted twice a year. This indicates that land use is a human activity on earth that aims to meet human needs. The difference in planting frequency is due to the custom that governs planting and processing rice. The non-technical irrigated paddy fields planted once a year are located only in the Lebak Lengsir and Pamokoan Hamlets.
Lebak Lengsir Hamlet is in the northern part of Sukarame Village at an altitude of 501-1500 meters above sea level and is a mountainous area with very steep slopes. Lebak Lengsir Hamlet has three hutments (kampongs), namely Lebak Lengsir, Remalega, and Cikuluwung. Land use in Cikuluwung is dominated by mixed farms, followed by non-technical irrigated paddy fields planted once a year (Figure 2). The area of paddy fields planted once a year in Lebak Lengsir Hamlet is 83.07 hectares, while the paddy fields planted twice a year cover only 30.89 hectares. This dominance is supported by traditional and cultural factors in Kampong Cikuluwung, which embraces Kasepuhan Ciptagelar.
Pamokoan Hamlet is located in the eastern part of Sukarame Village at a dominant altitude of 200-500 meters above sea level and is a hilly area with steep slopes. Pamokoan Hamlet has two hutments, namely Cijangkorang and Pamokoan. Land use in Pamokoan Hamlet is dominated by mixed farms, followed by non-technical irrigated paddy fields planted twice a year (Figure 2); these cover 23.24 hectares. Land use in the east of Ci Sukarame in Pamokoan Hamlet is dominated by non-technical irrigated paddy fields planted once a year (Figure 1), covering 10.64 hectares. Some of the Pamokoan communities still adhere to Kasepuhan Ciptagelar, as in the Cikuluwung hutment, especially the people living east of Ci Sukarame. Since paddy fields are usually planted twice a year, this difference can be used as a sign of whether the land belongs to the indigenous people or to non-indigenous residents.
Local Wisdom of Kasepuhan Ciptagelar
Human activities in Sukarame Village are dominated by primary activities such as farming. The cultivated land can be one's own, someone else's, or state-owned. The primary crop commodities in Sukarame Village are rice and bananas. The harvest is then consumed by the farmers themselves or sold directly to consumers or intermediaries. Most of the people of Lebak Lengsir Hamlet, precisely in Cikuluwung Kampong, and of Pamokoan Hamlet, precisely in Pamokoan Kampong, follow the Kasepuhan Ciptagelar tradition.
Communities of the kampongs that embrace Kasepuhan Ciptagelar still follow ancestral traditions, called karuhun, based on rice culture. The people of Kasepuhan Ciptagelar determine the time to plant and harvest rice with the help of the "Village Teacher", namely the Kereti and Kidang stars. The two stars move from east to west in tandem once a year. Around August, the Kereti star begins to emerge, which means that the community can immediately make tools for farming. This event is called "Tanggal kereti, turun beusi".
The emergence of the Kidang star indicates that the community can begin clearing land and working on paddy fields. This emergence is called "Tanggal kidang, turun kujang". The types of rice planted by the community are endemic varieties that can only be found in the area. The commodities generally farmed by the Pamokoan community are Cere Kiara, Parejambu, Gajah Panjang, and Bulir Besar. One bundle of rice seeds will produce 30 bundles of rice, which are equivalent to 5 kilograms of rice. Rice planting may only be done once a year and simultaneously, following the Kereti and Kidang stars.
The disappearance of the Kidang star, around May, is a sign that the rice has to be harvested; this is called "Tilem kidang, turun kungkang". Paddy is not cut with modern tools but with a tool called an etem (Figure 3), to minimize wasted rice grains, and the harvested rice is stored in rice barns called leuit (Figure 4). Rice pounding is done traditionally without using a machine (Figure 5). To turn the yields into cooked rice, they do not use modern rice cookers but stoves. The yields of the Kasepuhan Ciptagelar community cannot be sold to others, either as rice or as paddy, because the people of Kasepuhan Ciptagelar believe that rice is life: selling rice is equivalent to selling life or killing people. The harvested rice must be stored in a leuit to meet the needs of the Kasepuhan Ciptagelar community itself and is eventually used in traditional ceremonies.
Kasepuhan Ciptagelar has ritual ceremonies related to rice. The Ngaseuk Ceremony is held when planting rice and the Mipit Ceremony when starting to harvest rice. The Nganyaran Ceremony is held when they start cooking the new rice. The Ponggokan Ceremony is then performed as an apology to the earth that has been worked for agricultural purposes. The Seren Taun Ceremony is the culminating ceremony, an expression of gratitude and prayer for successful agrarian products. The climax of the Seren Taun Ceremony is Ngadieuken Pare, in which a bunch of mother rice is placed into the leuit. This process is carried out by the leader of Kasepuhan Ciptagelar, Abah Ugi Sugriana Rakasiwi, and his wife, Emak Alit.
Kasepuhan Ciptagelar's local wisdom influences the condition of paddy fields in Sukarame Village. With the existence of Kasepuhan Ciptagelar, there are two types of paddy fields in Sukarame Village, namely paddy fields with a rice-planting period of once a year and paddy fields with a planting period of twice a year. The hamlets that adhere to the tradition of Kasepuhan Ciptagelar, namely Pamokoan Hamlet and Lebak Lengsir Hamlet, have paddy fields with a planting period of once a year; in the other hamlets, the paddy fields have a planting period of twice a year.
CONCLUSION
Land use in Sukarame Village is influenced by physical variables such as altitude and slope. It is also influenced by the tradition of Kasepuhan Ciptagelar, namely in Lebak Lengsir Hamlet and Pamokoan Hamlet. The existence of people who embrace Kasepuhan Ciptagelar, which upholds rice culture, makes land use in Lebak Lengsir Hamlet dominated by non-technical irrigated paddy fields planted once a year, especially in the Cikuluwung hutment. In Pamokoan Hamlet there are also non-technical irrigated rice fields planted once a year, because the people there likewise adhere to Kasepuhan Ciptagelar. Since paddy fields in general are planted twice in one year, this difference can be used as a sign of whether the land is owned by indigenous or non-indigenous people.
The association of clinical phenotypes to known AD/FTD genetic risk loci and their inter-relationship
To elucidate how variants in genetic risk loci previously implicated in Alzheimer's Disease (AD) and/or frontotemporal dementia (FTD) contribute to expression of disease phenotypes, a phenome-wide association study (PheWAS) was performed in two waves. In the first wave, we explored clinical traits associated with thirteen genetic variants previously reported to be linked to disease risk using both the 23andMe and UKB cohorts. In the second wave, we tested 30 additional AD variants in the UKB cohort only. The APOE variants defining the ε2/ε3/ε4 alleles and rs646776 were identified to be significantly associated with metabolic/cardiovascular and longevity traits; the APOE variants were also significantly associated with neurological traits. ABI3 variant rs28394864 was significantly associated with cardiovascular traits (e.g. hypertension, ischemic heart disease, coronary atherosclerosis, angina) and the immune-related trait asthma. Both the APOE variants and the CLU variant were significantly associated with nearsightedness, and the HLA-DRB1 variant was associated with immune-related traits. Additionally, variants from 10+ AD genes (BZRAP1-AS1, ADAMTS4, ADAM10, APH1B, SCIMP, ABI3, SPPL2A, ZNF232, GRN, CD2AP, and CD33) were associated with hematological measurements such as white blood cell (leukocyte) count, monocyte count, neutrophil count, platelet count, and/or mean platelet (thrombocyte) volume (an autoimmune disease biomarker). Many of these genes are expressed specifically in microglia. The associations of the ABI3 variant with cardiovascular and immune-related traits are among the novel findings of this study. Taken together, the evidence shows that at least some AD and FTD variants are associated with multiple clinical phenotypes, not just dementia. These findings are discussed in the context of causal relationship versus pleiotropy via Mendelian randomization analysis.
Introduction
Genome Wide Association Study (GWAS) is a powerful approach for identifying genetic risk loci. However, functional elucidation of how a risk locus relates to a disease still requires substantial follow-up work. Recent GWAS meta-analyses, including samples from UK Biobank using family history as a proxy and from the Alzheimer's Disease Sequencing Project (ADSP), allow identification of additional novel risk loci [29][30][31][32]. In wave 2, we further include 30 variants from these more recent GWAS meta-analyses [29][30][31][32] or variants identified earlier but not prioritized in the wave 1 PheWAS. The PheWAS approach has been previously applied to BioVU, Vanderbilt's DNA biobank, where phenotypes are defined by EMR records, namely ICD codes [33]; to the 23andMe research database, where phenotypes are defined by self-reports [34]; or to both [35]. It has the potential of validating targets, nominating treatment indications, and/or assessing safety signals, especially if the effect of a genetic variant mimics the pharmacotherapy effect [36]. Given that the genes implicated in AD/FTD have been implicated in cholesterol metabolism (APOE, CLU, and ABCA7) and immune response (CR1, CD33, CLU, ABCA7, TREM2, SPPL2A, SCIMP, HLA-DRB1), it is foreseeable that some of the AD variants may also be associated with metabolic/cardiovascular and/or immune-related traits. The recent development of using genetic variants as instrument variables in GWAS summary statistics based Mendelian randomization (MR) [37] provides another means to dissect pleiotropy vs. causal relationships between related traits.
Study participants
Cohort 1: 23andMe. All individuals included in the analyses were research participants of 23andMe who provided electronic informed consent, DNA samples for genetic testing, and answered surveys online. The study was conducted according to a human subject protocol, which was reviewed and approved by Ethical & Independent Review Services, a private institutional review board (http://www.eandireview.com). It is also consistent with procedures involving experiments on human subjects performed in accord with the ethical standards of the Committee on Human Experimentation of the institution in which the experiments were done, or with the Helsinki Declaration of 1975. All data were completely anonymized and de-identified before access by the analyst for data analysis.
As described previously [38], DNA samples were genotyped on one of four genotyping platforms. The v1 and v2 platforms were variants of the Illumina HumanHap550+ BeadChip (Illumina, San Diego, CA, USA), including about 25,000 custom single-nucleotide polymorphisms (SNPs) selected by 23andMe, with a total of about 560,000 SNPs. The v3 platform was based on the Illumina OmniExpress+ BeadChip, with custom content to improve the overlap with the v2 array, with a total of about 950,000 SNPs. The v4 platform is a fully custom array, including a lower-redundancy subset of v2 and v3 SNPs with additional coverage of lower-frequency coding variation, and about 570,000 SNPs. S1 Table shows which 23andMe genotype platform (v1-v4) each tested variant was genotyped on. It also shows the imputation statistics for each tested variant, including the average imputation dosages for the first (A) and second (B) alleles (freq.a and freq.b) and the average and minimum imputation quality across all batches (avg.rsqr and min.rsqr). The r² statistic, which ranges from 0 (worst) to 1 (best), is used to measure imputation quality. The batch effect test is an F test from an ANOVA of the SNP dosages against a factor representing imputation batch.
Only participants enrolled by 2015 were included in this analysis. A similar approach using the same research database was previously described [34]. We tested the association with more than 1,000 well-curated phenotypes (S2 Table), which were distributed among different phenotypic categories (e.g. cognitive, autoimmune, psychiatric, etc.). GWAS were previously performed on these well-curated phenotypes and confirmed to replicate known associations and not to generate spurious false positives. For our standard PheWAS, we restricted participants to a set of individuals who have >97% European ancestry, as determined through an analysis of local ancestry [39]. Briefly, this algorithm first partitioned phased genomic data into short windows of about 100 SNPs. Within each window, we used a support vector machine (SVM) to classify individual haplotypes into one of 31 reference populations. The SVM classifications were then fed into a hidden Markov model (HMM) accounting for switch errors and incorrect assignments as well as generating probabilities for each reference population in each window. Finally, we used simulated admixed individuals to recalibrate the HMM probabilities so that the reported assignments were consistent with the simulated admixture proportions. The reference population data are derived from public datasets (the Human Genome Diversity Project, HapMap, and 1000 Genomes; participants provided informed consent and all data were completely anonymized and de-identified before access and analysis), as well as 23andMe customers who have reported having four grandparents from the same country. A maximal set of unrelated individuals was chosen for each phenotype using a segmental identity-by-descent (IBD) estimation algorithm [40]. Individuals were defined as related if they shared more than 700 cM IBD, including regions where the two individuals share either one or both genomic segments identical-by-descent. This level of relatedness (roughly 20% of the genome) corresponds approximately to the minimal expected sharing between first cousins in an outbred population.
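As a rough illustration (not 23andMe's production code), the R sketch below implements the kind of greedy unrelated-set selection described above: for every pair sharing more than 700 cM IBD, one member is excluded. The data frame columns id1, id2, and ibd_cm are hypothetical placeholders.

# Greedily exclude one member of every pair sharing > 700 cM IBD
# (roughly the sharing expected between first cousins).
select_exclusions <- function(pairs, threshold_cm = 700) {
  related <- pairs[pairs$ibd_cm > threshold_cm, ]
  excluded <- character(0)
  for (i in seq_len(nrow(related))) {
    a <- related$id1[i]
    b <- related$id2[i]
    # only act if the pair is not already broken by an earlier exclusion
    if (!(a %in% excluded) && !(b %in% excluded)) {
      excluded <- c(excluded, b)
    }
  }
  excluded
}

pairs <- data.frame(id1    = c("s1", "s1", "s2"),
                    id2    = c("s2", "s3", "s4"),
                    ibd_cm = c(1200, 150, 800))
select_exclusions(pairs)  # returns "s2"; s1, s3 and s4 form an unrelated set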
The imputed dosages, rather than best-guess genotypes, were used for association testing in the PheWAS. Participant genotype data were imputed against the September 2013 release of the 1000 Genomes Phase 1 reference haplotypes, phased with ShapeIt2 [41]. Genotype data for research participants were generated from four versions of genotyping chips as described previously [38]. We phased and imputed data for each genotyping platform separately. We phased using the phasing tool Finch, which implements the Beagle haplotype graph-based phasing algorithm [42], modified to separate the haplotype graph construction and phasing steps. Finch extends the Beagle model to accommodate genotyping error and recombination, to handle cases where there are no consistent paths through the haplotype graph for the individual being phased. We constructed haplotype graphs for European and non-European samples on each 23andMe genotyping platform from a representative sample of genotyped individuals, and then performed out-of-sample phasing of all genotyped individuals against the appropriate graph.
In preparation for imputation, we split phased chromosomes into segments of no more than 10,000 genotyped SNPs, with overlaps of 200 SNPs. We excluded SNPs with Hardy-Weinberg equilibrium p < 10⁻²⁰, call rate < 95%, or large allele frequency discrepancies compared to European 1000 Genomes reference data. Frequency discrepancies were identified by computing a 2x2 table of allele counts for European 1000 Genomes samples and 2,000 randomly sampled 23andMe customers with European ancestry, and identifying SNPs with a chi-squared p < 10⁻¹⁵. We imputed each phased segment against all-ethnicity 1000 Genomes haplotypes (excluding monomorphic and singleton sites) using Minimac2 [43], using 5 rounds and 200 states for parameter estimation.
Association test results were computed using logistic regression for case-control comparisons, or linear regression for quantitative traits. For survival traits, association test results were computed using Cox proportional hazards regression. We assumed additive allelic effects and included covariates for age, gender, and the top five principal components to account for residual population structure. The association test p-value reported was computed using a likelihood ratio test, which was shown to be a better choice despite its computational demands [44]. We report raw p-values for the PheWAS association results, but interpret the results taking into account the number of variants and traits tested. An association with p < 0.05 / (13 × 1,234) = 3.12 × 10⁻⁶ was deemed a significant association; other associations with FDR < 0.05 were deemed suggestive associations.
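A minimal sketch of one such association test for a binary trait is shown below (ordinary R, not the 23andMe analysis pipeline); the data frame d and its columns (case, dosage, age, gender, PC1-PC5) are hypothetical placeholders.

# One PheWAS test: additive dosage model vs. null model,
# with the p-value from a likelihood ratio test, as described above.
run_phewas_test <- function(d) {
  null_fit <- glm(case ~ age + gender + PC1 + PC2 + PC3 + PC4 + PC5,
                  family = binomial, data = d)
  full_fit <- glm(case ~ dosage + age + gender + PC1 + PC2 + PC3 + PC4 + PC5,
                  family = binomial, data = d)
  lrt <- anova(null_fit, full_fit, test = "LRT")  # likelihood ratio test
  c(beta = unname(coef(full_fit)["dosage"]),      # ln(OR) per copy of the B allele
    p    = lrt[2, "Pr(>Chi)"])
}

# Multiplicity handling used in wave 1:
bonferroni_threshold <- 0.05 / (13 * 1234)        # = 3.12e-06
# Suggestive associations: Benjamini-Hochberg FDR across all tests, e.g.
# suggestive <- p.adjust(p_values, method = "BH") < 0.05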
Cohort 2: UK Biobank (UKB). Pre-computed UK Biobank PheWAS results based on the Neale lab UK Biobank summary statistics were looked up via Open Targets Genetics [28] (genetics.opentargets.org) for side-by-side comparison with the PheWAS results based on the 23andMe cohort. Three versions of UKB results were used: Neale v1 PheWAS results were accessed via Open Targets Genetics in November 2019, and Neale v2 PheWAS results (http://www.nealelab.is/uk-biobank) were accessed in July 2020. UKB SAIGE is yet another version of UKB PheWAS results, by the University of Michigan (http://pheweb.sph.umich.edu/SAIGE-UKB/about). There are 4,593 traits in total in the Neale v2 PheWAS analysis. We report raw p-values for the PheWAS association results, but interpret the results taking into account the number of variants and traits tested. An association with p < 0.05 / (48 × 4,593) = 2.27 × 10⁻⁷ was deemed a significant association. No additional adjustment was made for the Neale v1 PheWAS results (a few traits present in v1 were not present in v2) and/or the UKB SAIGE PheWAS.
Whole-genome genetic correlations between significant PheWAS traits
For convenience of collecting whole-genome summary statistics, AD summary statistics from the Jansen et al. study [30] were used to calculate genetic correlations with other traits using LD Hub (v1.9.3) [45], which is a centralized database of summary-level GWAS results and a web interface for LD score regression (LDSC) [46].
Directional horizontal pleiotropy vs causal relationship
For the multiple clinical phenotypes associated with the AD/FTD variants identified in the PheWAS analysis, we attempted to untangle the relationship between trait A and trait B, to determine whether genetic variants impact trait A (also called the exposure in the literature) and trait B (also called the outcome) independently, or whether the genetic variants' effect on trait B is mediated by trait A (or vice versa). We applied the MR Egger intercept test [47,48] to test for directional horizontal pleiotropy, where the variants affect both trait A (e.g. CAD) and trait B (e.g. AD) independently. MR uses genetic variants as a proxy for an environmental exposure/trait A (e.g. CAD), assuming that: 1) the genetic variants are associated with the exposure/trait A; 2) the genetic variants are independent of confounders of the exposure-outcome association; 3) the genetic variants are associated with the outcome only via their effect on the exposure, i.e. there is no horizontal pleiotropy whereby genetic variants have an effect on an outcome (e.g. AD) independent of their influence on the exposure (e.g. CAD). If the MR Egger intercept test had a significant p-value (p < 0.05) (i.e. violating assumption #3 of the MR analysis), the pair of traits was excluded from the bi-directional, two-sample MR test using the inverse variance weighted (IVW) method among traits identified in the PheWAS study. In this case (Egger intercept p < 0.05), the gene-outcome vs. gene-exposure regression coefficient is estimated using MR Egger regression to correct for the bias due to directional pleiotropy, under a weaker set of assumptions than typically used in MR [49]. Neither IVW nor MR Egger regression, however, protects against violation of assumption #2. The MR analysis is also only feasible if there is sufficient information from MR Base [50] for analysis, or if the information can be supplemented by manually adding GWAS results from publications, e.g. the recent AD meta-analysis by Jansen et al. [30]. In the MR analysis, we primarily leveraged variants implicated in a trait from public summary statistics (pre-compiled as a set of instruments from the NHGRI-EBI GWAS Catalog [51] in the MRInstruments R package v0.3.2, https://github.com/MRCIEU/MRInstruments), as an individual variant is unlikely to be powerful enough as an instrument variable unless the effect size is large. Instrument variables were constructed using the default independent genome-wide significant SNPs (p ≤ 5 × 10⁻⁸) for AD and other diseases/risk factors, except for FTD, where a p-value threshold of p ≤ 6 × 10⁻⁶ was used because of the smaller GWAS sample size [52]. We assessed whether bi-directional causal relationships exist between AD and a number of significant PheWAS traits identified in the PheWAS analyses. For FTD, only directional MR analysis was performed, using FTD as the exposure, as only top GWAS hits were available publicly. All analyses were performed using the MR-Base 'TwoSampleMR' v0.5.4 package [50] in R, and MR tests with nominal p < 0.05 using the inverse variance weighted and/or MR Egger method are reported. A p < 0.05 / # of PheWAS traits examined is considered significant, while a p < 0.05 is considered suggestive.
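The two-sample MR workflow described above can be sketched with the TwoSampleMR package as follows; the OpenGWAS study identifiers 'ieu-a-300' (LDL cholesterol) and 'ieu-b-2' (AD) are illustrative placeholders that should be verified against the catalog before use.

# Two-sample MR sketch: exposure (e.g. LDL cholesterol) -> outcome (AD).
library(TwoSampleMR)

# 1) Instrument: independent genome-wide significant SNPs for the exposure
exposure_dat <- extract_instruments(outcomes = "ieu-a-300")   # placeholder ID

# 2) The same SNPs (or LD proxies) in the outcome GWAS
outcome_dat <- extract_outcome_data(snps = exposure_dat$SNP,
                                    outcomes = "ieu-b-2")     # placeholder ID

# 3) Harmonise effect alleles between the two studies
dat <- harmonise_data(exposure_dat, outcome_dat)

# 4) MR Egger intercept test for directional horizontal pleiotropy
print(mr_pleiotropy_test(dat))   # intercept p < 0.05 -> prefer MR Egger slope

# 5) Causal estimates with IVW and MR Egger, as in the paper
print(mr(dat, method_list = c("mr_ivw", "mr_egger_regression")))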
Results
Thirteen variants were successfully imputed from the four genotyping platforms in the 23andMe cohort, with the average and minimum imputation quality across all batches (avg.rsqr and min.rsqr) ranging from 0.96 to 1 for avg.rsqr (average across the 13 variants was 0.995) and from 0.86 to 1 for min.rsqr (average = 0.982) (S1 Table).
AD-risk variants are highly associated with neurological, longevity, metabolic, cardiovascular, eye, and immune-related traits

Selected PheWAS findings are summarized in Tables 1 and 2 for the wave 1 and wave 2 PheWAS, alongside the known associations reported in the NHGRI-EBI GWAS Catalog [51] for the SNPs previously associated with LOAD and/or FTD. The list of PheWAS association results using the 23andMe cohort with FDR < 0.05 is available as S3 Table, while the full list of PheWAS association results is available in S4 Table. An association in the 23andMe cohort with p < 0.05 / (13 × 1,234) = 3.12 × 10⁻⁶ was deemed a significant association; other associations with FDR < 0.05 were deemed suggestive associations. A number of the known associations were replicated, and novel associations were identified. The two SNPs, rs429358 and rs7412, defining the APOE ε2/ε3/ε4 alleles, were known to be associated with multiple neurological, longevity, metabolic, and cardiovascular traits (Fig 1, Table 1, and S3 Table). Subjects carrying the minor T allele of rs7412 are APOE ε2 protective allele carriers, and subjects carrying the minor C allele of rs429358 are APOE ε4 risk allele carriers. PheWAS identified significant associations with metabolic traits (high cholesterol or taking drugs to lower cholesterol, body mass index (BMI)), neurological traits (AD family history, AD, cognitive decline, mild cognitive impairment, memory problems), longevity traits (nonagenarian: at least 90 years old; healthy old: over age 60 with no cancer or disease; centenarian family), cardiovascular diseases (coronary artery disease (CAD), metabolic and heart disease), eye problems (nearsightedness, glasses usage, myopia vs. hyperopia), and serious side effects from statins (rs429358, p = 7.14 × 10⁻⁷). The directionality of association is consistent with the protective vs. risk effect of the two APOE SNPs, in that the minor allele of rs7412 was associated with a lower risk of high cholesterol, while the minor allele of rs429358 was associated with a higher risk of high cholesterol (p = 6.6 × 10⁻²⁹⁵). Additional suggestive associations were identified (FDR < 0.05) for age-related macular degeneration (AMD) or blindness (rs429358, p = 9.59 × 10⁻⁵, FDR = 0.004). Interestingly, rs11136000 from CLU is also strongly associated with multiple eye phenotypes (nearsightedness, myopia, glasses, astigmatism) (Table 1, Fig 2, and S3 Table). Subjects carrying the minor allele of rs429358 had a lower chance of nearsightedness (p = 1.4 × 10⁻⁸), while subjects carrying the minor allele of rs11136000 had a higher chance of nearsightedness (p = 4.5 × 10⁻¹⁵). For the overlapping phenotypes, UK Biobank PheWAS results largely supported the 23andMe PheWAS findings.
[Table 1 footnotes] iqb.alzheimers_fh: cases report having any of their grandparents, parents, brothers, sisters, aunts, or uncles ever been diagnosed with AD; Alzheimer: AD, age 55 or older; cognitive_decline: any report of cognitive impairment or memory loss, age 65 and older, excluding AD cases; iqb.mild_cognitive_imp_fh: "Have any of your grandparents, parents, brothers, sisters, aunts, or uncles ever been diagnosed with mild cognitive impairment (MCI)?"; iqb.low_hdl: ever told by a medical professional that your high-density lipoprotein is too low; high_cholesterol: high cholesterol or taking drugs to lower cholesterol.
* Chromosomal position based on genome build GRCh38 coordinates.
** The Alleles column describes the two possible alleles at the variant location, listed in alphabetical order. In this study, the first allele is called the "A allele" and the second the "B allele"; effect (β): the effect size, ln(Odds Ratio [OR]) for binary traits, defined per copy of the B allele.
[Table 2 footnotes: PheWAS of AD/FTD loci and causal inference of significant PheWAS traits in the UKB cohort]
* Chromosomal position based on genome build GRCh38 coordinates.
** The Alleles column describes the two possible alleles at the variant location, listed in alphabetical order. In this study, the first allele is called the "A allele" and the second the "B allele"; effect (β): the effect size, ln(Odds Ratio [OR]) for binary traits, defined per copy of the B allele.
In addition to the APOE variants and rs646776, ABI3 variant rs28394864 was also associated with cardiovascular traits, such as hypertension (p = 1.60 × 10⁻¹³), ischemic heart disease (p = 4.58 × 10⁻⁹), coronary atherosclerosis (p = 6.47 × 10⁻¹⁰), and angina (p = 6.51 × 10⁻⁹). Although CLU and ABCA7 are both implicated in cholesterol metabolism [22], neither the CLU variant nor the ABCA7 variant was strongly associated with metabolic/cardiovascular traits in the PheWAS analyses in either the 23andMe cohort or the UKB cohort, despite the fairly substantial sample sizes for those traits in both cohorts.
The PheWAS results for the UKB cohort are available in S6 Table.
No significant genetic correlation between PheWAS traits and AD
Despite multiple traits being associated with the same individual variants in the PheWAS analysis, there was no significant genetic correlation between these traits (e.g. LDL/HDL cholesterol, Type 2 Diabetes, CAD, CeD, RA, UC, multiple sclerosis, and BMI) and AD at the genome level (S8 Table). As positive controls, IGAP AD [21] and the UKB trait (Neale v1) Illnesses of mother: AD/dementia showed significant genetic correlation with the Jansen et al. AD results [30] (rg = 0.901, p = 3.10 × 10⁻¹³; rg = 0.63, p = 1.09 × 10⁻⁶, respectively).

A causal role of cholesterol on AD revealed by MR analysis

GWAS summary statistics for traits including RA [65], CeD [66], and multiple sclerosis (MS) [67,68] were obtained. When treating AD as the outcome and using p < 5 × 10⁻⁸ to select variants as instrument variables, the MR Egger intercept test suggested directional horizontal pleiotropy for extreme height (Egger intercept p = 0.009), total cholesterol (p = 0.02), RA (p = 0.02), and parents' age at death (Egger intercept p = 0.02). MR Egger analysis suggested that higher levels of the metabolic traits LDL cholesterol (p = 4.7 × 10⁻⁴) and total cholesterol (p = 9.8 × 10⁻⁵) increased the risk of AD, while RA had a protective effect, with having RA reducing the risk of AD (S9 Table).
Conversely, the genetic variant instrument for AD [30] suggested that AD possibly had a causal effect on MS (p = 0.0001 using the inverse variance weighted method) and coronary heart disease (p = 0.003 using the MR Egger method, S9 Table). For FTD, the results may be inconclusive because few SNPs were used in the instrument variable and the SNPs chosen were only suggestively significant from a GWAS with a smaller sample size. These MR tests would still be significant after correcting for the number of traits tested (n = 15, p < 0.05/15 ≈ 0.003). A full list of MR results is given in S9 Table.
Discussion
The PheWAS study showed that the APOE variants defining the ε2/ε3/ε4 alleles, ABI3 variant rs28394864, and rs646776 had significant associations with metabolic/cardiovascular and/or longevity traits. The APOE variants were additionally significantly associated with neurological traits. The HLA-DRB1 variant was associated with immune-related traits. Both the APOE variants and the CLU variant were significantly associated with eye phenotypes. The associations of ABI3 variant rs28394864 with cardiovascular traits (hypertension, ischemic heart disease, coronary atherosclerosis, angina) and asthma are novel findings from this study.
The novel PheWAS associations of the ABI3 variant are of most interest. A rare variant in ABI3 (rs616338, p.Ser209Phe, p = 4.56 × 10⁻¹⁰, OR = 1.43, MAF cases = 0.011, MAF controls = 0.008) was previously reported, and ABI3 is specifically expressed in microglia (S2 Fig; a similar expression pattern in human is seen for other AD genes implicated by human genetics, including TREM2, HLA-DRB1, PLCG2, SORL1, SCIMP, and MS4A6A) and is thought to play a role in microglia-mediated innate immunity in AD [69]. Given its role in immune response, the PheWAS association with asthma is not completely unexpected, and its association with cardiovascular traits might reflect the role of immune dysregulation in those disease processes.
The observed PheWAS associations of the APOE variants with metabolic/cardiovascular traits are not surprising. While vascular and metabolic risk factors such as hypertension, hyperlipidemia/hypercholesterolemia, hyperinsulinemia, and obesity at midlife, diabetes mellitus (DM), and cardiovascular and cerebrovascular diseases (including stroke, clinically silent brain infarcts, and cerebral microvascular lesions) are generally thought to increase the risk of dementia and AD [70][71][72][73], the directional impact of a factor can be age-dependent: for example, hypertension, obesity, and hypercholesterolemia are risk factors at middle age (<65 years) for late-life dementia and AD, but protective late in life (age >75 years) [74]. It may seem odd that AD patients had a lower risk of developing CAD [73], but this is consistent with a meta-analysis [72], which also reported that metabolic syndrome decreases the risk of AD. In the MR analysis in this study, AD increased the risk of CAD (S9 Table), but this result was supported by the MR Egger method only. Taking age into consideration may help better delineate the relationship. Furthermore, several cardiovascular risk factors demonstrated associations with more rapid cognitive decline, as expected; however, it has also been reported that recent or active hypertension and hypercholesterolemia were associated with slower cognitive decline in AD patients [75]. These epidemiology studies suggest a complex interplay between AD and metabolic/cardiovascular risk factors and conditions, and the occasionally contradictory findings may be due to the age of the population, sampling biases, and/or other confounding factors. Nevertheless, the Finnish Geriatric Intervention Study to Prevent Cognitive Impairment and Disability (FINGER) study demonstrated that a multidomain intervention (diet, exercise, cognitive training, vascular risk monitoring) had a beneficial effect on the primary outcome, i.e. change in cognition as measured through a comprehensive neuropsychological test battery (NTB), in an at-risk elderly population (aged 60-77) with a CAIDE (Cardiovascular Risk Factors, Aging and Dementia) Dementia Risk Score of at least 6 points and cognition at the mean level or slightly lower than expected for their age group, suggesting that targeting modifiable vascular and lifestyle-related risk factors could improve or maintain cognitive functioning [76]. The PheWAS analysis suggested that the minor allele of rs7412 (defining the ε2 allele), a known protective allele for AD (OR = 0.74), was also a protective allele for having high cholesterol, low HDL, heart metabolic disease, or CAD. Similarly, the minor allele of rs429358 (defining the ε4 allele), a known risk allele for AD (OR = 2.17), was also a risk allele for having high cholesterol, low HDL, heart metabolic disease, or CAD. Our MR analysis demonstrated that LDL and total cholesterol had a causal relationship to the development of AD using MR Egger. This MR result is, however, sensitive to the MR methods used, as other methods such as weighted mode, weighted median, or simple mode (not prespecified analyses) did not provide evidence or provided only suggestive evidence for the causal effect of LDL on AD.
A recent MR analysis of 24 potentially modifiable risk factors [77] concluded that genetically predicted cardiometabolic factors were not associated with AD, as there was no evidence of a causal relationship after excluding one pleiotropic genetic variant (not disclosed in the publication) near the APOE gene (also near the APOC1 and TOMM40 genes). The evidence we obtained was far weaker than that reported by Larsson et al. for all variants [77]. Although a few SNPs drove the causal evidence in the single-variant analysis, leave-one-out analysis did not differ substantially from the analysis including all variants for the LDL trait, except for rs7412 (S1 Fig). This study opted to report the findings using the inverse variance weighted method (when the Egger intercept is not significant), as also adopted by Howard et al. [78], where a minimum of 30 SNPs in the instrument variable was also imposed, or the MR Egger regression results (when the intercept is significant). We did not filter out analyses with fewer than 30 variants. Both compromises are limitations of this study, and those results should therefore be interpreted with caution. In addition, both the IVW and MR Egger methods do not protect against violation of the MR assumption when the pleiotropic effects act via a confounder of the exposure-outcome association [49].
The observed PheWAS associations of the rs646776 variant with metabolic/cardiovascular traits are also not surprising. SNP rs646776 was reported to be robustly associated with low-density lipoprotein cholesterol (LDL-C, p = 3 × 10⁻²⁹), with each copy of the minor allele decreasing LDL cholesterol concentrations by ~5-8 mg/dl [79]. The association was strengthened in a meta-analysis of ~100,000 individuals of European descent for LDL-C (p = 5 × 10⁻¹⁶⁹) and was also detected for total cholesterol (p = 7 × 10⁻¹³⁰) [80]. However, which gene is the causative gene for the rs646776 effect is less clear, although it was selected for inclusion in the PheWAS analysis based on its association with plasma progranulin levels. Rs646776 at the 1p13 locus was also strongly associated with transcript levels of three neighboring genes: sortilin (SORT1) (p = 3 × 10⁻²⁶), cadherin EGF LAG seven-pass G-type receptor 2 (CELSR2) (p = 2 × 10⁻¹²), and proline and serine rich coiled-coil 1 (PSRC1) (p = 3 × 10⁻¹²) [79]. The conditional analysis suggested that the SORT1 eQTL effect might be the dominant effect [79]. Rs599839, a SNP in LD with rs646776, was also reported to be associated with CAD [81]; the minor allele conferring a lower level of LDL cholesterol also conferred a lower risk of CAD. Rs646776 was also identified in a bivariate analysis to be associated with circulating IGF-I and IGF-binding protein-3 (IGFBP-3) (p = 6.87 × 10⁻⁹) in a meta-analysis of 21 studies including 30,884 adults of European ancestry [82]. The growth hormone/insulin-like growth factor (IGF) axis can be manipulated in animal models to promote longevity, and IGF-related proteins including IGF-I and IGFBP-3 have also been implicated in the risk of human diseases including cardiovascular diseases and diabetes. This is particularly interesting given the observation that rs646776 is associated with longevity in the PheWAS analysis.
It is surprising and puzzling to see the effect of AD variants on multiple eye phenotypes, especially myopia, which has onset in early childhood or the teens. The association with age-related macular degeneration (AMD) was reported previously [83]. The AMD association is interesting because a histopathological hallmark of AMD is amyloid-β (Aβ) in optic nerve drusen [84]. Drusen of the macula are very small yellow and white spots that appear in one of the layers of the retina, named Bruch's membrane, and are remnant nondegradable proteins and lipids (lipofuscin); they are the earliest visible sign of dry macular degeneration. In addition to the amyloid phenotype, AMD and AD also share other common histologic features, such as vitronectin accumulation, and immunologic features, such as increased oxidative stress and apolipoprotein and complement activation pathways [85]. The common etiopathogenetic and morphological manifestations of AD and age-related eye diseases in amyloid genesis may have broader implications for understanding the disease mechanism and identifying new biomarkers and treatments [86]. A recent study showed that the soft drusen area in amyloid-positive patients was significantly larger than that in amyloid-negative patients [87]. Ocular and visual information processing deficits are other possible biomarkers for AD [88]. Recently it was also reported that a thinner retinal nerve fiber layer is associated with an increased risk of dementia, including AD, suggesting that retinal neurodegeneration may serve as a preclinical biomarker for dementia [89]. The AD risk variant rs429358 in our PheWAS results had a protective effect for AMD and blindness (p = 9.6 × 10⁻⁵, FDR = 0.004), perhaps reflecting the equilibrium of Aβ in brain vs. retina, like the situation between brain vs. CSF. A variety of other visual problems reported in patients with AD have been reviewed in detail [90], including loss of visual acuity (VA), color vision, and visual fields; changes in pupillary response to mydriatics; defects in fixation and in smooth and saccadic eye movements; changes in contrast sensitivity and in visual evoked potentials (VEP); and disturbances of complex visual functions, though these have not been studied as a risk factor for AD or an outcome of having AD. In the MR analysis, we could not directly test the causal relationship with AMD, despite the availability of a GWAS with a large sample size, because the standard error of the odds ratio was not reported in the paper [83].
Subjects with the AD risk variant rs11136000 in the CLU gene had a higher chance of being nearsighted, while subjects with the AD risk variant rs429358 in the APOE gene had a lower chance of being nearsighted. It was reported that wearing reading glasses correlated significantly with a high mini-mental state examination score for the visually impaired (MMSE-blind) after adjustment for sex and age (OR = 2.14, 95% CI = 1.16-3.97, p = 0.016), but reached only borderline significance after adjustment for education [91]. There was a trend toward correlation between myopia and better MMSE-blind (r = -0.123, p = 0.09, Pearson correlation) [91]. On the other hand, myopia may be a surrogate phenotype for intelligence (or education), as a genetic correlation between myopia and intelligence was shown in a small cohort of 1,500 subjects (p < 0.01) [92]. Larsson et al. suggested that genetically predicted educational attainment was significantly associated with AD per year of education completed (OR = 0.89, p = 2.4 × 10⁻⁶) and per unit increase in log odds of having completed college/university (OR = 0.74, p = 8.0 × 10⁻⁵), while intelligence had a suggestive association with AD (OR = 0.73, p = 0.01) [77]. Our MR analysis did not provide evidence for a causal effect of the myopia phenotype on AD.
Furthermore, although the genetic variants (rs429358 and rs11136000) are associated with multiple phenotypes, the associations are not necessarily independent of each other. In fact, the MR Egger intercept test did not support an independent relationship except for height and a few other traits. Overall, the relationships between traits and their interpretation seem to be complex and require further examination.
Other limitations of our study design also merit comment. The sample size for PheWAS varies from trait to trait depending on the prevalence of the trait and the availability of data. For example, the cohort size for AD in the 23andMe database was not large in 2015 (~640 cases and ~158K controls) when the PheWAS analysis for the 23andMe cohort was performed, which is a limitation of this study, especially for replicating the association with AD. This may explain why some of the known SNP associations with AD were not replicated or had only nominally significant associations in the 23andMe cohort. Furthermore, FTD is not a self-reported question collected in the 23andMe database and therefore could not be tested in the PheWAS analysis; even if it had been, the sample size would have been smaller than that for AD based on the population prevalence. Similarly, the cohorts for CD (n ~3,600), UC (n ~6,200), bipolar disorder (~9,700), and schizophrenia (~700) were limited in size, whereas the cohort sizes for other psychiatric disorders (e.g. depression, anxiety, and panic) were sufficiently large (>250K cases). Because the 23andMe PheWAS cohort size for AD used in this study was limited, only the APOE variants, the loci with the largest effect size, were confirmed to be associated with neurological traits. The sample size for a specific trait should be taken into consideration when interpreting the PheWAS results. PheWAS typically uses "light" phenotyping (based on self-report, as in 23andMe and the surveys deployed by UKB, or based on diagnostic ICD codes or medication/procedure usage patterns); the stringency of the phenotype is certainly not as good as a clinically ascertained phenotype, but the tradeoff is the power to survey a large number of diverse phenotypes within a single study.
The nominal causal effects between immune-related traits (except multiple sclerosis and rheumatoid arthritis) and AD/FTD would have been insignificant after correcting for the expanded list of diseases and risk factors from MR Base that were tested. Some of the instrument variables used consisted of a small number of SNPs and may have weakened the real causal effect, if one exists. The observation does not seem to be purely by chance, especially in light of the report on immune-related enrichment of FTD, which found up to 270-fold genetic enrichment between FTD and RA, up to 160-fold genetic enrichment between FTD and UC, and up to 175-fold genetic enrichment between FTD and CeD. Overall, the immune overlap seems to be common to both FTD and AD at the genome level (represented by genome-wide significant SNPs used as instrument variables for AD and other diseases/risk factors, except for FTD, where a p-value threshold of 5 × 10⁻⁶ was used because of the smaller GWAS sample size), while there could still be specificity of neuroinflammation for risk variants in CR1, CD33, CLU, ABCA7, TREM2, SORL1, MS4A6A, SPPL2A, SCIMP, PLCG2, ABI3, and HLA-DRB1. Different MR analysis methods have different assumptions (which in reality do not always hold, or even rarely hold) and power, so the inference of a causal effect may be inconclusive or only suggestive unless the causal effect size is so large that most methods give unequivocally concordant results. Both the IVW and MR Egger methods used in this study are vulnerable to false positives when the exposure and outcome traits are both affected by a heritable confounder [49]. Different exposure or outcome GWAS may also vary by study sample size and by the number of variants with summary association statistics available (for outcome GWAS, this may limit the ability to leverage proxy SNPs in LD (default r² ≥ 0.8) with the set of SNPs in the instrument variable), affecting the strength of the instrument variable and the power of the MR analysis. Future reanalysis will be warranted to interrogate the causal relationships when studies with larger sample sizes and more complete summary association statistics become available.
Supporting information S1
MICROSCOPIC STRUCTURE OF A DECREASING SHOCK FOR THE ASYMMETRIC K-STEP EXCLUSION PROCESS
The asymmetric k-step exclusion processes are the simplest interacting particle systems whose hydrodynamic equation may exhibit both increasing and decreasing entropic shocks under Euler scaling. We prove that, under a Riemann initial condition with right density zero and adequate left density, the rightmost particle identifies the decreasing shock microscopically.
Introduction
The asymmetric k-step exclusion process is a conservative attractive process on X = {0, 1}^Z that generalizes simple exclusion (a general class of processes of this type was introduced in [G]). The hydrodynamic behavior of these processes was studied in [BGRS] (for a specific review see also [FGRS]). One of the interesting features of these processes is that their macroscopic flux function is neither convex nor concave, leading to both increasing and decreasing entropic shock solutions of the hydrodynamic equation. In this note we investigate the microscopic counterpart of a decreasing shock solution in the asymmetric nearest neighbor case, under a Riemann initial condition with right density zero. Indeed, recall that the nearest neighbor simple exclusion process with an asymmetry to the right has a concave flux function, and its hydrodynamic equation can exhibit (only) an increasing shock (see for instance [KL], chapters VIII and IX). The microscopic structure of this shock was analyzed in a series of papers by P. Ferrari et al. (the first ones were [FKS] and [F]; see [L99] for a unified presentation and a complete reference list), and in more general settings in [R] and [S]. These authors proved that the shock was characterized by the evolution of a second class particle, which moved at the shock speed and followed the characteristic lines and shocks of the hydrodynamic equation; moreover, under a Riemann initial condition with densities λ (resp. ρ) to the left (resp. right) of the origin, the process seen by this second class particle possessed an invariant measure with asymptotic densities λ (resp. ρ) to the left (resp. right) of the origin. Unfortunately, we cannot adapt the techniques developed in those papers to k-step exclusion, because on the one hand jumps are not restricted to stricto sensu nearest-neighbor sites, and on the other hand both [R] and [S] rely on the concavity of the flux function. We point out that, following along the same lines as we do here for k-step exclusion, one can obtain the microscopic structure of the (increasing) shock for the finite range non-nearest neighbor asymmetric exclusion process with a (0, ρ) initial profile. We consider an asymmetric process (with probability p, resp. q, to jump to the right, resp. left) starting with an initial measure µ_{λ,0}: i.e. a product measure with density λ to the left of (and at) the origin and 0 to its right. Our candidate for a microscopic object which identifies the shock is the rightmost particle (cf. [DKPS], where the asymmetric simple exclusion process was studied in the case of an increasing shock with left density 0; there, the leftmost particle identified the shock). We prove that the rightmost particle evolves at speed v_shock, and that the process seen by this particle has an invariant measure with asymptotic density λ to the left of the origin. We illustrate our method for the totally asymmetric 2-step exclusion process. When λ ∈ (0, 1/4) this corresponds to an initial shock profile for the hydrodynamic equation. The shock (discontinuity) at zero propagates at a speed v_shock = (p − q)(1 + λ − 2λ²) (see e.g. [FGRS]). Comments will be made to show how to extend the result to the asymmetric nearest neighbor case. We present our results in Section 2, and prove them in Section 3.
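As a consistency check not spelled out in this note, the quoted shock speed follows from the Rankine-Hugoniot condition, assuming the macroscopic flux of the asymmetric 2-step exclusion process is G(ρ) = (p − q)ρ(1 − ρ)(1 + 2ρ) (one swap term plus two push terms, each moving a particle across a given bond under ν_ρ):

\[
  v_{\mathrm{shock}} \;=\; \frac{G(\lambda) - G(0)}{\lambda - 0}
  \;=\; (p-q)(1-\lambda)(1+2\lambda)
  \;=\; (p-q)\bigl(1 + \lambda - 2\lambda^{2}\bigr).
\]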
Remark: For the k-step asymmetric case with k > 2, the flux function starts out convex; the shock speed and the allowed range of densities for a decreasing entropic shock are then determined by the convex envelope of the initial part of the flux function. With suitable changes, however, our argument remains valid.
Notation and results
We denote by S(t) the evolution semigroup of the asymmetric two-step exclusion process (η_t)_{t≥0} on X = {0, 1}^Z, with generator L acting on all bounded cylinder functions f on X, where p = 1 − q ∈ [0, 1] \ {1/2}; η^{x,y} is the configuration η in which the states of sites x and y have been interchanged, and η^{x,y,z} is the configuration η in which the states of sites x, y and z have been shifted; i.e. η^{x,y}(z) = η(z) when z ≠ x, y; η^{x,y}(x) = η(y); η^{x,y}(y) = η(x), and η^{x,y,z}(w) = η(w) when w ≠ x, y, z; η^{x,y,z}(y) = η(x); η^{x,y,z}(z) = η(y); η^{x,y,z}(x) = η(z). Notice that we chose a 'pushing interpretation' of the evolution (a particle may jump to its neighboring site, possibly pushing a particle standing there, provided the next neighboring site has a vacancy), so that particles always keep the same respective order. Like the simple exclusion process, the k-step exclusion process is attractive (with respect to the usual order on configurations, i.e. for η₁, η₂ ∈ X, η₁ ≤ η₂ means that η₁(x) ≤ η₂(x) for all x ∈ Z), and has a one-parameter family {ν_α, α ∈ [0, 1]} of extremal invariant and translation invariant measures, where ν_α is the Bernoulli product measure on X with density α ∈ [0, 1], i.e. with marginal ν_α(η(x) = 1) = α for all x ∈ Z.
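A generator of the following form, with one-step jumps at rates p and q and pushing two-step jumps at rates p² and q², is consistent with the definitions just given and with the flux behind v_shock; the rates, and hence the display itself, are our reconstruction (a sketch), not a formula quoted from [G] or [BGRS].

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Sketch of a 2-step exclusion generator in the pushing interpretation:
% one-step jumps at rates p (right) and q (left); pushing two-step jumps
% at rates p^2 and q^2, written with the exchanged/shifted configurations
% defined in the text.
\begin{align*}
Lf(\eta) = \sum_{x\in\mathbb{Z}}\Big[
  & p\,\eta(x)\bigl(1-\eta(x+1)\bigr)\bigl(f(\eta^{x,x+1})-f(\eta)\bigr)\\
  & + p^{2}\,\eta(x)\,\eta(x+1)\bigl(1-\eta(x+2)\bigr)\bigl(f(\eta^{x,x+1,x+2})-f(\eta)\bigr)\\
  & + q\,\eta(x)\bigl(1-\eta(x-1)\bigr)\bigl(f(\eta^{x,x-1})-f(\eta)\bigr)\\
  & + q^{2}\,\eta(x)\,\eta(x-1)\bigl(1-\eta(x-2)\bigr)\bigl(f(\eta^{x,x-1,x-2})-f(\eta)\bigr)\Big].
\end{align*}
\end{document}

With total asymmetry (p = 1, q = 0) only the first two terms survive, and both one-step and pushing jumps occur at rate 1, which is the situation treated below.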
In the sequel we set p = 1 (total asymmetry); appropriate comments will be made for the 1/2 < p < 1 case (the case 0 ≤ p < 1/2 being symmetric).
In this note we consider the totally asymmetric 2-step exclusion process with initial measure μ_{λ,0}. Due to the pushing interpretation of the dynamics, it has a rightmost particle, of initial position Z_0 = Z(η), the distribution G of |Z_0| being geometric of mean (1/λ) − 1, and whose position at time t we denote by Z_t. The 2-step exclusion induces a process seen by the rightmost particle, (η̂_t)_{t≥0} on a state space X̂, with initial measure μ̂_{λ,0}: a configuration η̂ in X̂ is obtained from a configuration η on X distributed according to μ_{λ,0} by shifting it so that the rightmost particle is at the origin; thus η̂(0) = 1, η̂(x) = 0 for all x > 0, and {η̂(x), x < 0} is distributed according to a product measure with density λ. Note that the process (η̂_t)_{t≥0} has semigroup Ŝ(t) and generator L̂, in which the last term in the definition of L is equal to zero, since the process is supported on configurations with η̂(1) = 0, and, for the same reason, in the two previous terms 1 − η̂(1) = 1. We also observe that the law of the configuration seen from the rightmost particle at time t, namely μ_{λ,0} S(t) recentered by τ_{Z_t}, equals μ̂_{λ,0} Ŝ(t).
For s ≥ 0, define φ_{0,−1} : X̂ → N by φ_{0,−1}(η̂_s) = 1 + η̂_s(−1). It is the flux of holes crossing the bond between 0 and −1 at time s; it is also the rate at which the rightmost particle jumps to the right at time s. Indeed, Z_s is the sum of Z_0 and of the net change in the position of the rightmost particle up to time s, and this net change can be obtained as a functional of the process (X̂, μ̂_{λ,0}, Ŝ(t)) (see the sketch after Corollary 2.1 below). In the next section we will prove, using the previous notation:

Theorem 2.1. lim_{t→∞} E_{μ_{λ,0}}(Z_t/t) = v_shock.

Since the set of all probability measures on the compact set X̂ is compact, there exists an increasing sequence of times (t_n)_{n≥0}, t_n → ∞, such that the Cesàro averages t_n^{−1} ∫_0^{t_n} μ̂_{λ,0} Ŝ(s) ds converge weakly to a limit μ̂, a stationary measure for the (η̂_t)_{t≥0} process (see [L85], Proposition I.1.8 (e)). As a consequence of Theorem 2.1, we obtain that μ̂ (which has density 0 to the right of the origin) is asymptotically equal, in the Cesàro sense, to ν_λ far to the left of the origin:

Corollary 2.1. Let μ̂ be any invariant measure, obtained as above, for the Markov process with semigroup Ŝ(t) starting from the initial measure μ̂_{λ,0}. If γ is any Cesàro (weak) limit of the translates τ_{−n} μ̂ as n → ∞, then γ = ν_λ.
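A minimal sketch of the functional representation of Z_t alluded to above (our reconstruction; the martingale decomposition is the standard Dynkin formula and is not quoted from the note):

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% In the totally asymmetric case the rightmost particle only moves to the
% right, by single steps, at instantaneous rate phi_{0,-1}(\hat\eta_s).
% Dynkin's formula then yields
\[
Z_t = Z_0 + \int_0^t \phi_{0,-1}(\hat\eta_s)\,ds + M_t,
\]
% with (M_t) a mean-zero martingale, so that, taking expectations,
\[
E_{\mu_{\lambda,0}}\!\Bigl(\frac{Z_t}{t}\Bigr)
  = \frac{E(Z_0)}{t}
  + \frac{1}{t}\int_0^t
    E_{\hat\mu_{\lambda,0}\hat S(s)}\bigl(\phi_{0,-1}\bigr)\,ds.
\]
% Along the Cesaro-convergent subsequence (t_n) this links the asymptotic
% velocity of the rightmost particle to E_{\hat\mu}(\phi_{0,-1}).
\end{document}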
Proofs
The k-step exclusion process is attractive, that is, the coordinatewise partial order between configurations is preserved by the k-step evolution. The process seen by the rightmost particle does not have this property. On the other hand, it preserves a partial order between configurations which compares, in an appropriate way, the numbers of holes between successive particles. We now introduce this partial order on configurations in X̂, which will play a crucial role in the proof of Theorem 2.1.
We consider η ∈ X̂ ⊂ X which either have infinitely many particles to the right and to the left of the origin, or infinitely many particles to the left of the origin and no particles to its right. We label particles as follows: if there are infinitely many particles to the right as well as to the left of the origin, particles are labelled by their natural ordering on the line, with X_0(η) = 0, where X_i(η) denotes the position of the i-th particle. Let γ_i(η) be the number of holes between the (i+1)-st and the i-th particle, i.e. γ_i(η) = X_{i+1}(η) − X_i(η) − 1. If there are no particles to the right of the origin, then we let γ_0(η) = +∞ and X_n(η) = γ_n(η) = ∞ for all n ≥ 1. It is easy to show that γ_i is a continuous function of η at all η such that γ_i(η) < ∞. Given η₁, η₂ ∈ X̂, we write η₁ ⪯ η₂ when γ_i(η₁) ≤ γ_i(η₂) for all relevant indices i, namely:

(a) if η₁ and η₂ have infinitely many particles to the right and to the left of the origin, for all i ∈ Z;

(b) if X_j(η₂) = ∞ for all j ≥ 1, and η₁ has infinitely many particles to the right and to the left of the origin, for all i < 0;

(c) if neither η₁ nor η₂ has particles to the right of the origin, for all i < 0.

This order extends to probability measures: denoting by M the set of bounded monotone (w.r.t. ⪯) functions on X̂, we write μ₁ ⪯ μ₂ for two probability measures on X̂ when ∫ f dμ₁ ≤ ∫ f dμ₂ for every increasing f ∈ M. Then, since the distribution of {η̂(x), x < 0} under μ̂_{λ,0} is product with density λ, we have μ̂_{λ,0} ⪰ ν_λ, which means that, for any increasing f ∈ M, ∫ f dμ̂_{λ,0} ≥ ∫ f dν_λ. Moreover, if η₁, η₂ ∈ X̂ and η₁ ⪯ η₂, then L̂(γ_i(η₁)) ≤ L̂(γ_i(η₂)) for all relevant i. It follows that if f ∈ M is increasing on X̂ then so is Ŝ(t)f for all t > 0, since:

1) Ŝ(t) is defined on X̂ so that all configurations have a particle at the origin, which remains at the origin because of the tagged particle evolution of Ŝ(t).
2) When one compares two configurations (from cases (b) and (c) in the definition of ⪯), the fact that there is always a particle at the origin implies that the labelling of the γ's is unchanged by the evolution for both configurations.
In other words, Ŝ(t) is an attractive semigroup with respect to the partial order we have introduced; hence μ̂_{λ,0} Ŝ(t) ⪰ ν_λ for all t ≥ 0, so that, passing to the Cesàro limit defining μ̂, we obtain μ̂ ⪰ ν_λ.
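To make the partial order concrete, here is a toy comparison of two configurations of type (c), i.e. with no particles to the right of the origin (the configurations are our own illustration, not taken from the note):

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% eta_1 has particles at {0,-1,-3,-4,-6,...} (gap pattern 0,1,0,1,...),
% eta_2 has particles at {0,-2,-5,-8,...}    (gap pattern 1,2,2,2,...).
\[
\gamma_{-1}(\eta_1)=X_0-X_{-1}-1=0,\qquad
\gamma_{-2}(\eta_1)=X_{-1}-X_{-2}-1=1,
\]
\[
\gamma_{-1}(\eta_2)=1,\qquad \gamma_{-2}(\eta_2)=2,
\]
% and likewise gamma_i(eta_1) <= gamma_i(eta_2) for every i < 0, hence
% eta_1 precedes eta_2 in the order: eta_2 has more holes between
% successive particles, i.e. a lower particle density left of the origin.
\end{document}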
Remark:
The attractivity of Ŝ(t) can also be seen by using a particle-to-particle coupling, described as follows: denote by η_t and ξ_t the processes starting with initial measures μ̂_{λ,0} and ν_λ respectively, and couple the two processes by requiring that the particles in η_t and ξ_t with the same label i ∈ Z use the same clock for jumps whenever X_i(η_t) < ∞ and X_i(ξ_t) < ∞. Even though we have sketched the attractivity argument for the totally asymmetric case, we point out that it extends to the asymmetric case straightforwardly.
Proof of Theorem 2.1.
Let f be a bounded cylinder function depending on the coordinates in {0, ..., m}, and set f_n = f ∘ τ_{−n}. Since μ̂ is an invariant measure for Ŝ(t), we have ∫ L̂ f_n dμ̂ = 0 for every n. Because for all n > m + 2, f_n(τ_1 η^{0,1}) = f_n(τ_1 η^{−1,0,1}) = f_n(τ_1 η^{0,1,2}) = f_{n−1}(η), and as L̂(f_n) = L(f_n) for n ≥ m + 2, where we have used the commutativity of L and τ in the next-to-last step, the invariance of μ̂ passes to any weak limit μ_∞ of the Cesàro averages n^{−1} Σ_{j=1}^{n} τ_{−j} μ̂. This proves that μ_∞ is an invariant measure for the semigroup S(t). Since μ_∞ is a translation invariant measure by definition, μ_∞ is a convex combination of product measures (see [G], Theorem 1.3, which is a slight adaptation of the corresponding result for simple exclusion, see [L85], Theorem VIII.3.9 (a)). That is, μ_∞ = ∫_0^1 ν_α dπ(α), where π is a measure on [0, 1].

Now we want to show that π((λ, 1]) = 0. Let η ∈ X̂. Recall that for i > 0, X_{−i}(η) denotes the location of the i-th particle of η to the left of the origin. For all n < 0 define l_n(η) = max{i ≥ 0 : X_{−i}(η) ≥ n}; the random variable l_n(η) counts the number of particles of η in [n, 0] ∩ Z. Now, since μ̂ ⪰ ν_λ, there exists a coupling measure μ̄ on {(η, ξ) ∈ X̂ × X̂} with marginals μ̂ and ν_λ such that μ̄({γ_i(η) ≥ γ_i(ξ) : i < 0}) = 1. From this it follows that l_n(η) ≤ l_n(ξ) for all n < 0, μ̄-almost surely.

Define A = {η ∈ X̂ : liminf_{k→∞} k^{−1} Σ_{j=1}^{k} η(−j) > λ} and let f(η) = η(−1). Then A is measurable with respect to the left tail sigma-algebra of {η(i) : i ∈ Z}, and τ_{−j} A = A for all j ∈ N. We have l_{−k}(η) ≤ l_{−k}(ξ) for all k ≥ 1, μ̄-almost surely; therefore 1_A(η) ≤ 1_A(ξ), μ̄-almost surely. Taking expectations and the limit in k, we get μ̂(A) ≤ ν_λ(A). Since ν_λ(A) = 0 (by the strong law of large numbers, the left Cesàro density equals λ, ν_λ-a.s.), and since A is shift invariant, μ_∞(A) = 0 as well; on the other hand, μ_∞(A) = ∫_0^1 ν_α(A) dπ(α) = π((λ, 1]). Hence π((λ, 1]) = 0, and we conclude that μ_∞ is a convex combination of product measures with density at most λ.

Now define, for i < 0, φ_{i,i−1}(η) as the flux of holes jumping across the (−i)-th particle to the left of the origin for the (η̂_t)_{t≥0} process (a factor 2 arises because a hole in front of the (−i)-th particle can jump either in between the (−i)-th and (−i−1)-st particles, or behind the (−i−1)-st particle, at the same rate). By an elementary computation, E_{ν_λ}(φ_{i,i−1}(η)) = (1 − λ)(1 + 2λ) = 1 + λ − 2λ² = v_shock, for all i < 0. Since μ̂ is an invariant measure for Ŝ(t) and φ_{i,i−1} − φ_{i−1,i−2} = L̂(γ_{i−1}), we have E_μ̂(φ_{0,−1}) = E_μ̂(φ_{n,n−1}) for all n < 0. This implies that E_μ̂(φ_{0,−1}) = ∫_0^1 (1 + α − 2α²) dπ(α) ≤ 1 + λ − 2λ² = v_shock; we have used, in the last step, the fact that the shock speed is a monotone increasing function of the particle density α for α ∈ (0, 1/4). Combining this with the reverse inequality E_μ̂(φ_{0,−1}) ≥ v_shock, we conclude that E_μ̂(φ_{0,−1}) = v_shock = lim_{t→∞} E_{μ_{λ,0}}(Z_t/t), thus proving Theorem 2.1. Notice that if π([0, λ)) > 0 then the inequality above would be strict, a contradiction. Therefore π(·) is the Dirac measure concentrated on λ, i.e. μ_∞ = ν_λ.

Proof of Corollary 2.1.
Since we obtained the result of the corollary for μ_∞ in the proof of the theorem, and the assumptions on μ̂ that we needed are satisfied for any μ̂ considered in the corollary, the result follows from the previous proof.
Intersections between patient-provider communication and antenatal anxiety in a public healthcare setting in Pakistan.
This study explores pregnant women's and healthcare providers' perspectives on the role of patient-provider communication in experiences of antenatal anxiety within a low-resource setting. In 2017-18, we consecutively sampled pregnant women (n = 19) with at least mild anxiety and purposively sampled antenatal care providers (n = 10) from a public hospital in Punjab Province, Pakistan. We then conducted in-depth interviews and thematically coded them with a combination of inductive and deductive coding methodologies. We found that patients expressed a desire for warm, empathetic communication from providers who demonstrate respect, attentiveness, and a shared lived experience. Providers revealed an awareness that their heavy caseloads, high stress levels, and discourteous tones adversely influenced communication with pregnant women and may exacerbate their anxieties, but also reported that compassionately addressing women's concerns, providing financial problem-solving and/or assistance, and moderating conflicting healthcare desires between patients and their families could alleviate anxiety in pregnant women. Patients reported feelings of anxiety stemming from a belief that they received lower quality communication from antenatal providers at public hospitals than patients received from antenatal providers at private hospitals, an experience that they partially attributed to their low socioeconomic status. Meanwhile, some providers disclosed potentially stigmatizing views of women from particular sociocultural backgrounds or low socioeconomic status, including perceptions that appeared to shape communication with these patients in antenatal care encounters. Our findings provide preliminary evidence that communication between pregnant women and antenatal providers that is warm, normalizes patient fears, and integrates patients' interpersonal and financial considerations can mitigate pregnant women's experiences of anxiety and reduce barriers to accessing antenatal care in Pakistan's public healthcare facilities.
Introduction
Providers in Pakistan often prioritize biomedical considerations over lifestyle factors when diagnosing and interacting with patients [28]. While studies examining antenatal anxiety within the context of the patient-provider relationship in Pakistan are lacking, research from the United Kingdom has established the importance of the patient-provider relationship for women's engagement in the antenatal care process, control over their care trajectories, and understanding of information [29,30].
A study based in the U.S. examined the relationship between patient-provider communication, antenatal anxiety, and self-care in pregnancy [31]. Pregnant women's perceptions of communication, collaboration, and empowerment in relationships with their midwives were associated with lower anxiety and positive health behaviors [31]. These results suggest that provider relationships can improve patient health through the provision of emotional support [31], a result echoed by other studies that also emphasize the role of patient-provider communication in influencing information exchange and the creation of a positive interpersonal relationship [29,32]. Moreover, in the aforementioned study, lower antenatal anxiety during mid-to-late gestation mediated associations between patient-provider communication and positive health behaviors [31].
Evidence from LMICs is unclear about how perinatal anxiety is influenced by or reproduced through patient-provider relationships that are marked by differences in communication styles, sociocultural backgrounds, and treatment goals. It is possible that, in the Pakistani context, provider-patient communication could be a key factor in either mitigating or exacerbating anxiety and influencing antenatal care engagement among pregnant women. Therefore, our study expands on barriers and facilitators to patient-provider communication in a low-resource setting by exploring intersections between patient-provider communication and anxiety in pregnant women. Our aim was to understand how communication content and processes between pregnant women experiencing anxiety and antenatal providers in a public antenatal health care setting in Pakistan influence pregnant women's experiences of anxiety.
Study setting
This study was conducted as part of qualitative research that aimed to inform the design of a psychological intervention to reduce antenatal anxiety [33]. Data were collected from providers and pregnant mothers in Rawalpindi District, Punjab Province, Pakistan. Pakistan is classified as an LMIC with a total population of 212 million [34]. The total fertility rate of Punjab Province is 3.4 live births per woman, which is close to Pakistan's total fertility rate of 3.6 live births per woman as of 2017-18 [18]. Pakistan's 2015 Gini index value of 33.5 reflects high levels of socioeconomic inequality, unemployment, and poverty [16]. Additional health indicators for Pakistan are included in Table 1.
Study population and sampling
All participants were recruited from a public tertiary care teaching hospital in Rawalpindi, Pakistan. This hospital provides free antenatal services to an average of 250 primarily low-income patients daily. Sample size was determined based on reaching data saturation on the interview topics related to the original study aims [35]. Providers were recruited based on their availability using purposive sampling to ensure inclusion of different types of antenatal health workers [36] (i.e., doctors, nurses, and midwives) from the study hospital's Obstetrics and Gynecology Department. Pregnant women were approached consecutively during an antenatal visit at the outpatient clinic. Potential patient participants were then screened using an Urdu version of the Hospital Anxiety and Depression Scale (HADS) [37][38][39] and the Structured Clinical Interview for DSM-5 (SCID) [40]. Adult women at ≤20 weeks of pregnancy who scored ≥8 on the anxiety scale of the HADS (indicating at least mild anxiety), did not have clinical depression (measured with the SCID) or serious medical conditions, spoke Urdu, and lived within 20 kilometers of the hospital were eligible for participation. Pregnant women and providers were recruited until data saturation was reached for the main study, which resulted in the sample size that was also used in the secondary data analysis for the present study. Thirty-nine pregnant women who were screened for anxiety and depression met the inclusion criteria, out of which 19 participated. Twelve providers were approached and two declined due to time constraints. Participants were not provided incentives for participation. The research received ethical approval from the Human Development Research Foundation and the Johns Hopkins Bloomberg School of Public Health Institutional Review Board. Participants who screened positive for severe anxiety, depression, or suicidality were referred for further assessment and/or psychiatric care.
Data collection
A team of four trained female interviewers conducted and audio-recorded in-depth interviews with 19 female patients and 10 female providers between September 2017 and August 2018. Pregnant women were interviewed by female interviewers so that they might feel more comfortable discussing matters concerning pregnancy and antenatal mental health. Interviewers were Pakistani research assistants who had graduate degrees in psychology, two years of experience conducting field interviews, and fluency in Urdu. A clinical psychologist provided a one-day training to the interviewers on how to provide informed consent, recruit participants, maintain confidentiality, and use non-stigmatized language. Interview guides developed for patients and providers were pilot-tested after translation to Urdu and back-translation to English to ensure quality and accuracy. Topics in interview guides included, but were not limited to: causes that women attributed to their experiences of antenatal anxiety, physical and emotional symptoms of antenatal anxiety, the impact of antenatal anxiety on women's daily functioning, and coping strategies used for antenatal anxiety. To ensure privacy, interviews were conducted in private spaces. Written informed consent was obtained from both providers and pregnant women. If a participant was unable to read or write, interviewers explained the study to the woman and to a family member who signed the consent form on her behalf. The interviews lasted about half an hour, ranging from 14 to 75 minutes (mean = 36 minutes).
Data analysis
We used the transcribed and English-translated data to carry out qualitative, thematic analysis of the in-depth interviews. The qualitative analysis was guided by a constructivist epistemology in which we approached coding by considering the data as a reflection of participants' subjective realities and social constructions, whether these were unique or shared across participants [41,42]. A team of two researchers coded patient and provider interviews with Dedoose software [43], using a combination of inductive and deductive coding detailed in a codebook [44]. To generate the codebook, both coders separately coded an initial subset of patient and provider interviews, after which they created initial codes based on emergent themes. Then, each coder coded half of all transcripts independently and met periodically to debrief regarding emerging codes and to adapt the codebook to capture commonalities and differences across the two types of participants. Memos were created primarily in the third phase of coding to document emerging overall themes. Examples of the final set of themes that emerged as relevant to patient-provider communication and antenatal anxiety included rapport between patients and providers, barriers to receiving antenatal care, financial constraints on antenatal care, time constraints affecting provision of antenatal care, and upholding of social norms by providers.
Results
The sample included 19 pregnant married women (Table 2), ranging from 18-37 years old (mean = 26 years) and having between 5-16 years of education (mean = 10 years). Women's number of children ranged from 1-4 with nine women being primigravida. The 10 providers included obstetrician/gynecologist physicians (n = 4) and non-physician antenatal providers (n = 6). Providers ranged in age from 24-55 years (mean = 40 years), had 1-30 years of experience (mean = 16 years), and worked an average of 7 hours per day (Table 3). We have organized the results by the following predetermined categories: pregnant women's perspectives on communication with antenatal providers, antenatal providers' perspectives on communication with anxious pregnant patients, integrated perspectives from patients and providers on communication during the antenatal period, and communication processes and content used by providers and their antenatal patients. All names used are pseudonyms.
Pregnant women's perspectives on communication with antenatal providers
In their accounts of antenatal anxiety, pregnant women expressed their desire for specific factors that would enhance communication with providers, particularly warmth, approachability, respect and privacy as well as trust within the patient-provider dyad. Desired patient-provider communication characteristics. Pregnant women frequently mentioned the manner and tone in which providers communicate during antenatal visits as crucial to creating an open atmosphere for discussing their pregnancies, health concerns, and experiences of anxiety. Mehreen expressed a sentiment common among patients by stating "The most important thing is to speak to the patient with love. You get more information" (M19). Narmin expressed a similar sentiment saying that a good provider is "a person who doesn't get angry and makes you understand with love" (M10). Patients commonly used words such as kind, soft-spoken, approachable, and patient to describe the qualities they wanted in a provider and that they felt would facilitate the best communication. Many patients reported that without such characteristics, they would likely feel reluctant to discuss their problems or open up to providers. For instance, Salma stated that " [providers] should have good character, a kind heart, and be soft spoken. Since you are asking me questions politely, that's why I am answering them, but if you were asking me angrily, I might never answer even one of your questions" (M6). Moreover, many patients cited listening skills and attentiveness as important characteristics for providers to have. As Daneen said: "If we talk to [providers], they must listen and try to understand us and we should do the same" (M16).
A few patients were reluctant to share their non-physical concerns, such as social stressors or emotions, with providers. They viewed sharing such 'personal' problems as beyond the purpose of a visit to a provider; they expected instead to simply receive their check-up and leave. For example, when asked whether she would be able to talk about her personal problems openly, Narmin answered:
Narmin: "No, I would not tell [my problems]."
Interviewer: "What is the main thing that would stop you from talking about them?" Narmin: "No, there isn't any. We come here for a checkup, so we have our checkup and go." (M10) However, more patients reported a desire to be able to discuss their problems with providers as long as their providers were cooperative with them. For example, Munira said "If someone [a woman] has a problem, she will share with her doctor if her doctor is cooperative. If the doctor is not cooperative, she is not going to tell them anything. That's what I do" (M17).
Many patients seemed aware that providers, especially those in public hospitals, had high caseloads and limited time, but still reported expecting providers to engage thoughtfully with them. Safeena remarked on the distinction between the atmosphere of public areas in the hospital and the quieter environment of providers' private exam rooms. Shahin reported a desire to be talked to patiently and to have her questions answered adequately, saying, "Doctors should give proper treatment and if patients are having any problems, then listen to them in a calm manner and also give solutions to their problems related to pregnancy" (M18). In some cases, when pregnant women anticipated that they would not be able to spend enough time with a provider or encounter a friendly provider at a public hospital, they reported visiting private hospitals instead.
Finally, patients mentioned the importance of having a private environment in which to communicate with the provider. Having family members present during medical encounters seemed to make patients less comfortable with openly communicating with their providers. As Amarah revealed to her interviewer, "Now my sister is not here. That's why I have discussed my [problems] with you" (M3).
The role of respect in communication with providers. Pregnant women reported multiple examples of feeling disrespected by providers. For example, Ahmadi (M8) described an instance where she had an appointment with a provider who she felt held a grudge against her, leaving her to wait in the waiting room without explanation. Ahmadi explained that the provider eventually saw her but that she did not feel treated with respect. Moreover, many patients reported that patient-provider interactions in public hospitals tend to be less respectful than treatment at private hospitals. Surayya described how she felt nervous about being disrespected by providers at public hospitals and felt inhibited asking providers at the study hospital for their contact information, even when they treated her well.

The ways that providers responded to patients' lack of knowledge appeared to be another common source of anxiety among patients. In one example, Safeena described that a provider asked her sister-in-law "when her clothes were last ruined" (M9) instead of asking the date of her last menstrual period. Her sister-in-law did not understand the question and was reportedly scolded by the provider. Similarly, Zohra described her fear of talking openly with her provider:

Interviewer: "You were saying you wanted to talk to the doctor about it."
Zohra: "Yes, that is what I'm thinking whether or not I should ask the doctor about it."
Interviewer: "Why do you feel like this is something you can't talk to the doctor about?" Zohra: "I feel the doctors will insult me." Interviewer: "So, if they do not insult you, will you be able to talk to them?" Zohra: "I need confidence." (M7) Pregnant women commonly talked about experiences of being scolded or reproached during check-ups. Soofia explained that "some doctors. . .don't talk to you in a good way" (M12). She continued to describe that when she went to a recent checkup, the provider asked her "Why do you come here, why don't you just stay at home?"(M12), but also said that "There are some doctors who talk to you in a good manner and ask about your problems" (M12). In response to a question about qualities desired in a health care provider, many women discussed how providers should treat patients with respect. To illustrate, Aleena responded "He/ she should talk politely and behave well" (M2).
The role of trust in communication with providers. Many patients described valuing common lived experiences that providers shared with them. For example, women mentioned that having a female provider, a married provider, or a provider who has personally experienced pregnancy increased the likelihood that the provider would be able to understand the pregnant women's circumstances and empathize with them. Nooshin explained that "Only a woman can understand a woman. When a woman talks and the other listens, she reads her eyes more so than her words" (M5). Nooshin's statement speaks to a commonly expressed belief that providers must build trust using both verbal and non-verbal communication. Moreover, another patient, Safeena, described the importance of first impressions in building a feeling of attunement: "[Providers] are strangers. We don't know them. They don't know us. Without knowing each other, you assess the other person in the first meeting. If the person is good to you in the first meeting, you feel good and then it becomes your last impression." (M9) Participating women, such as Munira, mentioned their trust in physicians over other kinds of health providers: "If a person is telling us what to do and he/she is not a doctor, people will not pay attention to him/her" (M17). Safeena described having more trust in providers who have proper training. When asked about what kind of provider she would like to facilitate a mental health support group, she said: "Those who don't have knowledge about it cannot be trained. It's a profession and you have been given education for that. She should be a qualified teacher or professional. She will be taking a lot of our time so she should be qualified." (M9) In response to the same question, Munira said that "a doctor would be the best choice for this because they are better able to understand the problems of pregnant ladies. . .I don't think a professor or [other kind of] trained individual can understand this" (M17). Patients' trust in the training that providers received was related to their ability to guide and understand pregnant women better. Ahmadi described the characteristics that a provider ought to have in order for women to be open to discussing their experiences and sources of anxiety with them: "First, she should be married herself, because the one who has been through all that understands better. Second, she should be educated, obviously. I think it's all about expertise and experience. If you have these things, you can guide others better." (M8)
Antenatal providers' perspectives on communication with anxious pregnant women
A major theme that emerged from provider interviews was the high frequency of conflicts and misunderstandings requiring provider-initiated negotiation. Providers also spoke about their roles as sources of comfort for pregnant women with anxiety since they often took on roles of problem solvers for health issues, financial constraints, and more, while also helping to negotiate family involvement. At the same time, providers reported that they may sometimes cause or contribute to patient anxiety, often due to their own harried demeanors and busy caseloads.
Conflicts or misunderstandings with pregnant women. Providers reported various types of conflicts and misunderstandings that they encountered in their work with pregnant women. Due to the limited time that providers can spend with each patient, some providers expressed an awareness that patients often feel as if providers do not fully hear them, respect them, or truly care for them. Dr. Aabid, an obstetrician/gynecologist, shared her perspective on experiences in Pakistan's public hospitals: "Another contributing factor [to anxiety] can be. . .the attitude of healthcare providers, which also causes psychological disorders in patients. Unfortunately, the tertiary public care hospitals that we have here are very crowded and the one-to-one care that the patients should get is impossible. So, in this type of overburdened environment, many things are left unsaid by the patient. Many questions that she does ask are left unanswered, so this also creates a sense of deprivation in patients. They sort of feel that they are not respected or cared for. The adequate time and attention that they deserve are not being provided so this is a source of tension in patients. They want to know about their illness, but no one tells them in detail about the treatment, where they should go for tests." (P2)

Several providers reported that patients occasionally become upset with them, such as when they do not like hospital administrative procedures or medical advice that they are given. In these cases, providers reported having to control the situation, call family members, or counsel the upset patients. Some providers also described having to deal with patients' family or financial issues. For example, one non-physician provider, Ms. Jarral, discussed how a low-income patient needed a blood test, but her husband was unemployed, describing how the patient "was putting the responsibility for arranging the blood on the doctors" (P8). Ms. Jarral then had to contact the pregnant woman's husband to ask him to arrange for her blood test and pay for it.
Providers also reported pregnant women's family members as sources of conflict that they need to contend with. They described how sometimes, family members, particularly those who are older, do not understand the necessity of regular antenatal care visits and can be dismissive of providers' suggestions. For instance, Dr. Cheema explained that when providers encourage patients to attend an antenatal check-up every month, "our patients [often] come to us and. . .they say that their mother-in-law said 'What's the need of going to the hospital, you're going for recreational activities as we have also delivered children and they are delivered at home, no one else delivered them for us'" (P4). In these cases, providers talked about having to reiterate their suggestions and justify their recommendations to both patients and their family members.
Providers as sources of comfort.
Providers demonstrated an understanding of the important role of listening in counseling patients. However, they also demonstrated an awareness of how time constraints and high caseloads detract from their ability to adequately listen to and comfort them. In the following quote, Dr. Aabid described how active, empathetic listening can comfort patients: "Look, the attitude of doctors matters a lot because in counseling a patient, the foremost important thing is listening to the patient. If you do listen to the patient and she thinks that you listened to her attentively and sympathetically, that the pain she described has infiltrated your brain and heart, it reduces her pain by half. So, this is why people are very dissatisfied in the extremely rushy, messy clinics [where no one] has the time to listen" (P2).

Other examples of ways that providers felt they were able to provide comfort to women included restructuring their negative thought patterns, framing their concerns optimistically, and making hopeful comments. Several providers mentioned that family issues are common stressors for pregnant women and that they often console patients about their families' attitudes and advocate for their interests. A non-physician provider said that she considered it her duty to "console the patient not to get worried because they are already stressed by their family's attitude" (P5). Another provider described giving her contact information to patients so that patients could call occasionally for support.
Assuring patients of the normalcy of unpleasant symptoms that they experienced during pregnancy appeared to be one of the most common ways that providers reported calming patients' worries. For example, one non-physician provider described telling patients that vomiting in the first trimester is common and that many mothers experience it. If that did not succeed in calming a patient, then she said that providers would counsel family members about the situation, so that family members could support the patient. As a last resort, the non-physician provider suggested that providers would prescribe anxiolytic medications.
Financial assistance was thought to play an important role in comforting patients. Providers commonly reported that in some cases where patients were stressed about their ability to afford medications or treatment, providers would work with them to find a cheaper alternative, try to convince family members to contribute funds, or even personally step in with financial help. Ms. Hidayat described one such instance that occurred after a patient gave birth: "One patient got some infection in her stitches. On the first day, she followed our recommendation and brought an injection of TANZO. The second day, she cried badly in the doctors' room. Then, the doctor helped her financially and she bought an injection and got full treatment for blood loss. She left healthy and happy because she was feeling well." (P7) Other strategies that providers mentioned using to comfort patients included refraining from talking about their personal problems in front of them and citing their own successful experiences with pregnancy. Providers also reported speaking to patients in their own language and/or dialect to build rapport. Ms. Izam stated "The best thing that I feel that we do is that we converse in their own language, be it Urdu or Punjabi. This makes it easy for them to follow what we are saying. This makes them feel like we are one of them" (P10). Providers frequently described mentioning faith and/or God when comforting patients. Here, Ms. Izam described how she would reframe patients' problems and use faith to calm their anxieties: "Even a little compassion from us [matters], like saying please don't think like this and don't be gloomy. Allah is almighty and He will fix everything for you. It's just a matter of a few days. All problems are temporary and will come to an end. You will get to go home after that." (P10) Finally, some providers spoke about teaching pregnant women that anxiety and stress can harm their health and that of their unborn baby. Providers described counseling patients that if they controlled their stress and remained content, then the chances of a successful pregnancy would increase. Moreover, Ms. Shaikh described how establishing an antenatal care routine and seeing providers regularly helps patients feel like they are being taken care of. She said, "When a patient has their regular checkups, gets their laboratory tests, ultrasounds, and CT scans done, then they feel that the doctors are good to them, they are being taken care of. The patients come out of that cycle of anxiety this way" (P9). As mentioned by a physician, pregnant women typically want to know if everything is alright with their health and with their baby, so reassuring them with these questions at each antenatal care visit is important. Another physician described how she views considerate communication and reassurance as antidotes to anxiety: "If the patient talks to you and you listen attentively, then patients mostly say, 'Doctor, I have been halfway treated by coming to you.' So, I don't think that they get physically well by just visiting the doctor. Physical disease continues to exist. It is their mental strain that lessens from just meeting the doctor, through examination [and] reassurance." (P2) Providers as sources of stress. Many providers gave accounts of how they felt they contributed to patient stress by using harsh words with patients. Ms. 
Ghazali explained: "If someone talks to you rudely, it makes sense that you would get angry, and when they are in this condition (pregnancy) then it is not permissible to make them angry. When they get angry, their blood pressure gets high and when their blood pressure rises it also affects them and it is harmful for the baby and that happens just because of our behavior." (P6) Many providers admitted that due to organizational challenges in Pakistan's public hospitals (being overburdened with job responsibilities and having limited time for individual patients) they sometimes spoke to patients harshly, possibly inhibiting open communication. For example, Ms. Jarral spoke about how providers treat mothers during labor and delivery.
"The behavior of the doctor also influences the patient at the time of delivery. There was a doctor in Lahore who was a gynecologist. . .[who] used to go abroad for her delivery. Because there is a different way of dealing with pregnant women [abroad], as the midwives sit beside the mother, holding her hands during the labor pain and here, you know, doctors deal with delivery very differently and sometimes their behavior is not appropriate." (P8) To solve the problem of providers treating patients rudely, another non-physician provider suggested that increasing staff and beds in the hospital wards would enable providers to give more individual attention to patients and lessen their use of harsh language, thereby impacting patients' anxiety levels and even physical health. Ms. Izam stated that "Our cooperation is very essential for [patients] to be stable. Otherwise, they can end up with high blood pressure" (P10). According to Ms. Ghazali, sometimes the public hospital environment is filled with so much tension that patients "run away" (P6). In particular, she believed that after hearing screaming from the labor room and witnessing the occasionally inappropriate behaviors of physicians and other staff members, patients sometimes decide to get care at a private hospital instead.
Integrated perspectives from patients and providers on communication during the antenatal period
Patients and provider accounts revealed differences in socioeconomic status (SES) and sociocultural norms between the groups. These differences appeared to shape communication, as well as pregnant women's care-seeking behaviors and providers' personal views of them. Patients and providers often seemed to agree on how SES differences affected patients' abilities to access care but did not always agree on how their own sociocultural beliefs and their perceptions of each other's sociocultural beliefs influenced patient anxiety and health behaviors.
Socioeconomic influences on patient-provider communication and antenatal health care interactions. The low-resource setting of the study hospital seemed to govern patient-provider communication to a large degree, and the SES gap between patients and providers at the hospital was noted by both groups. Patients expressed views that providers had comfortable lives, even if they had packed caseloads. One pregnant woman went as far as to express that a good quality provider "should think others are better than themselves" (M3). Meanwhile, providers consistently took into consideration patients' financial constraints when prescribing medication, recommending treatments, and suggesting lifestyle changes. Many providers described how seemingly simple recommendations must only be offered after considering their financial ramifications, as demonstrated in this statement from Dr. Beg:
"Affordability is a major issue for our class [of low-income patients]. . .See, if we tell the patient to do a protein diet, what's the price of mutton or beef nowadays? If you ask the patient about how many times they cook it in a month, then [it is] maybe once for the lower class. Then, what's the number of members left in the house, after it gets distributed once?" (P1)
When prescribing medications, providers described needing to consider how long the patient might be able to afford a long-term medication. Similarly, when advising patients to obtain diagnostic testing, they had to consider where such testing would be offered. For example, if the testing was only available in a private hospital, then the patient might not be able to afford it. Even highly discounted testing in public hospitals might pose a financial issue for many patients. Dr. Beg said that "if the ultrasound is being done in the [public] hospital, then it's minimally charged. Extremely minimal. Many times, maybe the patient is still unable to afford it" (P1). In high-risk pregnancies, the number of visits and tests needed can be high, and providers described having to be cognizant of how high-risk patients' financial limitations may affect their treatment and health outcomes.
Ms. Shaikh discussed how pregnant women often asked her about costs for tests that she recommended to them and would only act on her recommendations after ensuring affordability:
"Mostly women do not share [their finances] with us openly, but we can feel this thing from their attitude. For instance, when we advise women to get their laboratory test done then they ask us about the charges of each test. Thus, we can access from their facial expression that it's difficult for them to afford such tests. If a person can afford it, they will never ask you this information. [Pregnant women] actually first find out about their expense, then go for tests." (P9)
While she said that most patients reveal low SES through indirect questions about cost, Ms. Shaikh also mentioned that "There are some patients who are assertive; they directly tell the doctor that they cannot afford the tests" (P9).
Pregnant women seemed aware of their own low SES and the fact that public hospitals tend to treat low SES patients, often comparing their experiences at public hospitals to what they perceived as higher-quality treatment for higher SES patients who could afford to attend private hospitals. Mehreen exhorted high SES providers at public hospitals to treat low SES patients with more respect regardless of their financial constraints. Several pregnant women reported feeling a sense of injustice and tension at how high SES providers treated them and believed that providers at for-fee health facilities were more courteous to their patients because they could pay for treatment. Meanwhile, providers described how patient anxiety about their limited financial resources was shown in the verbal and nonverbal ways that patients inquired about treatment costs. Some providers reported attempting to integrate treatment costs and financial constraints into their discussions with patients.
Sociocultural influences on communication. Providers described several sociocultural differences that they encountered between patients and themselves. Firstly, according to providers, patients tended to describe their experiences of 'anxiety' in highly somaticized language that did not necessarily frame anxiety as a mental or emotional disorder. Many providers explained that patients described their anxiety in discrete physical terms, such as "sar me dard" (headache), "saas phoolti hai" (breathlessness), and "bechayni" (restlessness). As such, some providers reported relying on a combination of patients' body language, medical history, and description of symptoms to diagnose them with anxiety. Ms. Hidayat told how anxious pregnant women "mostly are unsure of what is happening around them and say that can't do anything and mostly use facial expressions to show their anxiety" (P7). However, even when they attempted to reduce patients' anxieties, such as by normalizing unpleasant symptoms, educating patients about anxiety or prescribing anxiety medication, they felt that patients might not understand that they were experiencing a mental health condition instead of its resulting physical symptoms. In this example, Dr. Durrani described how patients tend to lack insight into anxiety as a condition that can cause a wide range of psychosomatic symptoms and can be effectively treated by a mental health professional:
"Women have no idea they have anxiety, they don't go to the doctor, they do not understand about treatment. They should be checked by a psychologist or psychiatrist. They do not understand this; they take this as a physical illness. This is very important [to understand] and this is not [physical] illness." (P3)
When asked what stops women from reporting that they experience anxiety, she said that patients "have no realization and. . .they feel it is not a big problem to share [their anxieties] and no benefit to share this. But this is very important, if they have no insight, then how will they recover?" (P3). In contrast, several patients contested the notion that healthcare is the answer to reducing their symptoms of anxiety. Instead, they emphasized the importance of the social environment in shaping a woman's mental and emotional state. Safeena said that "neither diet nor blood are helpful for women's health, but only atmosphere matters" (M9).
As another emerging theme, many patients and providers commented on how social norms governing women's roles in their households and families influenced their experiences of anxiety and their interactions with the antenatal healthcare system. These social norms often included a degree of subservience to husbands and elder family members. For example, in answering a question about how spousal conflict affects pregnant women, Dr. Durrani illustrated possibly stigmatizing beliefs towards lower- to middle-SES women when she stated: "I think our women accept these things [husbands not being cooperative]. Basically, these women are from the lower-middle class. They have no authority and control over themselves and their husbands." (P3) In another example, Surayya described how whenever she talked to her husband about a health concern, he would minimize it by telling her not to worry and that he would take care of her concerns himself. She went on to express how she feels that antenatal providers should communicate with the woman and her husband simultaneously:
"When the pregnant mothers come to the doctor, [the doctor] should do all antenatal checkups in front of their husbands, so they know the condition of their wives and know what necessary tests are required. . .there is no use to talk only to women, but husbands should also be present because they have to spend a long time together." (M14)
Several providers reported potentially stigmatizing or stereotyped views of women, especially those of particular ethnic minority groups or low SES backgrounds. These views appeared to shape their communication with and recommendations to anxious women in antenatal care encounters. For example, some providers either assumed that their female patients were responsible for housework or even actively endorsed traditional gender norms, such as in this example from Dr. Aabid: "To physically look after a child, feeding them milk, looking after them psychologically, making them a good human being. This task should not be called a burden but is a responsibility that has to fall on the mother." (P2) Other providers framed low SES women's lives as governed largely by housework and repeated pregnancies, suggesting that they sometimes take advantage of pregnancy strategically in order to secure certain privileges. For instance, Dr. Beg displayed possibly prejudiced views against the Pathani ethnic group (a small minority group prevalent in Western Pakistan [39]), saying that some women "get pregnant again and again because they know they will be given some extra care during this time, regarding the diet or rest point of view. This is very common among Pathans, as they consider themselves very important" (P1). A few providers expressed the view that low-income pregnant women have cultural views grounded in ignorance; for instance, in response to a question about how women who have had prior miscarriages cope with their current pregnancies, Ms. Jarral appeared to shift blame onto them.

Such sentiments containing elements of stigma and/or prejudice were much more common among providers; SES and sociocultural prejudices were not commonly expressed by patients towards providers. While providers did not often report directly conveying stereotyped views about patients to them, some providers mentioned making decisions about how to deal with women who communicated cultural beliefs that they did not agree with. For instance, Dr. Beg discussed how she would ignore it when patients mentioned that older generation family members held cultural beliefs against visiting an antenatal doctor regularly during pregnancy.
"It's a very normal sentence that we hear from everyone and we don't pay heed because what answer do we have to this: a mother-in-law who gave birth to eight children at home and never went to the doctor and the daughter-in-law says 'I have to go to the doctor.'" (P1) Meanwhile, some patients lacked confidence in providers' abilities to help them with their anxiety. For example, when asked how providers could help her to deal with her apprehensions surrounding delivery and surgery, Zohra said "doctors cannot do anything. It's all in Allah's hands. [The apprehension] will go away on its own" (M7). Another patient, Safeena (M9), wondered out loud whether her mother was correct that it would be better for her not to verbalize her fearful thoughts, such as anxiety about her fetus's movement, in order to protect against the evil eye. Women also described how lack of family cooperation with healthcare providers' advice could arise from differences in cultural norms around how care should be received by women and how women should interact with providers. Some women mentioned that their families and/or communities viewed the hospital as a place to quickly receive medicine or treatment. Yusra described community members' general attitude towards mental healthcare professionals administering treatment for anxiety at hospitals as "they don't do any work, they are just willing to corrupt the mind of women. . .'You went to get medicines, get medicines and come back'" (R2). She said that this attitude is prevalent in rural areas, but that it happens in most cities too. Illustrating a similar point, some providers described how many patients' families believe that they must only visit the hospital for an antenatal checkup at the conclusion of a woman's pregnancy. One non-physician provider commented on the role of family members in shaping women's antenatal care experience: "There are issues, such as when a family, after their first visit to the hospital, get their antenatal card issued and then they don't come further for regular checkups. . .the mother-in-law thinks that delivering a baby is not a big deal. Husbands are careless in taking their wives to hospital for checkup. When a woman in this condition visits a hospital in the last trimester, then we have no record of her checkups, lab tests, history, blood pressure, sugar, hemoglobin. We cannot recommend necessary precautionary measures to her. We are close to square one. Sometimes a woman comes with heavy bleeding and her family members are not cooperative. In this situation, her family members avoid giving blood to her and put the responsibilities [of giving blood] to other family members or in-laws. In this situation, only a woman suffers." (P5) Dr. Beg, mentioned how pregnant women face a social expectation of taking care of men first, then their children, and lastly themselves. She said that when she tells her patients to follow a certain diet, for example, she is cognizant of this norm and knows that the women might not "feel like [she] can say it when she goes home at all" (P1). However, many patients also spoke to differences in the ways that families influence women's experiences of antenatal care with some families reportedly having no problems with women making their own healthcare decisions and other families granting permission to women to receive healthcare and interact with providers as they saw fit.
Communication processes and content used by providers and their antenatal patients
Our findings point toward multiple challenges faced by Pakistani antenatal care providers in deciding what and how to communicate to pregnant women, particularly those experiencing symptoms of anxiety, in the public healthcare setting. Providers described having to remain conscious of time constraints and the needs of their entire caseload, while also paying attention to and building rapport with individual patients. They reported the need to account for various factors outside the healthcare context that influenced how women would respond to treatment recommendations, such as SES, family involvement, and health beliefs. Meanwhile, patients and providers both disclosed holding stereotyped views about each other's sociocultural backgrounds, some of which appeared to shape how they communicated and interacted in the antenatal healthcare encounter. The key factors that influenced patient-provider communication between antenatal providers and pregnant women included both those related to the communication processes (e.g. perceptions of each other, tone, demeanor, etc.) and the content of that communication (e.g. reporting of symptoms, normalization of fears, discussions of psychosocial factors related to treatment, etc.) that jointly influenced patients' experiences of anxiety (Table 4).
Discussion
This is the first qualitative investigation of patient-provider communication and antenatal anxiety in the public healthcare setting of Pakistan. Our study bolsters existing literature on patients' desire for compassion and respect from their providers [21,22,45] while adding to the discourse around how antenatal providers care for low-SES patients within resource constrained environments. Our major findings center on pregnant women's desire for compassion, respect, and trust in communication with antenatal providers. Our findings from antenatal providers corroborated the importance of empathy and compassion in communication with pregnant patients, but also highlighted how sociocultural differences between patients and providers, financial constraints, and time constraints can lead to increased patient anxiety. Overall, the low-resource setting of the study hospital emerged as a major barrier to effective patient-provider communication, potentially leading to reduced courtesy, compassion, and trust in the patient-provider dyad.
Our primary results suggest that pregnant women with anxiety seek warmth from antenatal providers and find comfort in being able to relate to providers, establish trust with them, and engage in empathic and respectful communication with them. These results align with others' findings that providers' tone of voice and warmth can reduce patient anxiety [21,22]. They underscore the healing and therapeutic role that even non-mental health providers can have for anxious patients [19,26] and the potential for caring patient-provider communication to reduce pregnant women's anxiety and increase positive expectations for treatment [21,22]. Our results also appear consistent with Nicoloro-Santa Barbara et al.'s (2017) findings that a patient-provider relationship marked by strong communication and collaboration can reduce patient anxiety and increase advantageous health behaviors [31].
These results stand in contrast with Jalil et al.'s (2017) study, which found that most diabetes patients in a public clinic in Punjab Province did not mind providers communicating with them disrespectfully and even considered physicians as superior to themselves [46]. Patients in that study tended to connect their satisfaction with providers to relief from pain and successful physical health outcomes instead of providers' behavior or demeanor [46], but women in our study overwhelmingly connected providers' communication style to their satisfaction with antenatal care encounters. One explanation for this contradiction could be that caring communication holds greater value during the antenatal period, when the majority of women do not have pain or illness, as opposed to other health conditions in which communication may feel secondary to physical discomfort. It has been suggested in the US that, due to the intimate and memorable nature of the perinatal period, patients may form closer relationships with obstetrician/gynecologists as compared to other kinds of physicians [47]. One study found that the odds of patient satisfaction resulting from obstetrician/gynecologist providers' caring and friendly attitude were three times higher than for other kinds of specialists [47]. Others have suggested that the antenatal context in LMICs may be different from sick outpatient care because of low-SES patients' perceptions that antenatal care has low utility and a high opportunity cost for time and effort given that it is a preventative care service [48]. This assertion is consistent with our findings that many patients reported that their family members did not believe that obtaining antenatal care was worth the trip or expenses and that some providers reported that they had to justify the importance of consistent antenatal care.
A strength of our study is its inclusion of provider voices in addition to patient perspectives on communication in the antenatal setting. In contrast to Nadir et al.'s (2018) study, which suggests that providers in Pakistan focus on biomedical factors in patients' illness experiences instead of psychosocial factors [28], providers in our sample expressed a keen awareness of the importance of demonstrating empathy in their communication with pregnant women and helping to alleviate their anxieties through acknowledging and addressing psychosocial stressors. Our findings also differ from prior studies that describe conversations between providers and patients in Pakistan as one-sided and authoritarian [27,49] by revealing various ways in which provider communication in the antenatal care setting takes into account patients' social contexts.
Antenatal providers in our study recognized the potential role of patient-provider communication in exacerbating or relieving anxieties in pregnant women. External factors constraining providers' ability to convey empathy or address anxieties in the antenatal visit included time constraints, high caseloads, and family members' involvement. The factors that emerged from our analysis as key contextual constraints on patient-provider communication are consistent with Feldman-Stewart, Brundage, and Tishelman's [50] model of patient-provider communication, which posits that circumstances of the clinical encounter can cause patients' and providers' goals to be compromised, adjusted, or never reached.
Despite a shared awareness of providers' external constraints, pregnant women in our study reported an ability to trust and open up to antenatal providers who had common lived experiences with them and ample medical experience and education. Although there is a dearth of research on the role of trust in improving communication between antenatal providers and pregnant women in LMICs, one review study of cancer patients in HICs found that when oncologists communicate their expertise, efficiency, and technical skills, patients tend to trust them more [51][52][53][54][55], a finding consistent with our results. The same study also found that cancer patients' trust in their physicians decreased their fears, worries, and perceived risk of illness [51]. Moreover, research on cancer patients has shown that they identify trust as a prerequisite and facilitator of open communication and satisfaction with communication [51,56,57]. Our results coincide with the aforementioned studies by demonstrating that pregnant women view trust as an important element of communication that reduces their experiences of anxiety.
Hospital factors limiting the quality of antenatal provider-patient communication were perceived by patients and providers as particularly relevant to public healthcare facilities as compared to private healthcare facilities. Patients portrayed private hospital providers as giving more individualized, higher-quality, and more respectful care and attention to their patients, while the public antenatal care providers in our sample described financial and time constraints as endemic to the public health sector. This qualitative finding is consistent with Hassan and Rehman's (2011) study that found provider workload to be heavier and patient-provider communication quality to be poorer in public hospitals [58]. Multiple studies, mainly from HICs, have found that patients at larger hospitals, teaching hospitals, and hospitals with decreased privacy tend to have decreased satisfaction [47,59,60]. In line with these findings, it makes sense that pregnant women's perceptions of the large teaching study hospital setting may have heightened their anxieties since they have less control of their environment in this setting as compared to a private, smaller, or inpatient care setting [47]. Pregnant women in our sample expressed worries about obtaining antenatal care from public healthcare settings based on the expectation that providers would treat them discourteously or hurriedly, an expectation that both patients and providers identified as an important barrier to opening up about anxiety symptoms or other stressors when receiving antenatal care. Moreover, several patients reported that they felt their low SES negatively influenced how antenatal providers in public hospitals communicated with them, a result that aligns with Irfan and Ijaz's (2011) assertion that private hospitals in Pakistan devote more attention to meeting patient needs since they rely on higher-SES patients' patronage to remain profitable [61]. Other studies, including one on antenatal care satisfaction in an LMIC setting [48], have found that patients value provider communication that does not stigmatize their low SES [62]. More research is warranted to further characterize differences between the public antenatal care context, such as the setting of this study, and the private sector to determine how perceived communication quality differs and which factors potentially contribute to differences in communication between pregnant women and antenatal care providers in these settings.
While this study focuses on interpersonal dynamics between provider and patient, the complex structural and cultural forces that may shape communication in the antenatal care encounter cannot be overlooked. Although both patients and providers in our sample were Pakistani, differences in SES, ethnic origin, and sub-cultural understandings emerged in their respective interviews. Stereotyped perceptions of low-SES individuals or of particular ethnic minority groups may influence communication in the antenatal care environment, possibly leading to discriminatory procedures and interactions in a concept that Knight (2020) calls 'enacted stigma' [63][64][65][66]. While these perceptions may not be verbally expressed, they still constitute internal belief systems (as labelled in Feldman-Stewart et al.'s communication framework) that can be communicated non-verbally and/or unintentionally [50] and that have the potential to change how messages are conveyed and received. Such perceptions can also influence providers' decision-making processes for diagnosis, treatment, and case management [67]. Future research should delve deeper into the relationship between stereotypes held by antenatal providers and pregnant women's experiences of anxiety.
Some limitations of our study include a smaller number of provider interviews compared to the number of interviews with pregnant women and the reliance on secondary analysis of data collected at one facility with the purpose of informing the design of a preventive intervention targeting antenatal anxiety. Although saturation was reached on the topics of antenatal anxiety sources, manifestations, and coping strategies, we may not have reached saturation on all issues relevant to the patient-provider relationship or communication in the antenatal care setting. However, despite the smaller number of provider interviews, they tended to be richer than the average patient interview. Due to logistical constraints, primary data collection was completed at a single public hospital in Pakistan. Future studies could consider collecting data from multiple public hospitals across several regions in Pakistan as well as from private hospitals in South Asia. The inclusion of only patients with current anxiety could be seen as a limitation, but it also allowed for an in-depth analysis of the theme of anxiety in relation to patient-provider communication, which, to our knowledge, has not been widely studied. Therefore, a strength of this study was its focus on the experience of pregnant women with at least mild anxiety and its exploration of the role of patient-provider communication in these patients' experiences.
Other study strengths include our use of an iterative coding method [44], a combination of inductive and deductive analytic methods, and the inclusion of both patients and multiple types of providers. An iterative coding method allowed for an adaptive coding scheme, where each phase of coding built upon the results of the prior phases [44]. The use of two coders reduced the potential for bias in the analysis. Lastly, since the sample of providers included physicians and non-physician providers (i.e. nurses and midwives) with various levels of experience, the study's results speak to the multiplicity of patient interactions with various kinds of providers at public hospitals in Pakistan.
Our results contribute to literature on patient-provider communication and mental health outcomes in LMICs. The findings provide context as to how anxiety is influenced by and reproduced through patient-provider relationships that are marked by differences in styles of communication, sociocultural backgrounds, and treatment goals. As such, the findings can be used to inform mental health interventions in low-resource settings that target both patients and providers. For example, interventions targeting patient-provider communication in the antenatal healthcare setting could encourage the use of empathic listening among providers and active participation by pregnant women by training providers on how to incorporate these elements in the short timeframe of an antenatal care visit [68]. Finally, paired communication training could simultaneously target providers and patients in order to improve patient-centered communication [69].
Conclusion
This study reveals the existence of multiple fault lines within patient-provider communication in Pakistan's public antenatal care setting that should continue to be explored in future research. Specifically, we found that high patient caseloads, time and financial constraints, family involvement, and socioeconomic and sociocultural stigma adversely influenced patient-provider communication. Our results suggest that tangible resource constraints in the study hospital, such as a high patient-provider ratio, pregnant women's low SES, and providers' limited time, translated into interpersonal constraints, including reduced individual attention, diminished empathy, and less courteous tones. These interpersonal constraints appeared to contribute to pregnant women's anxieties. Future studies on communication dynamics in antenatal settings could be extended to include private hospitals in Pakistan as well as to explore the role of sociocultural stereotypes and SES differences in influencing patient-provider communication in low-resource antenatal settings in South Asia. Such research could help to shed more light on the complex determinants of pregnant patients' anxiety and to enhance maternal and child health outcomes in low-resource settings.
|
v3-fos-license
|
2020-02-20T09:12:38.640Z
|
2020-02-10T00:00:00.000
|
211828961
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.jcancer.org/v11p2318.pdf",
"pdf_hash": "01c512f79183f2ae902fa78c62b3ed4f538d187a",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45907",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "4d4a2ef9aea9a1f1a9ae18a02e57a1317e123d45",
"year": 2020
}
|
pes2o/s2orc
|
Actinin-4 splice variant - a complementary diagnostic and prognostic marker of pancreatic neuroendocrine neoplasms
Introduction: For the pathological diagnosis of pancreatic neuroendocrine neoplasms (pNENs), the routinely used immunohistochemical markers are chromogranin A (CgA) and synaptophysin (Syn). Their value as prognostic markers is not well established. A splice variant of actinin-4 (Actn-4sv) was recently found to be an excellent biomarker of neuroendocrine neoplasms of the lung. We aimed to investigate the expression of Actn-4sv in pNENs and evaluate its quality as a biomarker of pNENs. Methods: Paraffin-embedded and frozen tissue specimens from 122 pNENs were analyzed. Western blots were performed to confirm and compare the relative amount of Actn-4sv expression in pNEN tissue homogenates. For comparison, pancreatic ductal adenocarcinoma (PDAC) and normal pancreatic tissues were analyzed in parallel. Immunohistochemistry (IHC) of paraffin sections of pNENs for Actn-4sv was performed and compared to the classic neuroendocrine markers CgA and Syn. Correlations were calculated between the staining intensity and distribution of Actn-4sv and staging, grading, and afflicted lymph nodes, respectively. Results: Actn-4sv was expressed in 88.5% (108/122) of pNENs, but not in normal pancreatic tissues (0/14) or PDAC (0/14). In contrast to CgA and Syn, Actn-4sv was not detectable in islet cells of the normal pancreas. Staining intensity of Actn-4sv in pNENs correlated negatively with histological grading (Spearman r = -0.4990, p < 0.0001) and staging (r = -0.2581, p = 0.0041), but no correlation with afflicted lymph nodes was found. A significantly better overall survival was observed for pNEN patients with higher expression of Actn-4sv (hazard ratio 2.7; log-rank test p = 0.0349). Conclusions: The expression of Actn-4sv may be an important prognostic factor for patients with pNENs. Its expression correlates with the grading and staging of the tumors.
Introduction
Pancreatic neuroendocrine neoplasms (pNENs) are rare tumors, comprising 1-2% of all pancreatic tumors [1]. Neuroendocrine neoplasms, including pancreatic NENs, have recently gained attention due to their growing incidence [2]. The age-adjusted incidence of pNENs increased over the period 1973-2007 from 0.17 to 0.43 cases per 100,000 people in the USA [3]. This increase is also due to increased physician awareness and improvements in diagnostic imaging. In particular, highly sensitive and specific imaging techniques, such as computed tomography, SPECT with 111In-pentetreotide, positron emission tomography (PET) with 68Ga-DOTATATE, 11C-5-HTP and 18F-DOPA, multidetector-row CT, and endoscopic ultrasound, allow pNENs to be detected and localized [4,5]. PNENs present with various symptoms, often related to the hormones produced. The molecular pathomechanisms are mostly unknown, but clinical studies have shown that they may develop sporadically or be inherited [2].
PNENs can be divided into functional and non-functional tumors, with up to 85% being classified as non-functional [2]. The majority of patients diagnosed with non-functional pNENs present on admission with symptoms such as jaundice, weight loss, nausea, abdominal or back pain, and pancreatitis, all of which occur with similar frequency in patients with pancreatic adenocarcinoma [6]. Functional pNENs are characterized by the production and secretion of a variety of hormones, including somatostatin, insulin, glucagon, serotonin, and pancreatic polypeptide. They are classified according to the secreted hormone and the resulting clinical syndrome. From several studies it has been estimated that insulinomas are the most frequent functioning pNENs, followed by gastrinomas, whereas VIPomas and glucagonomas are rare.
Easily detectable tumor markers that are specifically expressed in pNEN tissues and secreted into the blood circulation are not established, although useful immunohistochemical tissue markers and serum peptides exist [10]. A note of caution is that the markers routinely used for pathological identification of pNENs, chromogranin A (CgA) and synaptophysin (Syn), are also expressed in a variety of other tumor types and, conversely, are occasionally lost in pNENs [11,12]. CgA is a serum marker with a specificity of 85.7% and a sensitivity of 67.9% and is routinely used as a histopathological marker on tissue [13-15]. Because of the poor sensitivity and large inter-assay detection variance, new tools for the diagnosis, prognosis, and monitoring of pNENs are urgently needed. Recent research in this field has focused on identifying biomarkers for biologically targeted therapy and on multi-marker tests such as the NETest [16].
In 2004, Honda et al. discovered a splice variant (Actn-4sv) of the ubiquitously expressed actinin-4 [17]. In subsequent studies they suggested Actn-4sv as a prognostic marker for neuroendocrine tumors of the lung, namely small cell lung carcinoma and large cell neuroendocrine carcinoma [18]. Actinin-4 is an actin-binding protein and a component of the cytoskeleton. Originally it was associated with elevated cell motility and tumor invasiveness [19]. Further studies reported overexpression and the importance of actinin-4 in many malignant tumors as a pro-tumor marker [20]. The actinin-4 gene contains two equally long alternative exons (8 and 8'), leading at the transcriptional level to two different mRNAs and resulting in the exchange of three amino acids: N249G, A251L, and S264C [17]. The exon 8 transcript is ubiquitously expressed; in healthy tissues, however, the exon 8' transcript, called "Actn-4sv", was found only in brain and testis and is thus considered a cancer-testis antigen.
Okamoto et al. showed that this splice variant of actinin-4 is a marker with strong diagnostic and prognostic validity in small cell lung carcinoma [21]. Pathologically, this spliced form also occurs in neuroendocrine tumors of other tissues, and the authors demonstrated RNA for the variant actinin-4 (Actn-4sv) in all six investigated cell lines derived from NETs. However, in pancreatic NENs the expression of the actinin-4 splice variant had not yet been investigated.
Using frozen and FFPE pNEN tissues, we addressed the questions of whether Actn-4sv may function as a potential indicator of pNENs, whether it may be a complementary marker to CgA and Syn, and whether Actn-4sv levels differ between poorly and highly differentiated pNENs. We found that Actn-4sv is expressed in pNENs but not in normal pancreatic tissue. Additionally, we investigated the correlation between variant actinin-4 expression and factors such as outcome and invasive growth and found that patients with high expression of Actn-4sv also have a better prognosis.
Patients and samples
The study, performed at the Department of General Surgery, University of Heidelberg, was approved by the Ethics Committee of the University of Heidelberg, and written informed consent was obtained from all individuals from whom tissue samples were collected.
In this study, we retrospectively examined a collection of tissue samples from patients with pNENs in our database. Resected tumors were classified histopathologically according to the WHO and TNM classifications, and pNEN patients with stage I-IV disease were selected. The pNENs were diagnosed and classified using Ki-67 and the immunohistochemical markers CgA and Syn. The following procedure was used to assemble a robust collection of pNEN samples. First, all samples listed as neuroendocrine pancreatic neoplasms obtained between Feb. 2002 and Dec. 2012 in the Dept. of General Surgery, University of Heidelberg (n = 320) were retrieved from our database (Biobank of the European Pancreas Center). The patients' samples were then selected depending on the availability and amount of both frozen and corresponding FFPE pancreatic tissue (n = 176) at the time of the study. Prior to analysis, H&E staining was performed on all 176 pancreatic tissues and reviewed by experienced pathologists (F.B., M.M.G., and S.H.) to confirm the disease diagnosis. Samples that did not fulfill the quality criteria (e.g. no significant amount of tumor, or tissue alterations by electrothermic artifacts) were excluded. Finally, a total collection of 122 pNEN samples was obtained, and all of these samples were included in this study. Furthermore, 14 tissue samples from patients with PDAC, four with chronic pancreatitis (CP), and 14 normal tissue samples were randomly chosen and analyzed.
The cell lines were cultured in RPMI 1640 supplemented with 10% fetal bovine serum (FBS), 100 U/mL penicillin, and 100 µg/mL streptomycin (all from Life Technologies, Darmstadt, Germany). Cells were maintained in a humidified atmosphere with 5% CO2 at 37°C.
Western blot
Western blotting was used to confirm the expression of Actn-4sv in pNENs. It was performed on tissue extracts from pNENs, PDAC, CP, and normal pancreas, as well as on cell lines of neuroendocrine tumor and exocrine pancreatic cancer origin. Tissue and cell extracts were obtained by first crushing the frozen tissues (60-80 mg/sample) while submerged in liquid nitrogen. The resulting powder was collected in 15 ml polypropylene tubes. Subsequently, 500 µl RIPA buffer (Tris 25 mmol/L, NaCl 75 mmol/L, NP-40 1%, CHAPS 250 mg/L, SDS 1%, pH 8.5) containing protease inhibitors (Roche, Mannheim, Germany) was added. Each tube was vigorously vortexed, then shock-frozen in liquid nitrogen and stored overnight at −80 °C. The next day, the suspensions were subjected to a 30 s sonication step on ice (amplitude 80%, 0.99 kJ) using an ultrasonic homogenizer (SonoPuls mini20, Bandelin, Berlin, Germany) and centrifuged at 16,000 × g for 10 min at 10 °C. Supernatants were collected and divided into aliquots, and the total protein concentration was determined using a Pierce BCA assay (ThermoScientific, Germany) and read on a plate reader (Multiskan EX, Thermo Scientific, USA).
After washing, membranes were incubated with a horseradish peroxidase-conjugated goat anti-mouse or goat anti-rabbit IgG secondary antibody (Santa Cruz, CA, USA) diluted 1:5000 at 22°C for 30 min. Blot detection followed with SuperSignal West Dura Extended Duration Substrate (Pierce, Waltham, USA). The signals were recorded using a FUSION image acquisition system (Vilber Lourmat, Marne-la-Vallée, France). Band intensities were quantified using ImageJ software and normalized to β-actin levels.
Immunohistochemistry
The expression of the Actn-4sv protein in pancreatic neuroendocrine neoplasm tissues was determined and semi-quantitatively evaluated by immunohistochemistry (IHC). For comparison, the expression of CgA, Syn, and wild-type actinin-4 was also analyzed by IHC.
The scoring algorithm was adapted from Hu et al. and took into consideration the proportion (percentage) of stained tumor tissue area and the intensity of the staining [24]. Immunostaining of Actn-4sv in less than 10% of tumor cells was denoted as negative (0). When >10% of tumor cells were stained, the expression was considered positive and denoted as weak (1), moderate (2), or strong (3) depending on the intensity. Scores of (0) and (1) were categorized as low expression, while scores of (2) and (3) were categorized as high expression. The results of IHC were judged by SH and FB. A randomly selected subset of 50 slides was analyzed independently by MMG, with a match of over 95%.
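The scoring rule above maps directly onto a small decision function. The following Python sketch is purely illustrative (the function name, inputs, and return format are our own assumptions, not the authors' analysis code):

```python
def score_actn4sv_ihc(percent_stained: float, intensity: int) -> dict:
    """Score Actn-4sv immunostaining following the algorithm adapted from
    Hu et al.: <10% stained tumor cells is negative (0); otherwise the score
    equals the staining intensity (1 weak, 2 moderate, 3 strong).
    Scores 0-1 are categorized as low expression, scores 2-3 as high.
    """
    if percent_stained < 10:
        score = 0  # negative: fewer than 10% of tumor cells stained
    else:
        if intensity not in (1, 2, 3):
            raise ValueError("intensity must be 1 (weak), 2 (moderate) or 3 (strong)")
        score = intensity  # positive: score follows staining intensity
    category = "low" if score <= 1 else "high"
    return {"score": score, "category": category}

# Example: 60% of tumor cells stained at moderate intensity -> score 2, "high"
print(score_actn4sv_ihc(60.0, 2))
```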
RNAseq data analysis
Total RNA was isolated from randomly chosen fresh-frozen pNEN tissues using the RNA Plus Mini Kit (Qiagen, Hilden, Germany) according to the manufacturer's instructions. RNA quality was assessed on the Agilent 2100 Bioanalyzer using the Agilent RNA 6000 Nano Kit (Agilent Technologies, Waldbronn, Germany). Samples (n = 24) with an RNA Integrity Number (RIN) > 7.5 were used for RNA sequencing.
PNEN transcriptomes were generated from dual-indexed libraries on an Illumina HiSeq4000 sequencer in paired-end, 100 bp read-length mode by the DKFZ sequencing core facility. We used STAR [25] to map the raw data to the human reference genome (GRCh37), while subsequent read group addition and removal of duplicate fragments were done with Picard tools (http://broadinstitute.github.io/picard). The Cancer Genome Atlas (TCGA) pNEN transcriptomes were obtained from the TCGA data portal as BAM files (mapped to hg19). The classification of the tumors was verified from the provided pathology reports. RNA-seq datasets of PDAC cell lines were downloaded from the TCGA legacy archive.
To determine the relative expression of the alternative ACTN4 exons 8 and 8' [17], spliced into the Actn-4 and Actn-4sv mRNAs respectively, we used BEDtools [26] to extract the mean coverage of both exons. Normalization was performed by dividing the mean coverage of exons 8 and 8' by the number of reads in the mapped, duplicate-removed transcriptome and multiplying the results by 1 million. For plotting the results, we used the R package 'ggplot2' [27].
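The normalization described here reduces to a reads-per-million scaling of the mean exon coverage. Below is a minimal sketch of that arithmetic; the numerical inputs and variable names are hypothetical, and in practice the mean coverages would come from BEDtools as stated above:

```python
def normalize_coverage(mean_exon_coverage: float, mapped_dedup_reads: int) -> float:
    """Normalize mean exon coverage by library size: divide by the number of
    mapped, non-duplicate reads in the transcriptome and scale by 1 million."""
    return mean_exon_coverage / mapped_dedup_reads * 1_000_000

# Hypothetical sample: mean coverages of exon 8 (Actn-4) and exon 8' (Actn-4sv)
exon8_cov, exon8p_cov = 420.0, 60.0   # e.g. extracted with BEDtools
library_reads = 45_000_000            # mapped, duplicate-removed read count

norm8 = normalize_coverage(exon8_cov, library_reads)
norm8p = normalize_coverage(exon8p_cov, library_reads)

# Fraction of ACTN4 mRNA carrying the alternative exon 8' (Actn-4sv),
# the quantity reported in the Results (average ~13%, range 2-34%).
sv_fraction = norm8p / (norm8 + norm8p)
print(f"Actn-4sv fraction: {sv_fraction:.1%}")
```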
Biostatistics
Statistical analysis and graphical data presentation were performed using GraphPad Prism 5 (GraphPad Software Inc., San Diego, CA) and SAS software (Release 9.4, SAS Institute, Inc., Cary, NC). Survival rates were assessed using the Kaplan-Meier method. Patients alive at the last follow-up were censored. In all graphs, overall survival (OS) is defined as the time from the date of the operation to either death from any cause or last follow-up. The difference between the Kaplan-Meier curves was tested for significance using the log-rank test. Differences were considered statistically significant at P < 0.05.
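The survival analysis itself was run in GraphPad Prism and SAS; as a hedged illustration, the same steps (Kaplan-Meier estimation with right-censoring of patients alive at last follow-up, followed by a log-rank comparison at P < 0.05) could be reproduced with the Python lifelines package roughly as follows. The data frame and column names are invented for demonstration:

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Toy data (illustrative only): overall survival in months from the date of
# operation; event=1 is death from any cause, event=0 is censored (alive at
# last follow-up).
df = pd.DataFrame({
    "os_months":    [12, 34, 56, 7, 81, 45, 23, 60],
    "event":        [1,  0,  0,  1, 0,  1,  1,  0],
    "actn4sv_high": [1,  1,  0,  0, 1,  0,  0,  1],
})
high = df[df["actn4sv_high"] == 1]
low = df[df["actn4sv_high"] == 0]

# Kaplan-Meier estimates for each staining-intensity group
kmf_high = KaplanMeierFitter().fit(high["os_months"], high["event"],
                                   label="Actn-4sv moderate/strong")
kmf_low = KaplanMeierFitter().fit(low["os_months"], low["event"],
                                  label="Actn-4sv faint/weak")

# Log-rank test between the two curves; significance threshold P < 0.05
res = logrank_test(high["os_months"], low["os_months"],
                   event_observed_A=high["event"],
                   event_observed_B=low["event"])
print(f"log-rank p = {res.p_value:.4f}")
```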
Biometric analysis was performed to examine the strength of correlation between the clinical parameters, including age, staging, grading, lymph node metastasis, and survival, and the levels of IHC-based validation of Actn-4sv. Depending on the character of the distributions of the quantitative parameters in each group, the Pearson or Spearman correlation coefficient r with its corresponding p-value was used to analyze the correlations. The character of the distributions of the quantitative parameters was determined using the Shapiro-Wilk test and a normal probability plot.
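The decision rule sketched in this paragraph (test each quantitative parameter for normality, then choose the correlation statistic accordingly) can be expressed compactly with scipy. Note that the original analysis used GraphPad/SAS; the variable names and toy data below are assumptions for illustration:

```python
from scipy import stats

def correlate(x, y, alpha=0.05):
    """Correlate two quantitative parameters. Pearson's r is used when both
    variables pass the Shapiro-Wilk normality test; otherwise Spearman's rank
    correlation is used (appropriate for ordinal scores such as IHC staining
    intensity, grading and staging)."""
    normal = all(stats.shapiro(v)[1] > alpha for v in (x, y))
    if normal:
        r, p = stats.pearsonr(x, y)
        return "Pearson", r, p
    r, p = stats.spearmanr(x, y)
    return "Spearman", r, p

# Hypothetical example: IHC staining-intensity scores vs. tumor grade
intensity = [3, 2, 3, 1, 2, 0, 1, 2, 3, 1]
grade     = [1, 2, 1, 3, 2, 3, 3, 2, 1, 2]
print(correlate(intensity, grade))  # expect a negative correlation
```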
Patient clinico-pathological characteristics
We evaluated pNEN samples from 122 patients together with 14 tissue samples from patients with PDAC, four CP samples, and 10 normal tissue samples as controls. pNEN patient demographics are summarized in Table 1.
Expression of actinin-4 splice variant in pNEN
Tumor-associated antigens that are selectively overexpressed in pNEN cells, particularly at early stages, are ideal target proteins for early detection and tumor monitoring. For this purpose we investigated an isoform of actinin-4, the so-called Actn-4sv, for screening and risk stratification of pNENs. Prior to large-scale experimental analysis, we verified the expression of actinin-4 and its variant Actn-4sv as eligible markers for pNENs. Fourteen human pNEN tissue extracts and corresponding donor pancreatic tissue extracts, 10 PDAC and four CP extracts, as well as two normal pancreatic tissue lysates were tested for actinin-4 and Actn-4sv protein levels. The tissue extracts were subjected to PAGE and subsequently immunoblotted to monitor actinin-4 or its isoform Actn-4sv, the latter through use of a monoclonal antibody (15H2) that reacted specifically with a peptide sequence (DIVGTLRPDEKAIMTYVSC) derived from the variant actinin-4 protein, but not with the corresponding sequence of the ubiquitous protein (DIVNTARPDEKAIMTYVSS). To assess specificity and exclude misinterpretation, we used the splice variant peptide sequence employed for antibody generation as a competitive inhibitor; the western blots then showed no bands, indicating Actn-4sv as the sole antigen. Actn-4sv was detected in three pNENs but not in the corresponding adjacent normal (donor) tissues. Moderate to strong expression of Actn-4sv was detectable in all pNEN tissue extracts but was not detectable in normal pancreatic tissue, CP, or PDAC extracts. Using a pan-actinin-4 antibody, detecting both wild-type actinin-4 and Actn-4sv, positive staining was found in all lysates. Immunoblots for β-actin were used as a loading control.
A representative western blot is presented in Figure 1A. The Actn-4 spliced form was not detectable in CP, PDAC, or normal control samples. In contrast, Actn-4sv was detected as an approximately 105 kDa band in all pNENs, although with different intensities, as shown in Figure 1A-B. The normal pancreatic tissues were negative for Actn-4sv. A semiquantitative analysis applying ImageJ with β-actin as a reference protein revealed Actn-4sv/β-actin ratios ranging between 0.5 and 1.4. Using a pan-specific actinin-4 antibody, which binds to the N-terminal part (MGDYMAQEDDW) of the protein, a 105 kDa band was found in all lysates.
Expression of actinin-4 and the Actn-4 splice variant in cell lines
In parallel to the tissue extracts, cell lysates from AsPC-1, BxPC-3, CFPAC-1, MIA PaCa-2, PANC-1, SU8686, T3M4, and BON-1 were also subjected to PAGE and immunoblotted to monitor actinin-4 and its isoform Actn-4sv. As an internal control, the lung adenocarcinoma cell line A549 transfected with GFP-tagged Actn-4sv was used to confirm the reactivity of the anti-Actn-4sv antibody [18].
The specific reactivity of the anti-Actn-4sv antibody 15H2 is clearly visible as a single band in GFP-Actn-4sv-transfected A549 cells but absent in mock-transfected (GFP) cells, shown in lanes 1 and 2 as quality controls in Figure 2A. Using the pan-actinin-4 antibody 13G9, the protein was detected in all cell lines. However, Actn-4sv was only clearly visible in the pNEN cell line BON-1, as shown in Figure 2B. Nutrient withdrawal ("starvation") for 72 h revealed no significant effect on Actn-4sv expression.
Expression of chromogranin A, synaptophysin, and the actinin-4 splice variant in pNENs determined by IHC
It is well known that protein overexpression of actinin-4 is a prognostic biomarker for invasive PDAC [28]. Here we investigated the protein expression of actinin-4 and its variant Actn-4sv in pNENs and compared it to normal pancreatic tissue using IHC.
IHC revealed staining of normal pancreatic endocrine tissue for CgA and Syn, but no staining for Actn-4sv was detectable, as shown in Figure 3A-C. In contrast to normal tissue, a strong expression of Actn-4sv was observed in pNENs (Figure 3D). After optimization of the staining conditions on the automated slide stainer, a large cohort of 122 pNEN patients was assessed for the expression and distribution of Actn-4sv and compared to the expression of the reference markers CgA and Syn. The results are summarized in Table 2. Positive Actn-4sv staining was obtained for 108 pNEN tissue samples (88.5%). The samples were then divided into two categories according to staining intensity: low intensity, comprising faint to weak staining, and high intensity, comprising moderate to strong staining. A variation in the stained tumor area was also observed and subdivided into five groups, as presented in Table 2.
Correlation between intensity of Actn-4sv staining and grading and staging
Because tumor grade is considered an important prognostic variable for survival, as shown for our analyzed cohort in Figure 4A, we assessed Actn-4sv expression in correlation with pNEN grading (Figure 4B). The tumor grade was available for 120 pNEN patients, distributed as follows: NET G1, n = 50 patients; NET G2, n = 61 patients; NET G3, n = 7; and NEC (G3) large cell type, n = 2 patients. Protein expression (staining intensity) of Actn-4sv correlated negatively with grading (Spearman, r = -0.4990, P < 0.0001).
As the next step, we analyzed the statistical correlations between Actn-4sv expression and staging. Staging according to the UICC 2009 classification was available for 86 patients, and the majority of pNENs were stage IIIB (n = 33) and stage IV (n = 29). A negative correlation between Actn-4sv staining intensity and tumor staging (r = -0.2581, P = 0.0041) was also found. However, there was no correlation between Actn-4sv protein expression and lymph node metastasis. The results are summarized in Table 3.
Survival of patients in regard to intensity of actinin-4 splice variant staining
To estimate the clinical prognostic significance of Actn-4sv expression, Kaplan-Meier survival analysis and a log-rank test were performed. The analysis was performed on survival data available for 122 pNEN patients. A staining intensity cut-off of 50% was selected for evaluating Actn-4sv as a marker of overall survival (OS). For all 122 pNENs analyzed, a statistically significant difference in OS was found. Moderate to strong Actn-4sv staining intensity (≥50%, n = 70) was prognostic for longer OS compared to the 52 patients with faint to weak Actn-4sv staining intensity below 50% (log-rank p = 0.0349), as shown in Figure 5. In the next part of the study, we investigated Actn-4sv expression in relation to hormone-expressing pNENs. As presented in Table 1, the pNEN cohort included several hormone-expressing tumors: 27 insulin-positive, 11 gastrin-positive, 21 glucagon-positive, and 21 somatostatin-positive, of which several were double- or even triple-positive hormone-producing tumors. Figure 6 summarizes the Actn-4sv staining intensity in hormone-expressing pNENs.
To assess the expression of Actn-4sv at the RNA level, we analyzed the utilization of the alternative exon 8' [17] relative to exon 8 incorporated into the Actn-4 mRNA. To this end, we obtained pNEN transcriptomes from TCGA and analyzed a more comprehensive set of in-house pNEN transcriptomes. ACTN4 gene expression varies between samples, and all analyzed samples express the Actn-4 splice variant mRNA (Figure 7). Actn-4sv comprises, on average, 13% of the total ACTN4 mRNA, ranging between 2% (RNA1354) and 34% (TCGA-3A-A9IS).
However, a correlation between Actn-4sv protein expression assessed by IHC (n = 13) and the corresponding Actn-4sv mRNA could not be established, likely because different tissue sections (FFPE vs. frozen) of the tumor were used for IHC and RNA.
Discussion
The establishment of a reliable marker to assess the malignant behavior of pNENs would be very important for optimizing patients' therapy. The neuroendocrine proteins CgA, Syn, and the neural cell adhesion molecule CD56 are the conventional primary diagnostic markers for pNENs. Additionally, molecular markers such as Ki-67, CK19, p27, p21, p53, cyclin D1, Bcl-2, E-cadherin, and vimentin are increasingly used to predict patient outcome. Antibodies have been raised against mutant proteins for immunoassays, and immunohistochemistry has been found to be a fast and efficient method for evaluating molecular aberrations at the protein level [29]. These aberration-specific antibodies bind specifically to the altered regions of proteins encoded by mutated genes or alternatively spliced transcripts but do not bind to the wild-type proteins.
In this study we investigated the expression of the spliced transcript Actn-4sv in pancreatic neuroendocrine neoplasms and compared it to the expression of CgA and Syn, which are standard markers for the pathological diagnosis and immunohistochemical confirmation of pNENs [30]. The Actn-4sv marker was chosen based on previous reports suggesting its prognostic value in other NENs. Here we demonstrated, first by western blotting, the presence of Actn-4sv in all analyzed pNEN tissue lysates. This variant protein was undetectable in extracts derived from PDAC, CP, and pancreatic donor tissues. Subsequently, applying IHC, we analyzed a larger cohort of pNENs and confirmed the expression of Actn-4sv in the resected tissue sections. Our data demonstrate that Actn-4sv is expressed at the protein level in 88.5% of pNENs, suggesting that Actn-4sv may be a useful diagnostic biomarker for pNENs or a complementary marker in cases where CgA and Syn perform poorly. Immunoblotting and IHC are convenient and frequently used methods for assessing protein amount and distribution in tissue lysates and histological sections. Nevertheless, the data must be interpreted with caution, as antigen-antibody complex formation on a membrane or tissue matrix surface may be accompanied by unspecific binding interactions. To exclude misinterpretation, we tested the splice variant peptide sequence used for antibody generation as a competitive inhibitor; these western blot and IHC data revealed Actn-4sv as the sole antigen of immunoreactivity.
The role of actinin-4 in cancer, and particularly its involvement in tumor progression and metastasis, is well documented for many cancers, including colorectal, lung, gastric, and cervical cancer and PDAC, as reviewed in [20]. Overexpression of actinin-4, based on amplification of the ACTN4 gene, has been observed in invasive PDAC, and these patients showed a worse prognosis for overall survival than those with weak actinin-4 expression [31].
The variant transcript Actn-4sv was reported to be expressed under normal conditions only in human testis and in trace amounts in brain tissue, but was not detected in any other normal organs [17]. The current knowledge about the function and role of Actn-4sv in cancer is limited [17,18,21], and there are no reports on the Actn-4sv protein in pancreatic NENs.
Our pNEN transcriptome analysis of the incorporation of exon 8 or the alternative exon 8' into the ACTN4 mRNA, applying the RNAseq datasets from TCGA and those generated in-house, revealed that both forms, the ubiquitous actinin-4 and the splice variant Actn-4sv, are expressed at the mRNA level (the latter ranging between 2-34% relative to exon 8). Interestingly, the analogous analysis of ACTN4 expression in PDAC cell lines showed not only larger transcriptome differences between the analyzed cell lines but also a much lower Actn-4sv expression, ranging between 1 and 5% of total ACTN4 mRNA.

Figure 7. Abundance of Actn-4 and Actn-4sv mRNA variants in pNEN transcriptomes. RNAseq datasets from TCGA (left) and generated in-house were analyzed for incorporation of exon 8 (Actn-4, dark grey) or the alternative exon 8' (Actn-4sv, light grey). All samples analyzed express Actn-4sv at the RNA level. For comparability between samples, expression was normalized as described in the Materials & Methods section. The black horizontal bar denotes those samples for which Actn-4sv IHC was also analyzed in this study.
Here we report for the first time the protein expression of this alternatively spliced actinin-4 in pNENs but not in non-neoplastic pancreatic islets. This is in line with previous reports on Actn-4sv in high-grade neuroendocrine pulmonary tumors (HGNTs) [18]. However, in contrast to HGNTs, where the expression of Actn-4sv was significantly associated with poorer overall survival, the opposite was observed in pNENs. The overall survival of patients with moderate to strong Actn-4sv protein expression was significantly better than that of patients with faint or weak Actn-4sv expression, who showed an unfavorable post-surgical outcome. Our observation that increased Actn-4sv expression is associated with better patient survival is also consistent with our finding that increased Actn-4sv staining correlates with lower tumor grade and stage. We therefore assume that pancreatic neuroendocrine neoplasms have a fundamentally different molecular interaction with Actn-4sv than neuroendocrine tumors of the lung.
Nikopoulos et al. showed that decreased actinin-4 expression favors cell adhesion abnormalities and metastatic ability [32]. Assuming that Actn-4sv, as an isoform of actinin-4, performs similar functions and that stronger expression of Actn-4sv therefore prevents cell adhesion abnormalities, our data demonstrating that increased Actn-4sv expression in pNENs correlates with a better patient outcome are in line with the findings of Nikopoulos et al. Additionally, since actinin-4 can be localized in the cell membrane, cytosol, and nucleus, its subcellular distribution may influence tumor cell properties. In breast tumors and a few cell lines, actinin-4 was detected immunohistochemically in the cytosol and nucleus, and cells with cytoplasmic expression were more motile and metastatic [19]. The subcellular distribution of Actn-4sv likely also affects the motility of pancreatic neuroendocrine tumor cells, because in a few of our pNENs the staining for Actn-4sv was only nuclear or only cytoplasmic. Further research has to be conducted to reveal possible links between intracellular location and patient outcome. It is conceivable that the nuclear or cytoplasmic location of Actn-4sv may correlate with tumor invasion and metastasis. Our preliminary data on the subcellular localization/distribution of Actn-4sv in the context of migration and invasiveness of BON-1 and QGP-1, two cell lines derived from human pNETs, indicate that factors such as hypoxia and nutrient deprivation might influence migratory behavior (results not shown).
It should also be noted that Actn-4sv is not found in serum, so it is not suitable as a serum marker. Given its high sensitivity of 88.5%, it may be useful in the clinical setting to combine this marker with a serum marker such as CRP, which was described as an independent prognostic marker by Wiese et al. [33].
|
v3-fos-license
|
2018-04-03T03:03:38.434Z
|
2016-06-09T00:00:00.000
|
18639739
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://portlandpress.com/biochemsoctrans/article-pdf/44/3/851/755505/bst0440851.pdf",
"pdf_hash": "efb8c7da5369c490e379649620a220fcbcc2e050",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45909",
"s2fieldsofstudy": [
"Chemistry",
"Physics"
],
"sha1": "13dc93b6a31fa05428b4fcbc80047b14faea64d6",
"year": 2016
}
|
pes2o/s2orc
|
Rotating with the brakes on and other unresolved features of the vacuolar ATPase
The rotary ATPase family comprises the ATP synthase (F-ATPase), vacuolar ATPase (V-ATPase) and archaeal ATPase (A-ATPase). These either predominantly utilize a proton gradient for ATP synthesis or use ATP to produce a proton gradient, driving secondary transport and acidifying organelles. With advances in EM has come a significant increase in our understanding of the rotary ATPase family. Following the sub-nm resolution reconstructions of both the F- and V-ATPases, the secondary structure organization of the elusive subunit a has now been resolved, revealing a novel helical arrangement. Despite these significant developments in our understanding of the rotary ATPases, there are still a number of unresolved questions about the mechanism, regulation and overall architecture, which this mini-review aims to highlight and discuss.
Introduction
The rotary ATPase families are membrane-bound molecular motors which act either by hydrolysing ATP to generate a proton gradient or by utilizing an existing proton gradient to energize ATP synthesis. The family consists of the ATP synthase (F-ATPase), vacuolar ATPase (V-ATPase) and archaeal ATPase (A-ATPase). A common feature of the rotary ATPase family is the coupling of a membrane-bound proton-translocating domain (VO) and a soluble, ATP hydrolysing/synthesizing motor (Figure 1). Although the core motors of each member of the family share a high degree of similarity, they possess varying degrees of extra complexity. This is indicated by the number of static peripheral stalks (that make up the 'stator' of the motor) they possess, which may reflect the differing degrees of regulation they each require [1]. The best-studied system, the F-ATPase, is the simplest of the family, possessing only one stator stalk. It primarily uses the proton gradient and subsequent flow of H+ through the Fo domain to generate torque and drive ATP synthesis in the F1 region. Recent EM and crystallography studies have shown that the interface where proton translocation occurs, between the rotating c-ring and the static subunit a, has a novel architecture with two horizontal helices parallel to the plane of the membrane and embedded within it [2-4]. At this interface the proton enters one side of a half-channel within subunit a, is picked up by a conserved glutamate residue on the c-ring and, following a full revolution of the rotor, is then moved into the half-channel on the opposite side. The second member of the ATPase family is the A-ATPase, which differs in organization from the F-type in having two stator stalks rather than one. Each of these stalks is made of just two coiled-coils rather than the multi-polypeptide stator found in the F-ATPase. Some members of the A-ATPase family also function through the use of sodium as a coupling ion [5].
This review focuses on the V-ATPase, which acts via the hydrolysis of ATP to drive proton translocation across a membrane. The V-ATPase has the same overall architecture as the other ATPase family members, consisting of the soluble ATP-hydrolysing domain (V1) and VO (Figure 1). V1 consists of three AB repeats, where ATP hydrolysis takes place; subunits D and F, which make up the central rotor axle; and three stator stalks consisting of subunits E and G. The three stators are linked by a series of 'collar' subunits, which include subunit C, subunit H and the soluble domain of subunit a. The membrane-embedded VO domain consists of the c-ring, made up of a variable number of c subunits, subunit d and the remainder of subunit a. The V-ATPase plays an essential role in several physiological processes, including cell homoeostasis and signalling [6], and is therefore found in all eukaryotic cells [7]. It is thus unsurprising that the complex is implicated in a variety of disease states [8]; for example, its role in bone resorption through acid extrusion means that it is involved in both osteoporosis and osteopetrosis [9]. The V-ATPase has also been linked to cancer invasiveness and has been studied as a target for cancer therapy [10-12].
Recently, the sub-nm resolution EM structures of the V-ATPase from both yeast and the higher eukaryote Manduca sexta [13,14] have provided unprecedented detail of its structure and mechanism. In particular, the structures from yeast show the organization of subunit a in the form of highly tilted horizontal helices, similar to the arrangement recently observed in the F-ATPase from Polytomella sp. [2]. Despite the improved resolution of these structures, the primary structure of subunit a cannot be unambiguously assigned in the map; hence, the connectivity between helices is still uncertain. The yeast V-ATPase has been solved in three distinct rotational states, permitting the conformational changes that accommodate ATP hydrolysis to be observed. Despite the great advances that have been made in our structural and biochemical understanding of the rotary ATPase family, a number of unresolved questions remain, some of which this review aims to address: for example, the manner in which isolated or dissociated domains of the complex are silenced, the effect of the direct linkage between the stator and rotor domains acting as a 'brake', and the role and location of subunits e and Ac45 within the complex.
Mechanism
Through a combination of biochemical analysis, X-ray crystallography and EM, a number of defined catalytic states of the V-ATPase have been trapped which are consistent with the 'Boyer mechanism' [15-17]. Key to the rotation of the central axle is the movement of the lever arms at the base of the AB catalytic dimer, induced through ATP hydrolysis and the subsequent release of ADP and Pi. The central rotor axle is stabilized at the top of V1 by a series of highly conserved loops that display alternating positive and negative charges (a feature conserved in the F- and A-ATPase families), and it is tempting to hypothesize that these act as a frictionless electrostatic bearing through a series of attracting and repelling charges acting on the charged axle [14,18]. To our knowledge there is no report of any mutagenesis of these residues, which could provide new insights into this poorly characterized, highly conserved feature of the rotary ATPase family.
Implicit within the rotary mechanism is the need for the central rotor axle to couple with the c-ring with no interactions with the static stator complex. It is clear that within both the yeast and M. sexta systems this is not the case, with significant contact made between subunits d and C, which would be analogous to applying the brakes to the rotary mechanism (Figure 2) [13,14,19]. The removal of this linkage has been shown to increase the flexibility of the system [20]. Moreover, it was hypothesized that this may act in a ratchet-like mechanism, such that the conformational changes which accommodate ATP binding and hydrolysis not only drive rotation but also release the steric hindrance, allowing forward rotation [20]. In the absence of any ATP hydrolysis, the axle is unable to rotate either forwards or in reverse owing to the steric hindrance of this 'brake-like' mechanism, silencing proton leakage and preventing futile backwards rotation of the rotor.
Analysis of the different catalytic states shows significant movement of the EG stators [13,21], although their fundamental shape is dictated not by the occupancy of the ATP-binding site in the AB domain but by interactions with the different subunits within the collar region [14]. Interestingly, for both the M. sexta and yeast complexes there is an apparent 'resting state' in which a predominant population of the V-ATPase exists (approximately 48% within the yeast system [13]). These states also coincide with the level of interaction between subunit d and subunits C and H. Within yeast, the largest to smallest interfaces have approximately 48%, 36% and 16% of particles associated with them, respectively. This suggests that the interface may provide stability within the complex, although its exact purpose is unknown. It may be that this asymmetry in population is related to regulation: it is known that addition of ATP induces dissociation of the complex [22], so it is possible that the detachment of V1 from VO can only occur in certain rotational states and is thus mediated by the stability of this interface.
Regulation
The ability of the V-ATPase to consume significant cellular resources of ATP [22,23] requires a regulatory mechanism to avoid futile ATP turnover. One proposed mechanism is the dissociation of V1 from VO, as shown in yeast and M. sexta [24,25]. However, recent in vivo experiments have suggested a more subtle rearrangement of V1 relative to VO rather than complete separation [26]. Moreover, evidence for this being a ubiquitous mechanism remains elusive to date, with other species failing to show the same dissociation behaviour, although there is some evidence in mammalian cells for amino acid-modulated reassembly [27]. Furthermore, although the A1 domain from the A-ATPase has been well studied, to our knowledge it has not yet been shown to dissociate in a manner similar to that of the V-ATPase under physiological conditions. In addition to the need for regulation of the mature complex, it is also vital to regulate the isolated hydrolysing domains of the complex along the assembly pathway. This may be regulated in an alternative manner or share a common method of regulation with the dissociated V1/A1. Therefore, it is possible that, although informative, studies carried out on the isolated A1 domain may be more relevant to a stable assembly intermediate than representative of a dissociated A-ATPase complex, whereas V1 can be isolated after controlled dissociation [28].
Within the V-ATPase, subunits C and H have been implicated in the regulatory process, with the former responding to cellular signals (likely phosphorylation [29,30]), triggering dissociation and then later halting rotation of the central axle through steric hindrance [31]. The A-ATPase lacks any homologues of these subunits, and so the triggers for dissociation and/or silencing of ATP turnover are unknown, if indeed present. With regard to ATP silencing, there is evidence that subunit H undergoes an approximately 120° rotation about the EG stator to interact with the central rotor axle [31,32]. The interfaces involved in this interaction are yet to be resolved, with an approximately 25 Å (1 Å = 0.1 nm) EM reconstruction of the isolated V1 domain from M. sexta failing to show any interaction, despite the H subunit being associated with the complex as shown through biochemical and negative-stain analysis [28]. The lack of strong density for subunit H may represent a high degree of flexibility or an artefact caused by the modest resolution. It has also been suggested that subunit F, which sits at the base of the rotor axle, may play a role in silencing ATP hydrolysis in the isolated V1 and A1 domains [33,34]. As the F subunit is common to both the V- and A-ATPase, it is possible that this forms the basis of a common mode of silencing during the process of complex assembly.
The role of subunits e and Ac45
The least characterized subunit within the V-ATPase complex is subunit e. It has been shown to be heavily glycosylated [35] and is required for accurate assembly of the c-ring [36-38]. However, its role, if any, in the V-ATPase mechanism and/or structure is unknown. In yeast, subunit e has been shown to be absent from the complex after purification [39]. For M. sexta, however, the e subunit has been identified as part of the purified complex [35]. Moreover, the potent and selective V-ATPase inhibitor pea albumin toxin 1b (PA1b) was shown to bind subunit e, with the corresponding position in the EM map at the base of the V-ATPase [40]. There is a clear distinction at the base of the V-ATPase between yeast and M. sexta, with the latter, which contains subunit e, having a partially glycosylated protrusion at the base of VO [14] (Figure 3). For the yeast enzyme, which lacks subunit e in the purified complex, no protrusion is seen [13]. Therefore, we believe that subunit e is located at the base of the V-ATPase, although its function and precise location are yet to be resolved. This is important, as subunit e may be used as a target site for selective inhibitors given that it is differentially observed across species. Thus, it would be valuable to study the relationship between organisms possessing the e subunit within the complex and those in which it is more transient, in order to discern the rationale behind the retention, and thus the function, of this mysterious subunit.
Previous studies have suggested that the accessory protein Ac45 is responsible for the density at the base of the V-ATPase, since it is absent from yeast along with the protrusion at the base of Vo [41]. However, despite extensive MS and gel analysis, Ac45 could not be identified within the M. sexta V-ATPase preparation [14], which also contains a similar protrusion at the base of Vo. Therefore, it seems logical to conclude that the density at the base of Vo is linked to subunit e and not Ac45. The role of Ac45 and its association with the V-ATPase is still poorly understood, although there is growing evidence that it plays a role in localization rather than mechanism [42,43].
Conclusions
Although there have been several major recent advances in our understanding of the structure and function of the V-ATPase and of rotary ATPases in general, several questions remain unanswered. Through understanding these systems in more detail, we are beginning to see subtle differences between organisms, such as the presence or absence of subunit e and Ac45, which may call into question the relevance of the model systems currently in use. Mechanistically, several challenges remain, such as the manner of regulation and dissociation within the V-ATPase and how this differs across the ATPase family as a whole, as well as the safeguards in place to prevent backwards rotation.
Additionally, the precise mechanism of proton translocation is still unknown, and higher-resolution structural studies are required to view this process in detail, whereas dynamic structural studies may enable functional and mechanistic detail to be linked to structural changes rather than to the static snapshots currently available.
Skin Manifestations in COVID-19 Patients: Are They Indicators for Disease Severity? A Systematic Review
Introduction: To date, there are several reports on cutaneous manifestations in COVID-19 patients. However, the link between skin manifestations and the severity of the disease remains debatable. We conducted a systematic review to evaluate the temporal relationship between different types of skin lesions and the severity of COVID-19. Methods: A systematic search was conducted for relevant studies published between January and July 2020 using Pubmed/Medline, Embase, and Web of Knowledge. The following keywords were used: "SARS-CoV-2" or "COVID-19" or "new coronavirus" or "Wuhan Coronavirus" or "coronavirus disease 2019" and "skin disease" or "skin manifestation" or "cutaneous manifestation." Results: Out of 381 articles, 47 met the inclusion criteria, and a total of 1,847 patients with confirmed COVID-19 were examined. The overall frequency of cutaneous manifestations in COVID-19 patients was 5.95%. The maculopapular rash was the main reported skin involvement (37.3%) and commonly occurred in middle-aged females with intermediate severity of the disease. Forty-eight percent of the patients had mild, 32% moderate, and 20% severe COVID-19 disease. Mild disease was mainly correlated with chilblain-like and urticaria-like lesions, and patients with vascular lesions experienced more severe disease. Seventy-two percent of patients with chilblain-like lesions improved without any medication. The overall mortality rate was 4.5%. Patients with vascular lesions had the highest mortality rate (18.2%) and patients with urticaria-like lesions the lowest (2.2%). Conclusion: The mere occurrence of skin manifestations in COVID-19 patients is not an indicator of disease severity; the association depends strongly on the type of skin lesion. Chilblain-like and vascular lesions are the ends of a spectrum along which, from chilblain-like to vascular lesions, the severity of the disease increases and the patient's prognosis worsens. Those with vascular lesions should be considered high-priority patients for further medical care.
INTRODUCTION
A viral outbreak caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) emerged from Wuhan, China in late December 2019 (1). The disease was named coronavirus disease 2019 (COVID-19) by the World Health Organization (WHO) and was declared a pandemic on 11 March 2020 (2). One year after the beginning of the pandemic, the full spectrum of COVID-19 presentations and its relationship with disease severity is still unknown. Fever, cough, chills, dyspnea, myalgia, and sore throat are the most common clinical presentations of COVID-19, and as time goes on, various other manifestations have been reported (3). Recently, skin lesions have been described as potential manifestations of COVID-19 (4)(5)(6). The cutaneous changes reported to date include maculopapular rash, vesicular lesions, urticaria-like lesions, and chilblain-like lesions (4)(5)(6)(7)(8). Some of these skin manifestations arise before the signs and symptoms more commonly associated with COVID-19, suggesting that they could be presenting signs of the disease (9). However, the link between skin manifestations and disease severity remains debatable. Due to the great variety of reported dermatologic presentations, as well as the inconsistency of data on the association between skin presentations of COVID-19 and poor outcome, we aimed to conduct a comprehensive systematic review of the clinical and histopathological characteristics of skin manifestations in relation to other features of confirmed COVID-19 patients and to evaluate the temporal relationship between different types of skin lesions and the severity of COVID-19.
METHODS
This review conforms to the "Preferred Reporting Items for Systematic Reviews and Meta-Analyses" (PRISMA) statement (10). Registration: PROSPERO (pending registration ID: 215422).
Search Strategy and Selection Criteria
To investigate the prevalence and characteristics of cutaneous manifestations in COVID-19 patients, a systematic search was conducted for relevant studies published between January and July 2020 using Pubmed, Embase, and Web of Knowledge.
The following search terms were used (designed using MeSH keywords and Emtree terms): "SARS-CoV-2" or "COVID-19" or "new coronavirus" or "Wuhan Coronavirus" or "coronavirus disease 2019" and "skin disease" or "skin manifestation" or "cutaneous manifestation." Studies were included only if they contained data about skin manifestations in patients with confirmed COVID-19. There were no language restrictions; non-English papers were translated with the help of Google Translate. Review articles, duplicate publications, and articles with no relevant data were excluded from the analysis. Two authors independently screened the remaining articles. Finally, the selected data were extracted from the full texts of eligible publications by other investigators of the team.
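For illustration only, the boolean query above can be assembled and executed programmatically. The sketch below is not the authors' tooling; it assumes Biopython's Entrez interface for PubMed (Embase and Web of Knowledge expose no comparable free API and would be searched through their own portals), and the e-mail address is a placeholder required by NCBI.

```python
# A hedged sketch: build the review's boolean query and run it against
# PubMed via Biopython's Entrez E-utilities wrapper.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # placeholder; NCBI requires a contact address

virus_terms = ['"SARS-CoV-2"', '"COVID-19"', '"new coronavirus"',
               '"Wuhan Coronavirus"', '"coronavirus disease 2019"']
skin_terms = ['"skin disease"', '"skin manifestation"', '"cutaneous manifestation"']
query = f"({' OR '.join(virus_terms)}) AND ({' OR '.join(skin_terms)})"

# Restrict to the review's window (January-July 2020) by publication date.
handle = Entrez.esearch(db="pubmed", term=query, datetype="pdat",
                        mindate="2020/01/01", maxdate="2020/07/31", retmax=500)
ids = Entrez.read(handle)["IdList"]  # PubMed IDs to feed into screening
print(len(ids), "records retrieved for title/abstract screening")
```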
Data Extraction
Data about the first author's name, date of publication, country, number of COVID-19 patients, number of cases with skin manifestations, age, gender, location and type of skin manifestations, associated cutaneous symptoms, the onset of skin lesions relative to systemic symptoms, the median duration of the lesions, treatment strategies, and main histological findings of the lesions, as well as comorbidities, associated symptoms, drug history, laboratory findings, and severity and outcome of the patients, were selected for further analysis. All cutaneous presentations related to COVID-19 were categorized into six groups: chilblain-like, vesicular, urticaria-like, maculopapular, vascular, and miscellaneous (lesions that could not be assigned to any of the other groups). Petechiae, purpura, livedo, and necrosis were classified as vascular lesions. Two authors (PJ, BH) independently extracted the data from the selected studies. The data were jointly reconciled, and disagreements were discussed and resolved between review authors (PJ, BH, MJN).
Quality Assessment
The critical appraisal checklist for case reports provided by the Joanna Briggs Institute (JBI) was used to perform a quality assessment of the studies (11).
RESULTS
In the first round of review, 381 articles were selected. After removing the duplicates and studies that did not meet the entry criteria, 88 full texts were selected for further assessment. Of these, only 47 articles had characteristics appropriate for systematic review and were entered into the data extraction (Figure 1). Most of the studies were case reports (47%, N: 22), followed by case series (42.4%, N: 20), retrospective hospital/private sector-based studies (6.4%, N: 3), and cross-sectional studies (4.2%, N: 2). Thirteen articles originated from Italy, 11 from Spain, 10 from France, 5 from the USA, and the others from Belgium, China, Thailand, Kuwait, Indonesia, Russia, Turkey, and Singapore. Information on the 47 analyzed articles can be found in Table 1.
A total of 1,847 patients with confirmed COVID-19 (based on positive RT-PCR or positive antibody tests) were examined in 47 articles, of which 597 patients had different skin manifestations. The overall frequency of cutaneous manifestations in COVID-19 patients was 5.95%.
Characteristics of the Cutaneous Lesions in Confirmed COVID-19 Patients
The maculopapular rash was the main reported skin involvement (37.3%), followed by chilblain-like lesions (18.4%). The prevalence rate of vesicular and urticaria-like lesions was 15% (Table 4).
The mean age of patients with cutaneous manifestations was 53.3 years (range 16 to 92). Chilblain-like lesions were more common in younger patients (mean age: 40.7 years) and vascular lesions in the elderly (mean age: 72.3 years). The prevalence of skin lesions was slightly higher in females than in males (54 vs. 46%). Urticaria-like, chilblain-like, and miscellaneous lesions were more frequent among females (Table 4). Vascular lesions were more frequent in males (61%). The prevalence of vesicular and maculopapular lesions was almost the same in men and women (51 and 49%).
The trunk, lower limbs, and upper limbs were the main involved regions. Chilblain-like and vascular lesions were more common in acral areas and, except for maculopapular lesions, the other types were commonly located on the trunk. Maculopapular lesions were more common on the extremities. Involvement of the palms and soles was rare. Mucous membrane involvement was reported in all types of skin lesions, particularly maculopapular and vascular lesions, but not in chilblain-like lesions (Table 4). Vesicular rashes could have diffuse polymorphic or localized monomorphic patterns (27,37).
In the majority of patients (89.5%), dermatologic manifestations presented after (55%) or at the same time as (34.5%) the onset of systemic symptoms of COVID-19. Urticaria-like lesions usually appeared as a concomitant symptom (47%). In 3.5% of patients, particularly those with chilblain-like lesions, skin manifestations were the only presentation of COVID-19. In 7% of patients, skin manifestations occurred before the systemic symptoms, again particularly in chilblain-like lesions (Table 4).
The median duration of skin lesions was about 9 days, ranging from 1 to 18 days (Table 4). Urticaria-like lesions had the shortest duration (5 days) and chilblain-like lesions the longest (14 days).
No skin biopsy or histological examination of urticaria-like lesions was performed. Therefore, the following results relate to the other types of skin lesions.
Perivascular lymphocytic infiltration, spongiotic and interface dermatitis, and vacuolization or keratinocyte necrosis were the common histologic findings in skin biopsies, except for vesicular lesions, in which an absence of inflammatory infiltrates, an atrophic epidermis, and hyperkeratosis were reported. In almost all types of lesions (except maculopapular and vesicular lesions), thrombotic vasculopathy and red blood cell extravasation were present. Langerhans cell aggregations were seen within the epidermis in maculopapular lesions. Telangiectatic blood vessels were seen within the dermis of vascular and miscellaneous lesions. Virally induced cytopathic alterations were absent according to reports on the miscellaneous category. Striking vascular and dermal deposits of complement factors (C5b-9, C3d, C4d) and IgM were present in four vascular rashes. Some studies performed an RT-PCR test on skin samples of maculopapular and vesicular lesions, and the results were all negative for SARS-CoV-2. More details can be found in Table 2.
Characteristics of the Confirmed COVID-19 Patients With Skin Manifestations
The overall prevalence of comorbidities among patients with skin manifestations was 17.9% (Table 4). Hypertension (39%), diabetes (23%), and dermatologic diseases (20%) were the most frequent comorbidities. Comorbidities were most common among patients with maculopapular lesions (40%). Previous dermatologic illnesses were most common in patients with vesicular lesions (Table 4). Cardiovascular disease, hypertension, and obstructive lung diseases were common comorbidities among patients with vascular lesions (Table 4). Rheumatologic diseases were more frequent in patients with chilblain-like lesions (30%). Diabetes was commonly seen in patients with urticaria-like lesions (46%).
Elevated D-dimer was the main laboratory finding in most of the cases, especially in patients with chilblain-like (100%) and vascular (46%) lesions. Disrupted coagulation parameters (increased PT, INR, and fibrinogen) were reported in patients with chilblain-like and vascular lesions (Tables 3, 4).
Regarding the drug history and medication regimen used for COVID-19, data were available for 389 of the 597 cases, most of which related to maculopapular and urticaria-like lesions. Fifty-two percent of all cases, and 72% of cases with chilblain-like lesions, underwent symptomatic treatment (e.g., with paracetamol) or recovered without any medication. Chloroquine/hydroxychloroquine was the most common medication used (45%). Details are given in Tables 3, 4.
Most patients had mild disease (48%). The majority of patients with chilblain-like lesions had mild disease (82%), and the majority of patients with vascular lesions had severe disease (68%). Most of the patients with maculopapular lesions had moderate disease (43%) (Tables 3, 4).
The overall mortality rate among COVID-19 patients with cutaneous manifestations was 4.5%. Patients with vascular lesions had the highest mortality rate (18.2%) and patients with urticaria-like lesions the lowest (2.2%).
Details indicating characteristics of the lesions and the patients are shown in Tables 2-4.
DISCUSSION
One year after the beginning of the COVID-19 pandemic, the world is still facing a crisis. According to the current literature, more than half of the patients are asymptomatic, leading to uncontrolled transmission of the virus (57-60). Recognizing COVID-19-related cutaneous manifestations may assist clinicians in the early diagnosis of the disease, before the development of respiratory symptoms, and may also be used to identify complications requiring treatment. The current study found that 10.5% of the COVID-19 patients reported skin lesions before the initiation of other symptoms or as their chief complaint. On the other hand, considering cutaneous manifestations is important for making the right diagnosis; Joob et al. reported a COVID-19 patient with petechiae misdiagnosed with dengue fever (13). Our data demonstrated that 34.5% of cutaneous manifestations occurred at the same time as other symptoms, particularly urticaria-like lesions (47%), which may suggest that urticaria-like lesions could be a diagnostic sign for COVID-19. The rest of the skin manifestations appeared later in the course of the disease, and mainly after the initiation of systemic symptoms (55%), in our review. Galván Casas et al. suggested chilblain-like and vesicular lesions as epidemiological markers for the disease (24). However, in our study, vesicular lesions usually (74%) appeared after the systemic symptoms of the disease.
Most of the patients with skin manifestations were middle-aged females, while patients with chilblain-like lesions were younger (mean age: 40.7 years) and patients with vascular lesions older (mean age: 72.3 years). These findings are in line with other studies on chilblain-like lesions (6,19,24,40). Maculopapular lesions were the most common dermatologic presentation of COVID-19 patients and commonly appeared on the extremities. They occurred most often in middle-aged patients and were associated with moderate COVID-19 severity.
The overall mortality rate among the COVID-19 patients with skin presentations was 4.5%, with the lowest mortality rate among patients with urticaria-like lesions (2.2%) and, in contrast, the highest among patients with vascular lesions (18.2%). Previous studies showed a pooled mortality rate of 3.2-6% in patients with COVID-19 (61,62). Thus, the mortality rate of COVID-19 patients with skin manifestations is proportionate to the overall mortality rate of the disease.
Regardless of the type of skin lesions, 80% of COVID-19 patients with cutaneous manifestations experienced mild or moderate, and 20% severe, COVID-19 disease. A previous study from the Chinese Center for Disease Control and Prevention reported that 81% of COVID-19 patients had mild, 14% severe, and 5% critical disease (63). We do not have specific data on patients without skin manifestations, but comparing the COVID-19 severity in patients with skin manifestations with that of COVID-19 patients regardless of their symptoms demonstrates no obvious difference. Future cohort studies are required to compare the disease severity and outcome of COVID-19 patients with and without skin manifestations.
There is a wide range of cutaneous manifestations related to COVID-19, and in terms of age, associated symptoms, comorbidity, medication, severity, and mortality, chilblain-like lesions and vascular lesions are the ends of this spectrum. Chilblain-like, urticaria-like, vesicular, maculopapular, miscellaneous, and vascular lesions are associated, in that order, with increasing COVID-19 severity and worsening prognosis. Vascular lesions were more prevalent in males (61%) than females (39%). Considering the more severe disease and higher mortality rate in patients with vascular lesions, we can conclude that COVID-19 is more severe in males than in females. This finding is compatible with our recent article, in which we assessed the sex-specific risk of mortality in COVID-19 patients (62).
Although the skin presentations of COVID-19 are well described, the pathogenesis of the skin lesions remains unknown. Direct viral invasion of skin cells is one possibility. Angiotensin-converting enzyme 2 (ACE2) is known as the ligand of the SARS-CoV-2 Spike protein for entering human cells (66). There is high expression of ACE2 on keratinocytes and sweat gland cells (67,68). Thus, SARS-CoV-2 can directly infect keratinocytes, resulting in necrosis. This hypothesis is consistent with our histologic findings, which demonstrated epidermal and adnexal necrosis in all skin lesions except vesicular rashes. According to Amatore et al., neither virally induced cytopathic alterations nor intranuclear inclusions were seen in skin biopsies (35). However, SARS-CoV-2 spike and envelope proteins were detected in the endothelial cells of damaged skin in two cases with purpuric rashes (22). RT-PCR for SARS-CoV-2 was performed on skin samples of some patients and was negative in all of them. Since the nasopharyngeal swabs of these patients were simultaneously positive, we assume these may be false-negative results due to a small viral load or technical problems. Further research is urgently needed.
Skin lesions during SARS-CoV-2 infection might be immune-related phenomena. It has been shown that the presence of viral RNA in blood is related to greater severity of infection (69). Viremia is also associated with the levels of cytokines and growth factors in a dose-dependent manner, with markedly higher levels in patients suffering from more severe COVID-19 (69). Recognition of the viral RNA by Toll-like receptors such as TLR7 stimulates intracellular signaling pathways, which in turn enhance cytokine secretion (69).
In a group of patients, at the end of the first week of infection, a sharp increase in inflammatory cytokines such as interleukin (IL)-1, IL-2, IL-7, IL-10, granulocyte colony-stimulating factor (G-CSF), tumor necrosis factor (TNF)-α, and interferon (IFN)-γ occurs. Overactivation of immune responses followed by an increase in pro-inflammatory cytokines may result in a "cytokine storm," an immunopathological condition (69)(70)(71). Increased cytokines gain access to the skin, where they stimulate various cells, including lymphocytes, dendritic cells, macrophages, neutrophils, monocytes, and Langerhans cells, to cause various skin manifestations (22,69). A state of high viremia may be responsible for the vascular lesions in severe COVID-19 patients. We suggest further investigation of viral load levels among patients with vascular lesions compared with other skin manifestations.
The antigen-antibody complex can lead to complement activation and subsequent mast cell degranulation. This mechanism is suggested particularly for the urticaria-like lesions (43).
A low or delayed interferon response may result in uncontrolled viral replication followed by a subsequent cytokine storm, which can lead to severe disease (72). Activation of the host immune system in response to viral antigen deposition may result in vascular damage in COVID-19 infection (73). It seems that a high level of the type 1 interferon response, a critical factor in immunity against viral agents, is associated with chilblain-like lesions and mild disease (15,72,74). Activation and aggregation of cytotoxic CD8+ T cells and B cells also lead to lymphocytic thrombophilic arteritis and destruction of keratinocytes (21,22). Nests of Langerhans cells are seen in most of the COVID-19 skin lesion biopsies and have also been reported in another virus-induced skin dermatitis, pityriasis rosea (75).
Coinfection with other viruses is another potential explanation for COVID-19-related cutaneous manifestations. Some skin lesions in COVID-19 patients are very similar, both clinically and histologically, to rashes induced by other viruses such as parvovirus B19, herpes simplex virus types 1 and 2 (HSV-1, HSV-2), varicella-zoster virus (VZV), and poxviruses. It is probable that, because of attenuation of the immune system, COVID-19 patients are susceptible to coinfection with, or relapse of, other viral exanthems. This hypothesis is strongly suggested for vesicular and some miscellaneous lesions (e.g., erythema multiforme) due to their unique histologic findings compared with other skin lesions of COVID-19 (24,32,37). One study reported four COVID-19 patients presenting diffuse vesicular lesions in which microbiological and serological investigations demonstrated varicella infection (24). Thus, in COVID-19 patients with vesicular lesions, physicians need to investigate possible etiological factors other than SARS-CoV-2.
Coagulopathy and vasculitis are other possible causes of skin lesions during COVID-19. Evidence shows that COVID-19 patients are predisposed to coagulopathy and subsequent thrombotic events (76), which seem to result from inflammatory cytokine release, hypoxia, and other illness-related or therapeutic risk factors (76). Microvascular thrombosis of dermal vessels leads to ischemia or vasculitis, mainly seen in chilblain-like or vascular lesions. Magro et al. focused on the role of complement factor activation, especially the alternative and lectin pathways, and subsequent thrombotic microvascular injuries (22). Evidence for this hypothesis includes elevated levels of CH50, C3, and C4 in blood samples as well as significant vascular deposition of C5b-9, C3d, and C4d in the dermis of skin specimens (22). According to our histologic findings mentioned in the Results, vascular thrombosis was reported in almost all skin biopsies (except vesicular lesions). This finding, together with the increased levels of D-dimer and fibrinogen and the prolonged PT and INR in most patients, favors this hypothesis. Another presentation of coagulopathy in COVID-19 patients is hemorrhagic events and subsequent dermatologic manifestations (petechiae, purpura, and livedo). These manifestations are not specific to SARS-CoV-2; Schneider et al. reported a petechial rash associated with coronavirus NL63 (77,78).
Extremely dilated blood vessels were introduced as a diagnostic histological finding for SARS-CoV-2 by Zengarini et al. (28). There are other reports of vasodilation and telangiectatic vessels in the dermis. On this basis, Magro et al. described a possible pathway in which dysfunction of ACE2 (due to SARS-CoV-2 binding) and a subsequently elevated level of angiotensin II can result in strong activation of endothelial nitric oxide synthase (eNOS) and ensuing vasodilation (22).
Drug-induced eruptions may also occur during COVID-19. COVID-19 patients usually take a set of medications that can potentially cause cutaneous rashes. The current study found that paracetamol, azithromycin, hydroxychloroquine, lopinavir/ritonavir, and remdesivir were the most common medications used for COVID-19 patients. Paracetamol has been reported to cause symmetrical drug-related intertriginous and flexural exanthema (SDRIFE) (16). However, in the study by Mahé et al., the skin lesions disappeared despite continuation of the drug, which is very uncommon in drug reactions (16). Najarian et al. mentioned that their patient's maculopapular lesions could be due to azithromycin use or to a hypersensitivity reaction to azithromycin owing to the concurrent viral infection (20).
Hydroxychloroquine, which was used in 45% of all the cases (see Results), is one of the medications most likely to cause different skin rashes. Acute generalized exanthematous pustulosis (AGEP), erythroderma, urticaria, and erythema multiforme are some of the skin lesions that have been reported in connection with hydroxychloroquine (79)(80)(81).
However, Robustelli et al. mentioned that the skin lesion developed 3 weeks after discontinuation of the drug (42). In conclusion, most of the reviewed articles considered the potential possibility of drug-induced exanthems, but in almost all cases, the dermatologic manifestations preceded the drug intake or the rashes disappeared despite continuation of the drugs (5,7,16,20,23,37,42,43). It is therefore very unlikely that current COVID-19 medications are responsible for the reported skin lesions.
In our study, the prevalence of comorbidities in COVID-19 patients with skin manifestations was about 17.9%, mainly reported in patients with maculopapular lesions. A history of serious comorbidities such as cardiovascular disease, hypertension, and obstructive lung disease was mostly reported in patients with vascular lesions, suggesting that patients with these skin manifestations are more complicated cases who need more attention. Interestingly, immune disorders were more common in patients with chilblain-like lesions. This finding has not been reported before, and we suggest focusing on it because of its possible relationship with the etiology and pathophysiology of these lesions.
Fever, cough, and dyspnea were more frequent in patients with vascular lesions and less frequent in patients with chilblain-like lesions. Also, 17% of patients with chilblain-like lesions were asymptomatic regarding systemic symptoms. Notably, headache, dysosmia/dysgeusia, nasal congestion/coryza, and irritability/confusion were more common in patients with vesicular lesions. This may point to a link between vesicular lesions and neurological manifestations. Future investigations are required to clarify this issue.
LIMITATIONS
Few articles provided complete data on all items, including the disease severity and outcome of COVID-19 patients with dermatologic presentations. Another limitation was the absence of data about COVID-19 patients without skin manifestations. Future cohort studies are required to compare the severity and prognosis of the disease in patients with and without skin manifestations, considering other related characteristics. Such studies would help to better understand the prognostic value of cutaneous manifestations in COVID-19 patients.
CONCLUSIONS
Cutaneous lesions occur most often in middle-aged individuals, at the same time as or after the systemic symptoms of COVID-19. Urticaria-like lesions commonly (47%) occurred at the same time as other symptoms, which may suggest that they are a diagnostic sign for COVID-19. A maculopapular rash is the main reported skin involvement in COVID-19 patients and is associated with intermediate severity of the disease. The mere occurrence of skin manifestations in COVID-19 patients is not an indicator of disease severity; the association depends strongly on the type of skin lesion. Chilblain-like and vascular lesions are the ends of a spectrum along which, from chilblain-like to vascular lesions, the severity of the disease increases and the patient's prognosis worsens. We strongly suggest that emergency and general practitioners evaluate suspected COVID-19 patients for any cutaneous manifestations. Those with vascular lesions should be considered high-priority patients for further medical care.
AUTHOR CONTRIBUTIONS
PJ, MN, and MM designed the study. PJ and BH performed the literature review, collected the data, and wrote the first draft of the manuscript. PJ, HV, and MD helped in manuscript preparation. MM critically reviewed the manuscript.
Pedagogical Leadership within the Framework of Human Talent Management: A Comprehensive Approach from the Perspective of Higher Education in Ecuador
16. Medina I. Pedagogical Leadership and its Contribution to Educational Management. Caribeña de Ciencias Sociales. 2020; 12(1):1-9. (In Span.)
17. Newman S. Reviewing School Leadership: From Psychology to Philosophy. International Journal of Leadership in Education. 2020; 23(6):775-784. (In Eng.) DOI: https://doi.org/10.1080/13603124.2020.1744734
18. Lipscombe K., Grice Ch., Tindall Sh., De-Nobile J. Middle Leading in Australian Schools: Professional Standards, Positions, and Professional Development. School Leadership & Management. 2020; 40(5):406-424. (In Eng.) DOI: https://doi.org/10.1080/13632434.2020.1731685
19. Hernández R., Fernández C., Baptista L. Metodología de la Investigación. Mexico: Mc Graw Hill; 2014. Available at: https://www.uca.ac.cr/wp-content/uploads/2017/10/Investigacion.pdf (accessed 03.05.2020). (In Span.)
20. Tangney S. Student-Centred Learning: A Humanist Perspective. Teaching in Higher Education. 2014; 19(3):266-275. (In Eng.) DOI: https://doi.org/10.1080/13562517.2013.860099
21. Brooman S., Darwent S. Measuring the Beginning: A Quantitative Study of the Transition to Higher Education. Studies in Higher Education. 2014; 39(9):1523-1541. (In Eng.) DOI: https://doi.org/10.1080/030750
Introduction
The higher education system, both nationally and internationally, is facing changes driven by a complexity and versatility that have accelerated transformation processes. These changes are observable globally across different areas. Added to this, the diversification of specialized university demand, complemented by technologies, renews and breaks with traditional paradigms, making higher education a challenging and key institution in assimilating these transformations for future professionals, who will join society with the skills developed during their training [1].
The import of this can be seen in the different global movements of the last two decades to achieve quality, competitiveness, efficiency, productivity, social commitment, progress, and innovation in university education, one of them being the World Declaration on Higher Education in the 21st Century: Vision and Action, a report stating: "... At the dawn of the new century, there is an unprecedented demand in higher education, accompanied by a great diversification of it, and a greater awareness of the fundamental importance that this type of education has for sociocultural and economic development and for the construction of the future, for which new generations must be prepared with new skills and new knowledge and ideals (...)ˮ 1 .
Likewise, in line with the above, there are the reports of the European Higher Education Area: the Sorbonne Declaration 2, whose main objective was the harmonization of the architecture of the European higher education system and its national and international legibility, together with the professional integration of graduates 3; the Bologna Declaration 4, based on qualification cycles and European cooperation to guarantee the professional quality of masters and doctorates and the establishment of a credit system (ECTS); and the incorporation of new elements during the Bologna process to respond to the challenges of economic competitiveness, including networks of organizations for quality assessment, the development of joint degrees, use of the credit system, European mobility, and lifelong learning 5. There is also the Tuning Latin America Project 6, which arises from the European experience, with the union of more than 135 European universities for the creation of the European Higher Education Area mentioned above [2].
In the case of Latin America, it is worth noting the Alfa Tuning project, which began its first phase in the 2004-2008 period with the objective of identifying generic competences for university degrees in Latin America, with their respective specific competences in each thematic area, and of developing a general diagnosis of higher education, among other important activities [3]. Quality education always goes hand in hand with professional skills [4].
From all these movements and projects seeking to unify criteria regarding the quality, innovation, and participation of the higher education system in society, plans directed towards the same objective were derived [5].
It has been a challenge for universities to incorporate these standards into their policies, structures, and actions, due to the complex nature of the identities, culture, and procedures that underpin their raison dʼêtre, even in times of re-evaluation and reform. The figure of the university teacher plays a fundamental role in these processes of change. In this dynamic of innovation and improvement towards quality, promoted worldwide by these movements and projects and adjusted to current times, the teacher is a key factor in higher education: their participation carries an implicit or tacit leadership in the different areas that make up this system, because the role of the teaching professional is unavoidable in the classroom, in research, and in active participation in social and institutional plans and projects [6].
However, the authors of this research, through years of practice in this profession, note that the relationship of leadership with teaching is a thematic field that is little used, little credited, and little associated, even though the protagonist's participation in the processes of higher education cannot be denied [7]. Management, direction, and even administration are spoken of readily in teaching, but the term leadership is rarely used, as it tends to be attributed to managerial positions, power, hierarchy, and influence, mostly to private companies seeking economic benefit or to charismatic people who run institutions or societies by force of personality; very rarely is the teacher placed in the role of leader [8]. For the purposes of this research, the authors assume the concept of leadership expressed by J. Vargas: leadership "is the process aimed at achieving the objectives of the organization through encouragement and assistance to its members. The application of leadership transforms the ideal into realityˮ 7 . In particular, university teachers in Ecuador are not in a position to develop leadership adequate to the new times; the higher education system lacks flexible processes, and higher education is subject to academic controls on the work of research, teaching, and community engagement, which makes for complex management [9].
Literature Review
When referring to teaching leadership, the university professor is described as a figure of influence before students; as someone who promotes the formation of new knowledge through the development of research together with their peers; as someone capable of guiding their students to transform realities by putting into practice what they have learned theoretically, through outreach and extension work; and, in addition, as someone who may hold managerial or coordination positions within the administrative structure of the university institution to which they belong. In any of these roles, the teacher must have the ability to lead; the teacher is a transformative agent and must be recognized as such 8 .
When you exercise effective leadership as a teacher, you contribute not only to the educational organization but also, even more importantly, to the people who are part of the system. Through leadership, you have the opportunity to devote yourself to your profession and make the most of the talents, skills, and training that the educational process helped you develop, both as a student and as a professional 9 .
It is well understood that leadership has been a virtue conferred on managers, politicians, and social figures who have shown charisma and the ability to influence others; it is likewise a widely used term normally attributed to people in charge of an area or department within organizations. However, in the teacher's case, it cannot be forgotten that the majority are charismatic and therefore have a significant influence on their students and on the processes of the higher education system [10].
Types of leadership. It should be noted that there are many types and classifications of leadership, given the importance this topic has acquired for progress, innovation, change, and decision-making in organizations and institutions. However, for the university field, and very particularly to define or classify the role of the teacher as a leader in the system, it is necessary to turn to a classification that can define this role while maintaining the fundamental essence of the teacher as an educator with social responsibility, which is part of the commitment of the university education system according to L. Adie et al. [11]. L. Santamaría and A. Santamaría [12] state that there is little literature available to identify and celebrate the positive attributes of the educational leaders of history, of oppressed groups, and of those who identify with them, or the ways in which these individuals acquired conventional institutional access to create real change; an adequate theory is needed that pushes educational leaders to think about leadership for society and justice through the lens of criticism.
In view of the different roles that teachers play in the exercise of their functions, ranging from giving their classes to research, engagement with society or community, and institutional administrative duties or commitments, leadership fluctuates from one typology or perspective to another in each area; therefore, its identification in the classroom will depend on the teacher's degree of responsibility in each of the specified areas. In this sense, establishing a classification would imply considering the teacher in each of their contexts and attributions. In general terms, however, the goal is to establish the perspective of a professional capable of articulating, conceptualizing, creating, and promoting spaces and possibilities for a critical and effective change of the conditions that inhibit improvement of and for all [13].
For the purposes of this article, the perspective of transformational and transactional educational leadership will be taken into account, because the dynamics of higher education institutions are constantly permeated by internal and external factors that directly influence the work of the teacher [14]. Transformational leadership maximizes team autonomy and leads to potential productivity, mainly because, by performing the work correctly, followers also achieve personal goals [15].
It is common to relate transformational leadership to the successful motivation of followers; however, different studies have shown that followers generally prefer a direct reward for the results obtained, so the predilection points to transactional leadership. This is an important component for followers, yet when they project themselves in the long term, the direct reward recedes into the background and they choose to grow personally, transform their lives, and reach their goals [16].
On the other hand, transformational leadership has a direct effect on the quality of higher education: teachers who practice this style of leadership have the ability to permeate their students with shared knowledge. It has four dimensions: 1. Intellectual stimulation (the ability of a leader to arouse intellectual curiosity among followers; one of the variables that indicates the level of knowledge shared by the members of a team, and therefore an indirect indicator of transformational leadership); 2. Individual consideration (reviewing each situation individually in different circumstances); 3. Inspirational leadership (followers feel valued); 4. Idealized influence (related to charisma, arising from an emotional identification between the group leader and his associates) [17]. Pedagogical leadership. Unlike a company in the traditional economy, educational centres face the characteristic challenge of offering a service that transcends the scenarios of their own value proposition [18]. This is why, when studying the concept of pedagogical leadership, the need arises to address what is implied by the management of work teams in order to achieve the goals that lead an entire educational organisation to materialise its vision. This means that the leader must have a set of skills additional to the daily tasks of a leader working in an ordinary organisation, representing a challenge that requires broad knowledge of the educational environment, educational systems, educational administration, and the work of teaching itself [19].
This document therefore addresses two specific perspectives that allow pedagogical leadership to be identified, according to [20], through which educational management acquires added value in the management of an educational institution: a) pedagogical leadership from the management functions; b) pedagogical leadership and its contribution to the development of human capital.
This leadership allows followers to trust their leaders, so that they show a good level of adherence to their proposals, identifying with them and trusting their orientation. There is also laissez-faire leadership, which is potentially harmful in the formation process of higher education, generating professionals of lower quality due to the absence or indecision of the teacher in decision-making; together with the resulting lack of motivation of the students, this has a direct negative influence on the processes within the higher education system [21; 22].
Taking the above into account, both teachers and university institutions must exercise a pedagogical leadership that allows them to have a vision of joint construction and to move from theory to practice, responsibly assuming the tasks entrusted and necessary for this purpose. They must understand that structures should be flexible towards adaptability and the promotion of change, for the renewal, innovation, and incorporation of updated strategies that allow the preparation of high-level professionals for society, capable of responding effectively to each scenario presented to them. Similarly, following the statements of Organization and Personnel Management regarding the professionalism of the person responsible for human talent in organizations, the characteristics inherent to leadership must address, among other aspects: a sense of justice, that is, the need to inspire confidence; a sense of discretion, that is, the teacher should be more reserved when acting before students; the ability to intelligently defend the interests of the university institution, that is, to hold an objective position while letting the regulations of the institution prevail; and, finally, the ability to act in a balanced way among students to generate a climate appropriate to the demands of the current conditions governing higher education in the world.
Materials and Methods
Research Design. The research had a mixed or multi-method approach, that is, a combination of the quantitative with the qualitative. The research design corresponds to the explanatory-sequential design (DEXPLIS), characterised, according to [19], by a first stage in which quantitative data are collected and analysed, after which qualitative data are collected and evaluated. The mixed study occurs when the initial quantitative results allow continuity in the collection of the qualitative data. It should be noted that the second phase builds on the results of the first.
The authors propose this research scheme as a scientific alternative for complex studies: by integrating quantitative and qualitative methods, it becomes a process that allows deep and substantive knowledge. The research unfolds from a quantitative approach, supported by a set of objectives and questions, towards a qualitative approach, providing in two phases the results that formalize the paradigm of the mixed research method. The first phase is a quantitative study; the second phase is a qualitative study.
Research paradigm. In line with the mixed method of study, the research is ascribed to the interpretative-humanistic paradigm, because through the techniques of survey, documentation, enquiry, and information search, the facts or phenomena occurring in the educational context are explored in depth. This meant the search for and analysis of leadership and the competencies of human talent in Ecuadorian higher education. In this perspective, M. Tamayo and M. Tamayo consider the humanist paradigm as patterns of behaviour and attitudes, knowledge, values, skills, and beliefs 10 . All these are shared by the members of a society, the school and the classroom also being conceived as societies.
Likewise, the multi-method approach of the research admitted the diversity of approaches and theories needed as techniques to address the study and resolve the problematic reality, a product of multiple factors, among which leadership and the management of human talent stand out.
Quantitative research phase. As the first phase of the study, quantitative research was applied. C. Sabino establishes that a quantitative study determines the strength of association or correlation between variables 11 . The generalization and objectification of results through a sample allows inferences to be made about the population from which the sample comes. The quantitative method, as the initial part of the research, referred to the measurement of the study variables through a numerical relationship. After associating the data, questions were raised in a manner determined by the research objectives.
Population and sample. The population, according to [23], is the set of all the elements that share a common group of characteristics; it also forms the universe for the purpose of the research problem. This study therefore takes as its population group higher education teachers in Ecuadorian universities.
For the sample, intentional non-probabilistic selection was applied. According to D. Mendoza et al., the most common element in obtaining a representative sample is random selection, in which each individual in a population has an equal chance of being chosen [24]. Sixteen representative universities in Ecuador were chosen: 8 public and 8 private. The conditions for choosing the teachers were: being a university teacher, having 5 or more years of service, and fulfilling the role of teacher and researcher. In this way, 6 teachers were chosen from 15 universities, for a subtotal of 90; the total sample was 97 teachers.
Research techniques and instruments. A digital survey was used as the data collection technique, since, owing to the quarantine and COVID-19 health prevention measures, the researchers could not approach the respondents in person. A questionnaire was used as the data collection instrument, designed as a Likert-type scale. W. Wiersma and S. Jurs define the Likert scale as a psychometric scale 12 . The use of the Likert scale is common in questionnaires and widely used in research surveys, mainly in the social sciences (see Fig. 1).
The questionnaire had 11 questions, each emphasizing leadership and human talent management. Each question had five answer options, ranging from "I totally agreeˮ with a value of 5, through "I agreeˮ with a value of 4, down to "Totally disagreeˮ with a value of 1 point.
Reliability of the instrument. To establish the reliability and precision of the results, the Cronbach's alpha coefficient recommended by [23] was applied. A diagnostic pilot test was conducted with 5 teachers. The results gave a Cronbach's alpha reliability of 0.862, which is considered a reliable value.
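As an illustration of this reliability check, the sketch below computes Cronbach's alpha for a Likert questionnaire. It is not the study's actual script, and the pilot matrix is invented for demonstration (5 respondents by 11 items, mirroring the pilot's dimensions).

```python
# A minimal sketch of the Cronbach's alpha computation:
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: (n_respondents, k_items) matrix of Likert values (1-5)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical pilot answers (5 teachers, 11 items); internally consistent
# respondents yield a high alpha, as in the reported pilot (0.862).
pilot = np.array([
    [5, 4, 4, 5, 4, 5, 4, 4, 5, 4, 5],
    [4, 4, 3, 4, 4, 4, 3, 4, 4, 3, 4],
    [3, 2, 3, 3, 2, 3, 3, 2, 3, 3, 2],
    [5, 5, 4, 5, 5, 4, 5, 5, 4, 5, 5],
    [2, 2, 1, 2, 2, 2, 1, 2, 2, 2, 1],
])
print(round(cronbach_alpha(pilot), 3))
```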
Technique of analysis of the results. To achieve greater comparative versatility in the results and to visualize the teachers' perspective in terms of the recognition of their leadership within the institution, a descriptive analysis was applied. The variables were leadership and educational management; these were described in the questionnaires and studied on the basis of percentages. The results gave rise to a more in-depth qualitative analysis.
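A possible shape for that descriptive analysis is sketched below, under the assumption that "totally agree"/"agree" (5, 4) and the two disagree options (2, 1) are pooled as in the Results; the responses are simulated, not the study's data.

```python
# A hedged sketch of the per-item descriptive analysis used to report
# pooled agree/disagree percentages for each questionnaire item.
import numpy as np

def agreement_pct(responses: np.ndarray) -> tuple[float, float]:
    """responses: 1-D array of Likert values (1-5) for a single item."""
    agree = np.isin(responses, (4, 5)).mean() * 100     # totally agree + agree
    disagree = np.isin(responses, (1, 2)).mean() * 100  # both disagree options
    return round(agree, 1), round(disagree, 1)

# Simulated answers from 97 teachers to one item (for illustration only)
rng = np.random.default_rng(1)
item = rng.integers(1, 6, size=97)
print(agreement_pct(item))
```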
Qualitative research phase. After verifying and analysing the data obtained quantitatively through the questionnaire, the information was compared using the qualitative approach.
Research design. The design of this phase was documentary in nature; books, publications, and reports at the national and international level were reviewed. The phenomenological study method was used, which according to [24] refers to the study of phenomena as they are experienced, lived, and perceived by the individual. It is focused on the study of experiential realities, making it the most suitable method for studying and understanding the experiential structure of the teacher.
Techniques and analysis of qualitative results. By linking lived experience with the study of leadership, the phenomenological method provides the science of knowledge. In this sense, by grounding the sciences related to the human being in the qualities of human development, natural phenomena are adapted to the educational reality. The facts were analysed and described dispassionately from the perspective of the researchers and contextualized in the daily practice of education, in order to obtain universal meanings and structures. Phenomenology, as a special method of research, applied the technique of interpretative analysis. The interpretive technique offered an explanatory process with the aim of fulfilling the general objective of studying the leadership of teachers in the higher education system, from its recognition in the transforming participation in educational processes.

[Fig. 1. Questionnaire applied to teachers participating in the study. Recovered items: Only the manager, coordinator and supervisor is a leader in this institution; 6. I recognize myself as a leader when I am in the classroom; 7. The leadership that I exercise in the institution is not of great relevance; 8. In this institution the teacher is the protagonist of the process; 9. Leadership is a concept far removed from the educational sphere; 10. In this institution the teacher is not considered a leader; 11. Leadership is only for managers.]
For the specific analysis of the information, the grounded theory procedure was executed, according to which, as W. Wiersma and S. Jurs note, theory emerges grounded in the data 13 . It should be clarified that there is no single procedure for carrying out content analysis in mixed research, and it is not linear: when the study begins, the researchers know its starting point, but the closing path remains subject to change. Due to the sequential type of research, the researchers triangulate and contrast the quantitative and qualitative results, putting imagination and creativity into play.
Results
After the survey was completed, we proceeded to analyze and interpret the results. Below are the highlights of the study, specifically 11 of the 32 questions asked, intended to gauge the perception, commitment, typology, and recognition of the teacher as a leader in the higher education system, taking a university institution in Ecuador as a reference.
The combined chart shows that, on average, 64% of teachers totally agree or agree that only the manager, coordinator, and supervisor is a leader in the university institution under study. On the other hand, 67% totally agree or agree that teaching merits leadership attitudes, and 46% that it is necessary to be a leader to influence the community (see Fig. 2). However, 54% of teachers partially or strongly disagree that teaching is an activity inherent in leadership, and 66% partially or strongly disagree that research is an activity that involves leadership. These findings evidence the separation of some activities exercised by teachers, not all of which are considered attributable to leadership. In the next graph, the results show that 66% of the teachers surveyed agree or totally agree that they recognize themselves as leaders when they are in the classroom, 57% that in the institution the teacher is not assumed to be a leader, and 57% that leadership is only for managers (see Fig. 3). On the other hand, 46% partially or strongly disagree that the leadership they exercise in the institution is not significant, 54% that in the institution the teacher is a protagonist in the processes, and 46% that leadership is a concept far removed from the educational field. This evidences some contradictions with respect to the results of the previous graph, placing the institution as a responsible party in the action towards teacher leadership; another contradiction concerns the teacher's position in the classroom: the teacher feels like a leader but is not recognized as such.
Regarding the concept of leadership, most consider it a concept far removed from education, one more closely associated with the field of management and organizations.
Phenomenologically, the researchers can argue that the leadership of the professor is recognized only when he or she assumes a managerial position. The director or the rector are the usual figures representing school leaders, and training programmes are designed around the characteristics and competencies required to reach those posts [25].
However, it is necessary to understand how a teacher can exercise his or her own leadership, as a teacher. This consideration underlies approaches to instructional leadership, pedagogical leadership and leadership for learning, whether shared or distributed. These different types of leadership coincide in considering the teacher a key instrument for improving teaching and learning, and they hold that one of the main objectives of management work is to achieve teachers' professional development.
The growth of autonomy in universities implies a change in the type of leadership [26]. Educational leaders have a higher level of responsibility and accountability. It is therefore more necessary than ever to create effective leadership structures at universities, capable of promoting and executing university projects efficiently, so that their effects finally reach the fundamental nucleus of all educational action: classroom work with the students. Teachers are a key element in this process, and promoting their development is therefore one of the main tasks of managers.
Discussion and Conclusion
The analysis of the bibliographic review and of the data-collection instrument yields important findings on teachers' position regarding leadership, along with certain inconsistencies arising from related questions that produced opposite results: 54% partially or totally disagree that teaching is an activity inherent to leadership, yet 66% agree or totally agree that they recognize themselves as leaders when they are in the classroom. This inconsistency, among others generated in the course of the investigation, indicates that the conceptualization of leadership is not clear for some teachers, while for others it carries a responsibility they are unwilling to assume, given the level of commitment it demands towards themselves, the educational process and the institution to which they belong. Similarly, a lack of leadership commitment in institutions due to overwork has been reported [27; 28].
In these terms, and supported by the researchers' interpretative analyses, one of the factors influencing this position is that a large proportion of the teachers surveyed serve on a contract basis. This involves a commitment and effort to do their work as well as possible, but not necessarily an involvement with institutional processes, which explains the 54% disagreement that in the institution the teacher is the protagonist in the processes: many of them simply do not identify with those processes. Of course, to reaffirm this observation it would be necessary to examine other related aspects in depth; by way of discussion, it is necessary to confine ourselves to what has been collected on the subject.
On the other hand, 46% disagree that leadership is a concept far removed from the educational field; many recognize that this meaning can be part of the higher education system and that, although the teaching profession is seldom explicitly identified with leadership, some of its characteristics are considered attributable to teachers.
The researchers can identify three types of leadership present in Ecuadorian university education. The first is instructional leadership, which considers the teacher a leader both in the classroom and outside it. It shapes a culture that promotes learning and an organisation at the service of learning. Teachers, considered leaders because they are experts in teaching and learning, develop learning communities through their work; they inspire practices of excellence and participate with commitment in the promotion of the university.
The second type is distributed leadership. The distributive modality responds to a context in which organisational changes, attention to the dynamics of social interaction, the configuration of networks, and the horizontal and vertical flow of knowledge, information and communication converge. The third type is university leadership, which stands out as the culmination of instructional leadership. It proposes a broader vision of the development of learning: an ecological perspective that includes more subjects than students and more issues than student and teaching results, relating learning to organization, professionalism and leadership.
In conclusion, given the participation, commitment and responsibility that currently fall within the university teacher's area of competence, and the educational leadership embodied in the university teaching professional, it is vitally important to move from discourse to action and to active participation in the transformation of realities. From this perspective it is possible to deploy all the competences framed in the university training process, through active participation that reflects the multiple options derived from the teacher's role in the classroom. The teacher is, moreover, a key factor in forming the skills of future professionals, who require, beyond knowledge, skills and abilities, the tools to engage a more complex and global world.
|
v3-fos-license
|
2019-04-20T13:13:23.811Z
|
2008-10-07T00:00:00.000
|
121994876
|
{
"extfieldsofstudy": [
"Physics"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1140/epja/i2008-10659-5.pdf",
"pdf_hash": "8a9597482106c37359eda7283d10ca8cc33938a8",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45912",
"s2fieldsofstudy": [
"Physics"
],
"sha1": "790293930257fe46939340dc4f2ec03ff4d30588",
"year": 2008
}
|
pes2o/s2orc
|
Measurement of the η mass at KLOE
In this report the measurement of the η mass is presented. The analysis has been performed on 450 pb⁻¹ of data collected in the years 2001 and 2002. The measured value is m_η = (547.874 ± 0.007 stat ± 0.029 syst) MeV.
Introduction
In this paper we describe the measurement of the η mass from the KLOE experiment [1] operating at the Frascati φ factory DAφNE.
The value of the η-meson mass has been poorly determined for many years, and today the picture is still not fully clarified. The first measurements date back to about 40 years ago, studying η decays in bubble chamber experiments [2] with a mass resolution of ∼1 MeV; these resulted in mass values clustered around 548.5 MeV. A lower value with better precision was obtained in 1974 by measuring the missing-mass spectrum of π−p → Xn close to threshold, m_η = (547.45 ± 0.25) MeV [3]. This result was confirmed by other experiments studying the production of η at threshold in pd [4] and γp [5] reactions. More recently, the mass was measured precisely by the GEM experiment using the reaction dp → η³He at threshold: m_η = (547.311 ± 0.028 ± 0.032) MeV [6]. Thus, all the experiments at threshold give consistent results.
However, this value of the η mass is highly inconsistent with the one measured by the NA48 experiment studying the decay η → π⁰π⁰π⁰: m_η = (547.843 ± 0.030 stat ± 0.041 syst) MeV [7], the difference being about eight standard deviations. This discrepancy between threshold and decay experiments has been confirmed by the preliminary η mass measurement carried out by the KLOE experiment [8], m_η = (547.822 ± 0.005 stat ± 0.069 syst) MeV. A recent result from the CLEO-c Collaboration gives m_η = (547.785 ± 0.017 ± 0.057) MeV [9], using ψ(2S) → ηJ/ψ decays and combining different η decay modes. In this paper, we report the best measurement of the η mass to date, using the φ(1020) → ηγ decay. This decay chain, assuming the φ(1020)-meson at rest, is a source of monochromatic η-mesons of 362.792 MeV/c momentum, recoiling against a photon of the same momentum. The detection of such a photon signals the presence of an η-meson. Photons from η → γγ cover a flat continuum spectrum, 147 < E_γ < 510 MeV, in the laboratory reference frame. The photon energies are measured in KLOE, but for 3γ events the accuracy ultimately comes from the precise measurement of the photon emission angles. Together with the stability of the continuously calibrated detector and the very large sample of η-mesons collected, this has allowed us to obtain a very accurate measurement of the η mass [10].
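The quoted recoil momentum follows from two-body kinematics: for a φ at rest decaying to η + γ, p = (m_φ² − m_η²)/(2m_φ). A minimal sketch of this arithmetic (the mass inputs below are illustrative, not taken from this paper):

```python
# Two-body decay kinematics for phi(1020) -> eta gamma with the phi at rest.
# For M -> m + (massless photon), each daughter carries momentum
#   p = (M^2 - m^2) / (2 M).
# Mass inputs are illustrative; small changes in m_eta shift p slightly.

M_PHI = 1019.461  # MeV, phi(1020) mass
M_ETA = 547.862   # MeV, eta mass close to the KLOE value quoted above

def two_body_momentum(M: float, m: float) -> float:
    """Momentum of each daughter when M decays at rest to m + photon."""
    return (M**2 - m**2) / (2.0 * M)

p = two_body_momentum(M_PHI, M_ETA)
print(f"eta recoil momentum: {p:.3f} MeV/c")
# ~362.5 MeV/c; the 362.792 MeV/c quoted in the text corresponds to a
# slightly lower eta-mass input, near the earlier threshold-type values.
```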
Events are selected by requiring at least three energy clusters in the barrel calorimeter with polar angle 50° < θ_γ < 130°. With r the distance between a photon cluster position and the interaction point, the time of the cluster must satisfy |t − r/c| < 3σ_t, with σ_t the calorimeter time resolution, parametrized as σ_t = 54 ps/√(E[GeV]) ⊕ 140 ps (the two terms added in quadrature). A kinematic fit imposing energy-momentum conservation is performed. The kinematic fit uses the value of the total energy, the φ(1020) transverse momentum and the average position of the beam-beam interaction point; these values are determined with good precision run by run by analyzing e+e− → e+e− elastic scattering events. The energy resolution is greatly improved by the good angular resolution of the calorimeter. Moreover, a cut on the χ² of the kinematic fit is imposed in order to reject background events from >3γ final states: events with χ² < 35 are retained. Figure 1 shows the population of the (m²_γ2γ3, m²_γ1γ2) Dalitz plot, with the energies ordered as E_γ1 < E_γ2 < E_γ3. The m²_γ1γ2 ≃ m²_π0, m²_γ1γ2 ≃ m²_η and m²_γ1γ3 ≃ m²_η bands are clearly visible. We apply the cut m²_γ1γ2 + m²_γ2γ3 ≤ 0.73 GeV², the "background-rejection cut" in the following, shown by the line in fig. 1. Events below the line are retained for the analysis. The resulting m_γ1γ2 distribution, for a data subsample, is shown in fig. 1, right-top panel. The background under the η peak is very small and flat; therefore the m(γ1γ2) distribution in the 542.5 to 552.5 MeV interval is well fitted with a single Gaussian of σ = 2.0 MeV, neglecting the background contribution (fig. 1). The result of the fit is m_η = 547.777 ± 0.016 MeV with χ²/n.d.f. = 168/161. The Gaussian width is dominated by the experimental resolution, as the decay width of the η, 1.30 ± 0.08 keV [11], is well below the detector resolution.
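For illustration, a minimal sketch of the photon-timing selection just described, reading the resolution parametrization as a quadrature sum (an assumption about the garbled original formula); the numerical parameters are those quoted in the text:

```python
import numpy as np

def sigma_t_ps(E_GeV: float) -> float:
    """Calorimeter time resolution in ps: 54 ps / sqrt(E[GeV]) (+) 140 ps,
    with the two terms combined in quadrature (assumed reading)."""
    return float(np.sqrt((54.0**2) / E_GeV + 140.0**2))

C_MM_PER_PS = 0.299792458  # speed of light in mm/ps

def is_prompt_photon(t_ps: float, r_mm: float, E_GeV: float) -> bool:
    """Keep clusters whose time is consistent, within 3 sigma_t, with a
    photon travelling straight from the interaction point."""
    return abs(t_ps - r_mm / C_MM_PER_PS) < 3.0 * sigma_t_ps(E_GeV)

# Example: a 300 MeV cluster 2 m from the IP, arriving 6.70 ns after t0.
print(sigma_t_ps(0.3))                        # ~171 ps
print(is_prompt_photon(6700.0, 2000.0, 0.3))  # r/c ~ 6671 ps -> True
```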
Systematic uncertainties have been determined studying the effects of the detector response, alignment, event selection cuts, kinematic fit and beam energy calibration. The values of the systematic errors are shown in table 1.
The systematic uncertainties have been evaluated using several DATA control samples in order to estimate the errors on the reconstructed quantities: photon entry points in the calorimeter, beam interaction point position and photon energies. A sample of e+e− → π+π−γ events has been used to estimate biases in the interaction point determination, by comparing the π+π− vertex to the reconstructed vertex from Bhabha events. The deviation from linearity and the calibration were checked by comparing the photon energy reconstructed from the missing energy of the π+π− tracks with the cluster energy. The energy scale was found to be correct at the 1% level and the linearity was better than 2%. A miscalibration at the level of the estimated uncertainty on both vertex and energy was applied event by event; the η mass was then recomputed, and the spread observed in the mass measurement is used as the systematic error. The systematics due to the inhomogeneous response of the calorimeter over the 4π solid angle have been determined by dividing the data sample into subsamples with different photon solid angles. No systematic behaviour has been observed, and the rms of the points has been used as the systematic error. The error coming from the background-rejection cut was obtained by varying the slope and the intercept of the linear cut in the plot; the rms of the measurements obtained was used as the systematic error. The initial-state radiation in the e+e− → φ process affects the available center-of-mass energy in the decay φ → ηγ. The correction due to this effect was estimated by MC. The corresponding systematic error has been computed by evaluating the η mass as a function of √s and comparing DATA with MC; the rms of the DATA-MC difference has been used as the systematic error.
[Displaced figure caption: The continuous line and the χ² contributions are obtained according to the procedure described in ref. [11].]
Our η mass measurement is the most precise result to date and is in good agreement with the recent measurements based on η decays shown in fig. 2. Averaging these measurements, we obtain m_η = 547.851 ± 0.025 MeV, which differs by ∼10σ from the average of the measurements done studying the production of the η-meson at threshold in nuclear reactions. In table 2 we show all η mass measurements starting from 1974.
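The averaging step can be reproduced with an inverse-variance weighted mean. A sketch using the three decay-based results quoted in this report (which results actually enter the published average is not spelled out above, so this is illustrative only):

```python
import numpy as np

# Inverse-variance weighted average of the decay-based eta-mass results
# quoted in the text, with stat and syst errors combined in quadrature
# for each measurement.

results = {  # name: (mass [MeV], stat [MeV], syst [MeV])
    "NA48":        (547.843, 0.030, 0.041),
    "CLEO-c":      (547.785, 0.017, 0.057),
    "KLOE (this)": (547.874, 0.007, 0.029),
}

masses = np.array([m for m, _, _ in results.values()])
errors = np.array([np.hypot(s1, s2) for _, s1, s2 in results.values()])
weights = 1.0 / errors**2

mean = np.sum(weights * masses) / np.sum(weights)
err = 1.0 / np.sqrt(np.sum(weights))
print(f"m_eta = {mean:.3f} +/- {err:.3f} MeV")
# ~547.853 +/- 0.024, close to the 547.851 +/- 0.025 MeV quoted above.
```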
|
v3-fos-license
|
2023-01-24T16:37:49.702Z
|
2023-01-19T00:00:00.000
|
256187698
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2223-7747/12/3/475/pdf?version=1674128636",
"pdf_hash": "7f5ddcdaed93d8dab3534b8c6ace3b7e90d10237",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45913",
"s2fieldsofstudy": [
"Biology",
"Agricultural And Food Sciences"
],
"sha1": "0aa0f136a6daf136cfc33f84f73485f25e67af53",
"year": 2023
}
|
pes2o/s2orc
|
Quali–Quantitative Fingerprinting of the Fruit Extract of Uapaca bojeri Bail. (Euphorbiaceae) and Its Antioxidant, Analgesic, Anti-Inflammatory, and Antihyperglycemic Effects: An Example of Biodiversity Conservation and Sustainable Use of Natural Resources in Madagascar
Antioxidants are important supplements for the human body given their role in the maintenance of homeostasis. Tapia fruits (Uapaca bojeri) are used by the riverain population of the Tapia forests in Madagascar as complementary foods. This study aims to quantify the main antioxidants in U. bojeri fruits and to verify their contribution to the fruits' anti-inflammatory and antihyperglycemic effects. Standard phytochemical screening was used for qualitative analysis, while spectrophotometric (TPC, TAC, and TFC) and chromatographic (HPLC) analyses were used to quantify several phytochemicals in U. bojeri fruits. The antioxidant activity was evaluated using DPPH and FRAP assays. The writhing test was used for the analgesic effects, carrageenan-induced paw edema for the anti-inflammatory activity, and the OGTT for the antihyperglycemic property of the MEUB in mice. Several phytocompounds were detected and quantified in the fruits, including succinic acid (67.73%) as the main quantified compound. The fruits exerted a good antioxidant capacity and showed analgesic, anti-inflammatory, and antihyperglycemic activities in mice. Isolation of the bioactive compounds should be carried out to confirm these pharmacological properties and to develop health-promoting food products or medicinal applications derived from this species.
Introduction
Antioxidant compounds play important roles in human life in the maintenance of homeostasis. The ingestion of exogenous antioxidants could help the organism to stabilize reactive oxygen species (ROS), which are harmful to cells and tissues [1,2]. An imbalance between ROS and natural antioxidants is the main cause of oxidative stress; in this case, ROS, mainly produced by the mitochondria, damage other cells and tissues [3,4]. ROS are often free-radical atoms, molecules, or ions with unpaired electrons that are extremely unstable and prone to chemical reactions with other molecules. Oxidative stress impairs several functions of endothelial, vascular smooth muscle, and adventitial cells involved in several cardiovascular disorders, including hypertension, atherosclerosis, hypercholesterolemia, and diabetes [5,6]. Moreover, during their respiration, leukocytes and activated macrophages produce free radicals, which damage epithelial and stromal cells, leading to carcinogenesis by modifying targets and pathways essential for normal tissue homeostasis [7]. Antioxidant supplements are necessary in the case of pathologic excessive production of ROS. These compounds are often molecules possessing labile hydrogens that may interact with ROS and produce stable radicals (e.g., phenolic and organic acid compounds). They can be extracted from edible plants, including the fruits and vegetables used in diets [8][9][10]. Moreover, alternative treatment with fruits has attracted growing interest for a broad spectrum of diseases, including diabetes, obesity, neurodegenerative, gastric, inflammatory and cardiovascular disorders, and certain types of cancers [11][12][13][14][15].
Tapia, also known by its botanical name Uapaca bojeri Bail. (Euphorbiaceae), is an endemic tree species from the Tapia forests of Madagascar (Figure 1), localized in the Imamo massif near Arivonimamo and Miarinarivo, the Hill of Tapia in the region of Manandona between Antsirabe and Ambositra, the Itremo massif west of Ambatofinandrahana, and the Isalo massif near Ranohira [16]. The Tapia fruit is a drupe with a fleshy, sweet, sticky mesocarp and a woody endocarp protecting the three seeds at maturity. The Tapia produces large quantities of small, juicy, oval, edible fruits, called Voapaka or Voatapia in the Malagasy language. Its flowering period is between March and September, while its fruiting period is from mid-September to early December [17]. This species is known as a medicinal plant and possesses several health-promoting properties. The local population uses this plant in the treatment of diabetes, infectious diseases, and hypertension [18], and the ripe fruits are used as a complementary food and source of income for the riverain population during the fructification period. Recently, several compounds have been quantified from the U. bojeri leaves and stems, including carotenoids, organic acids, and phenolics, which contribute to the antioxidant, antidiabetic, and anti-inflammatory properties of this species [19]. Leaves, stems, and fruits are the main storage sites of secondary metabolites in plants. Nowadays, interest in research on natural treatments is growing due to the side effects caused by long-term treatment with synthetic drugs [20][21][22][23]. As discussed in the study of Razafindrakoto et al. [19], the quantified antioxidant secondary metabolites in the methanol extracts of the U. bojeri leaves and stems contribute to the antidiabetic, antalgic, and anti-inflammatory activities of this species. However, few studies are reported in the literature on the phytochemical and biological characterization of U. bojeri fruits. For this reason, the main aim of this study was to identify and quantify the main bioactive compounds acting as antioxidants in the methanol extract of the U. bojeri fruits, to verify their contribution to the enhancement of their potential therapeutic effects toward inflammatory diseases and diabetes.
Phytochemicals and Antioxidant Properties
The methanol maceration produced an extract of U. bojeri fruits (MEUB) with a yield of 12.45% relative to the fresh powder. The classes of phytochemicals detected in the fruits of U. bojeri are reported in Table 1. Most of the main antioxidant phytochemicals, including flavonoids, anthocyanins, phenolics, and tannins, were identified in the fruits. These phytocompounds were quantitatively determined as total flavonoid content (TFC), total phenolic content (TPC), and total anthocyanin content (TAC), with values of 118.05 ± 31.05 mg QE/100 g fresh weight (FW), 883.60 ± 186.62 mg GAE/100 g FW, and 423.51 ± 14.89 mg C3GE/100 g FW, respectively (Table 2). The amounts of the main bioactive compounds are reported in Table 3. Among the 28 biomarkers considered in this study, 14 molecules were detected and quantified; succinic acid was the main compound, at 67.73% of the total bioactive compound content (TBCC), which is the sum of all the quantified compounds. Organic acids were the main class of detected phytocompounds, at 79.56% of the TBCC, in particular succinic, quinic, and oxalic acids. Succinic acid was also the main compound quantified in the leaves and the stems, with values of 533.74 ± 340.08 and 1275.65 ± 434.99 mg/100 g dry weight, respectively [19], showing that this plant may be considered a good source of succinic acid given the growing interest in pharmaceutical, agricultural, food, and chemical industry applications [24]. Cinnamic acids were represented by caffeic and chlorogenic acids (0.04% and 1.09%, respectively); moreover, four flavonols were detected, including hyperoside (0.03%), isoquercitrin (0.05%), quercetin (0.73%), and quercitrin (3.57%). Ellagic acid (8.34%) represented the class of benzoic acids, and epicatechin (0.84%) the class of catechins, while castalagin (4.20%) and vescalagin (0.94%) were detected in the class of tannins. Vitamin C (0.60%) was also identified and quantified in the samples. Most of these quantified compounds bear labile hydrogens and could stabilize ROS. The antioxidant capacity of the U. bojeri fruits was evaluated using the free-radical DPPH assay on the MEUB and confirmed with the FRAP assay. The DPPH assay is based on the ability of the compounds in the extracts to donate hydrogen to the DPPH radical (used as the ROS model) and form stable DPPH-H and stable radicals, while the FRAP assay is based on the capacity of the compounds to give electrons to ferric ions and form ferrous ions and stable radicals [25]. The results are reported in Table 2. The fruit extract inhibited the DPPH radical with an IC50 of 301.85 ± 3.62 µg/mL relative to gallic acid (p < 0.05), a higher (i.e., weaker) value than those of the leaf (47.36 ± 3.00 µg/mL; p < 0.05) and stem (33.32 ± 0.69 µg/mL; p < 0.05) extracts [19].
The FRAP results showed that the fruits were two-fold less active than the leaves and stems. Indeed, their capacity to reduce Fe3+-TPTZ into Fe2+-TPTZ (39.72 ± 0.34 mmol Fe2+/kg FW) is two-fold lower than that of the leaves (69.20 ± 1.41 mmol Fe2+/kg DW [19]) or stems (70.17 ± 9.53 mmol Fe2+/kg DW [19]). This may be due to the TPC content: indeed, fruits contain less TPC than leaves (3624.72 ± 268.07 mg GAE/100 g FW) or stems (5854.17 ± 1247.67 mg GAE/100 g FW). Moreover, several studies have shown the relationship between phenolic, organic acid, flavonoid, and vitamin content and antioxidant capacity, which could influence the AOC values in several plant materials [16,19,26,27].
Analgesic Effect of MEUB
The analgesic activity of MEUB was evaluated using the writhing method, one of the most common peripheral analgesic animal models for the screening of analgesic drugs [28]. The number of writhes and the percentage of inhibition after treatment with MEUB are reported in Table 4. The intraperitoneal injection of 1% acetic acid solution into the control group caused 24.6 ± 4.3 writhes over the 25 min observation interval starting at the 5th min after acetic acid induction. The treatment with MEUB at each dose significantly decreased the number of writhes (p < 0.01) in a dose-dependent manner (p < 0.001 with ANOVA). Paracetamol at the dose of 100 mg/kg inhibited the pain caused by the acetic acid induction by 82.11%, showing the effectiveness of the protocol. MEUB at the doses of 200 and 400 mg/kg exerted analgesic activity similar to that of paracetamol at 100 mg/kg (p > 0.05). The peripheral analgesic effect may be mediated by the inhibition of cyclo-oxygenases and/or lipoxygenases (and other inflammatory mediators), while the central analgesic action may be mediated by the inhibition of central pain receptors. Several classes of phytocompounds identified in this species exert these properties (e.g., steroids, flavonoids such as quercetin and quercitrin, and tannins such as castalagin and vescalagin) [29][30][31][32][33], showing their contribution to the pain-reducing effect of the MEUB. The pain inhibition effect of the ripe fruit is better than that of the leaves (49.40% vs. 70.73% at the dose of 400 mg/kg) and stems (57.43% vs. 70.73% at the dose of 400 mg/kg), showing the importance of the other compounds detected in the fruits (e.g., steroids) [19].
Anti-Inflammatory Effect of MEUB
The anti-inflammatory effect of the MEUB was evaluated using carrageenan-induced mouse paw edema, one of the most widely used in vivo protocols for the evaluation of the anti-inflammatory activity of natural products [33]. The injection of carrageenan (2%) caused inflammation of the mouse paw that was progressively reduced after the treatment. The inflammatory response develops in two phases; the first phase occurred between 0 and the 120th min post carrageenan injection, due to the release of histamine, serotonin, and bradykinin. These mediators increased vascular permeability in the surrounding damaged tissues [34]. The second phase occurred after the 120th min and coincided with the biosynthesis of prostaglandins and the infiltration of neutrophils [35]. The treatment with MEUB at the doses of 100, 200, and 400 mg/kg significantly reduced the inflammation at 60, 120, 180, and 240 min (p < 0.05, p < 0.01, and p < 0.001; Table 5) compared with the negative control. The effect of MEUB was dose-dependent after 180 and 240 min (p < 0.001, respectively). The effect of the MEUB was comparable to that of indomethacin at the dose of 10 mg/kg at the 120th, 180th, and 240th min post carrageenan injection (p > 0.05), showing that the MEUB was more effective during the second phase (although the inhibition of the first phase should not go unnoticed). These observations show that the secondary metabolites in the MEUB could antagonize the release and/or the activity of the chemical mediators during both phases. Compared to the leaf and stem extracts, as reported by Razafindrakoto et al. [19], the MEUB was more effective at higher doses (200 and 400 mg/kg) after the 60th, 120th, and 180th min post carrageenan injection. This may be due to the high proportion of succinic acid in the fruit, 67.73% vs. 32.68% in the leaves and 41.64% in the stems. Succinic acid and its derivatives are reported as potent anti-inflammatory agents, increasing TNF-α secretion and suppressing IL-6 production in LPS-stimulated cells [36]. Moreover, Chang et al. [37] reported the contribution of vescalagin as an antioxidant to its anti-inflammatory properties; Al-Sayed and Abdel-Daim [31] likewise demonstrated the antioxidant and anti-inflammatory activity of epicatechin.
Antihyperglycemic Activity
The effects of MEUB on the blood glucose level were evaluated using an oral glucose tolerance test (OGTT) in mice; the results are reported in Table 6. As shown in Table 6, the glycemia increased from the administration of glucose (t = 0 min) to the 30th min and then gradually decreased from the 60th min for each treatment, showing that the body releases insulin to regulate the glucose level and maintain the homeostatic state, and demonstrating the effectiveness of the protocol. The glycemia decreased dose-dependently after the 60th, 90th, 120th, and 150th min (p < 0.01) of glucose loading, and already 30 min after the glucose administration, mice fed MEUB at the 200 and 400 mg/kg doses showed lower glucose values than the negative control. Compared to GBC, after the 60th min of glucose loading, MEUB at the dose of 200 mg/kg showed a slightly lower effect in terms of glycemia variation, but the glycemia values did not show any statistical difference compared to GBC; moreover, at the dose of 400 mg/kg, the glycemia variation was higher for MEUB than for GBC, but the glucose level did not show any statistical difference. Benzoic acid-related molecules, castalagin, vescalagin, and other antioxidant compounds enhance insulin effects and reduce insulin resistance [37,38], and the improvement of hepatic, cardiovascular, and metabolic parameters in a rat model of human disease [39] could be at the origin of these glycemia-lowering effects. Additionally, epicatechin, a potent antioxidant that lowers blood pressure [40], plays an important insulin-like role, contributing to the antihyperglycemic activity of the U. bojeri fruits [41].
[Table 6 note: Upper values are expressed as mean ± SEM of glycemia (n = 5); lower values are the percentage of glycemia variation relative to the glycemia variation at the 30th min. MEUB: methanol extract of U. bojeri; GBC: glibenclamide; a: p < 0.001 vs. negative control; b: p < 0.01 vs. negative control; c: p < 0.05 vs. negative control; d: p < 0.05 vs. GBC. Student's t-test and ANOVA, followed by Tukey post hoc test, were performed.]
Plant Materials
The ripe fruits of U. bojeri were collected in November 2019 in the Imamo forests in the district of Miarinarivo, about 70 km from Antananarivo. A specimen was identified by Mr. Benja Rakotonirina, the botanist of the Institut Malgache de Recherches Appliquées (IMRA), and the voucher specimen was compared to the previous reference TN-021/LPA at the IMRA Botanical Department. The ripe fruits were stored in a cool and airy place away from sunlight before being ground.
Animals
Swiss albino male and female mice (weight: 25 ± 5 g; age: 4-5 months), kept under controlled conditions (12 h dark and 12 h light cycle, 25 ± 2 • C temperature, and 50 ± 10% humidity) at the IMRA animal house, were used. The animals received standard food pellets (1420, Livestock Feed Ltd., Port Louis, Mauritius), and they remained fasting for one night before the experiment. All experiments were carried out following the DIRECTIVE 2010/63/EU and approved by the local ethic committee (number: 06/CEA-IMRA/2020).
Chemicals and Reagents
The chemicals and reagents used during the study are reported in Supplementary Material.
Qualitative Analysis
The classes of secondary metabolites were detected by the traditional methods for phytochemical screening described in the work of Tombozara et al. [42]. Their principle is based on the formation of colored soluble or precipitated compounds by the specific reagent used. The protocol described by Slinkard and Singleton [43] was used to evaluate the TPC in triplicate using the Folin-Ciocalteu reagent. TAC was determined in triplicate using the pH differential method described in the protocol of Lee et al. [44]. The method is based on the color change of the monomeric anthocyanins from the colored oxonium form at pH 1.0 to the colorless hemiketal form at pH 4.5. The aluminum chloride (AlCl3) method was used for the determination of the TFC in triplicate, according to Matic et al. [45]. The details are reported in the Supplementary Material.
HPLC Analysis
The samples for the HPLC analysis of phytoconstituents were prepared in triplicate according to the method described by Razafindrakoto et al. [19]; the extraction protocol is reported in the Supplementary Material. Twenty-eight standards (Table 3) were selected and manually injected (20 µL) in triplicate for their quantification in this study. The quantitation of these compounds was performed using an Agilent 1200 High-Performance Liquid Chromatograph coupled to an Agilent UV-Vis diode array detector (Agilent Technologies, Santa Clara, CA, USA) according to the protocol described by Razafindrakoto et al. [19].
Antioxidant Activity Evaluation
The antioxidant activity of the methanolic extract of U. bojeri (MEUB) was determined using the free-radical DPPH assay along with the FRAP assay, which is based on the capacity of the sample to reduce ferric ions (Fe3+) to ferrous ions (Fe2+) in the 2,4,6-tripyridyl-s-triazine (TPTZ) complex [46]. Both protocols have been described by Razafindrakoto et al. [19] and are detailed in the Supplementary Material.
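For illustration only, a minimal sketch of how an IC50 such as the one quoted for the MEUB can be read off a DPPH dose-response curve by interpolation; the data points below are hypothetical stand-ins, not the study's measurements:

```python
import numpy as np

# Linear interpolation of the concentration giving 50% DPPH inhibition,
# between the two tested concentrations that bracket 50%. Real protocols
# often fit a full dose-response model instead; this is the simplest case.

conc = np.array([50.0, 100.0, 200.0, 400.0, 800.0])        # ug/mL (hypothetical)
inhibition = np.array([12.0, 24.0, 38.0, 61.0, 82.0])      # % inhibition (hypothetical)

def ic50(conc: np.ndarray, inh: np.ndarray) -> float:
    """Concentration at 50% inhibition; assumes inh is monotonically increasing."""
    return float(np.interp(50.0, inh, conc))

print(f"IC50 ~ {ic50(conc, inhibition):.1f} ug/mL")  # ~304 ug/mL for this toy data
```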
Acetic-Acid-Induced Writhing Test
The protocol of Olajide et al. [47], slightly modified by Razafindrakoto et al. [19], was used to determine the analgesic property of MEUB (Supplementary Material).
Carrageenan-Induced Paw Oedema Test
In vivo anti-inflammatory activity was evaluated based on the inhibition of carrageenan-induced mouse hind paw edema using a plethysmometer, as previously described by Buisseret et al. [48] and slightly modified by Razafindrakoto et al. [49] (Supplementary Material).
Oral Glucose Tolerance Test (OGTT)
The OGTT, described by Tombozara et al. [16] and applied with slight modifications, was used to determine the antihyperglycemic property of MEUB (Supplementary Material).
Statistical Analysis
The results were expressed as mean ± standard error of the mean (S.E.M.), and the data were statistically analyzed using Student's t-test and one-way analysis of variance (ANOVA) followed by the Tukey HSD multiple range test, using SPSS 20.0 software. All differences with p < 0.05 were accepted as statistically significant.
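A minimal sketch of the stated analysis pipeline (one-way ANOVA followed by Tukey's HSD) reproduced in Python on hypothetical three-group data; the actual analysis was run in SPSS 20.0:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical writhing counts for a control group and two treatment doses.
rng = np.random.default_rng(1)
control = rng.normal(24.6, 4.3, 10)
dose200 = rng.normal(12.0, 3.5, 10)
dose400 = rng.normal(7.0, 3.0, 10)

# One-way ANOVA across the three groups.
f_stat, p_anova = stats.f_oneway(control, dose200, dose400)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Tukey HSD post hoc test on the pooled observations.
values = np.concatenate([control, dose200, dose400])
groups = ["control"] * 10 + ["200 mg/kg"] * 10 + ["400 mg/kg"] * 10
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```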
Conclusions
This work was performed to determine the antioxidant compounds in the fruit of U. bojeri, consumed by the local people living around the Tapia forests, and their contribution to the management of metabolic diseases, including inflammatory and related diseases and diabetes. Overall, 14 compounds belonging to the phenolic, flavonoid, vitamin, and organic acid classes were quantified in the fruit of U. bojeri, contributing to its antioxidant, analgesic, anti-inflammatory, and antihyperglycemic activities. The tested product is a crude methanolic extract (MEUB) of the U. bojeri fruit, which is a mixture of several secondary metabolites, some of which could contribute to the pharmacological activity. In the case of a mixture of active compounds, synergy of action can be envisaged, but each molecule has its own pharmacokinetic profile; it is therefore not entirely obvious to obtain a dose-dependent response as in the case of an isolated bioactive molecule. This study contributed to the valorization of this species as a health promoter, mostly as complementary medicine in the treatment of diabetes and inflammatory and related diseases, but further studies, including the isolation of the bioactive compounds, should be performed to confirm the properties of this plant. It is very important to investigate the local use of biodiversity to identify new bioactive compounds for supporting biodiversity conservation and sustainable development projects in Madagascar.
|
v3-fos-license
|
2023-10-26T15:19:04.674Z
|
2023-10-24T00:00:00.000
|
264483183
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2072-6643/15/21/4505/pdf?version=1698148345",
"pdf_hash": "1555d50f673d4ca6a0e0df89b606e0c0d75d27f9",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45914",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "935514f141d4ca79f59d17932338f0c68a355944",
"year": 2023
}
|
pes2o/s2orc
|
Association between Mineral Intake and Cognition Evaluated by Montreal Cognitive Assessment (MoCA): A Cross-Sectional Study
Background: Mineral intake may protect against cognitive impairment (CI) and all-cause dementia, which affect a large number of adults worldwide. The aim of this study was to investigate the association between mineral intake and the Montreal Cognitive Assessment (MoCA), a sensitive and specific test. Methods: In total, 201 adults were included in a cross-sectional study. They completed a three-day dietary record to estimate their average daily intake of minerals. Contributions to dietary reference intakes (DRIs) were also calculated. The participants were divided into tertiles according to their mineral intake. CI classifications were determined via the MoCA (score < 26). Apolipoprotein E (APOE) genotyping was carried out, and the participants' anthropometric measurements and physical activity, health and personal data were collected. Results: The prevalence of CI in this selective sample was 54.2% (34.3% females and 19.9% males). In women, being in the third tertile of iron and manganese intake was associated with lower odds of having CI (OR [95% CI]: 0.32 [0.11-0.93]; 0.33 [0.12-0.93], p < 0.05). No significant differences were observed for any of the nutrients studied in men. Conclusions: These findings suggest that a low mineral intake, especially low iron and manganese intake in women, is associated with worse cognition as assessed by the MoCA.
Introduction
With the increase in the life expectancy of the population in recent decades, there have been significant increases in the incidences of chronic diseases such as cancer and cardiovascular and neurodegenerative diseases [1][2][3]. Specifically, within the latter group, dementia affects about 50 million people worldwide, with about 10 million new cases registered each year [4], making it a public health priority. This syndrome also interferes with occupational, domestic and social functioning [5]. It often begins with mild cognitive impairment (MCI), with which memory loss can appear at an early stage and may trigger the development of Alzheimer's disease (AD), the most common type of dementia [6].

MCI is characterized by cognitive decline observed subjectively and supported by objective measures compared to a prior level of functioning, representing the preclinical phase in which healthy aging could lead to dementia. Although there are no approved pharmacological treatments for MCI, progression may be slowed or delayed by focusing on reversible causes (such as hypertension, hyperlipidemia, atrial fibrillation and diabetes mellitus) and changing lifestyle (including diet, exercise, tobacco use and cognitive stimulation) [7]. The prevalence of MCI in individuals over the age of 65 has been found to range from 10% to 15% [8]. The annual rate of progression to dementia in individuals with MCI is 5-10%, a significantly elevated figure when contrasted with the 1-2% annual incidence rate observed in the general population [7]. Moreover, it has been described that approximately 50% of cases will lead to dementia within 5 years [9]. The MCI stage could be an opportunity to apply strategies to delay the progression to dementia [7]. Although some drugs are dispensed to reduce or control some symptoms, there is no pharmacological treatment for slowing or delaying cognitive decline [10].

For this reason, screening for MCI is essential in order to identify modifiable risk factors in the population, in an attempt to improve cognitive function and delay progression to dementia [4]. However, these possible causative factors are often overlooked and underestimated [7].

Some non-modifiable factors, such as age, sex and genetics, significantly increase the risk of dementia, especially AD. Women are at higher risk than men [11][12][13], as are carriers of the ε4 allele of the apolipoprotein E gene (APOE ε4) [14].

However, there are many factors that are modifiable, most of them related to lifestyle, such as diet, physical activity, smoking, sleep, obesity, diabetes mellitus, hypertension and hyperhomocysteinemia, and others such as depression, employment status and education level [15]. Therefore, carrying out early, lifestyle-focused interventions in people with MCI could help to prevent the more severe stages of the pathology [16].

Regarding diet, some studies have indicated that it could play an essential role in the prevention and/or delay of dementia [10,17,18]. In this regard, some research has observed associations with dietary patterns such as the Mediterranean diet, the Dietary Approach to Stop Hypertension (DASH) or the Mediterranean-DASH Diet Intervention for Neurodegenerative Delay (MIND) [19][20][21][22]. Regarding nutrients, several investigations have observed relationships between B vitamins, vitamin C, folates, omega-3 fatty acids and cognitive function [22][23][24][25][26]. Recently, attention has also been paid to the associations between some minerals and cognitive function [27][28][29][30][31], although there is not much information on this topic. Some studies suggest that certain minerals such as iron, magnesium, copper, zinc, selenium and manganese could be involved in mechanisms of action related to cognitive function [32,33]. It has been described that these micronutrients play a role in DNA repair, oxidative damage prevention and the correct DNA methylation process, among other mechanisms [34]. Thus, they appear to be important in the regulation of cell function and neuromodulation and could play a crucial role in antioxidant protection [35]. Furthermore, their antioxidative properties have the potential to mitigate damage induced by free radicals, thereby preventing or retarding the cognitive decline attributed to the neurotoxic effects of oxidative stress [36].
Therefore, in an attempt to consolidate the scientific evidence, and taking into account that there is hardly any research using the Montreal Cognitive Assessment (MoCA) test [37,38] to measure cognitive function and study its association with diet, the aim of the present study was to assess the relationship between the intake of minerals with described neuroprotective actions and cognitive function in adults, using the MoCA test to categorize participants as either having or not having cognitive impairment.
Participants
This research is part of the project entitled "Cognitive and neurophysiological characteristics of people at high risk for the development of dementia: a multidimensional approach" (COGDEM). This is an observational, cross-sectional study whose objective is to study the physiological characteristics of healthy and pathological aging, with special interest in recruiting individuals at an increased risk of developing AD [39]. The COGDEM cohort consisted of 262 individuals recruited through different channels: day centers for the elderly, some professional associations (i.e., telecommunications engineers) and the neurology consultation of the Hospital Clínico San Carlos. Although the COGDEM study is not a case-control study, efforts were made to recruit people at particular risk of developing AD. Therefore, healthy individuals with a family history of AD were encouraged to participate, since they have higher odds of inheriting genes related to cognitive impairment (such as the APOE ε4+ gene), as has been described previously [1]. A team of expert neuropsychologists ensured that the individuals willing to participate met the study's selection criteria, which have been detailed previously [39,40]. The main inclusion criteria were MMSE ≥ 24, a modified Hachinski score ≤ 4 [41] and a Geriatric Depression Scale Short-Form score ≤ 5 [42]. The main exclusion criteria were: previous history of neurological or psychiatric disorder; medical conditions with a high risk of associated cognitive symptoms; severe head injury with loss of consciousness within 5 years; any illness indicating a life expectancy of less than 2 years; alcoholism; chronic use of anxiolytics, neuroleptics, narcotics, anticonvulsants or sedative hypnotics; and infection, infarction, focal lesions or significant hippocampal atrophy on magnetic resonance imaging (MRI).

From this cohort, the following subjects were excluded for the present work: individuals who had not completed the MoCA test (n = 45) and individuals who had not completed the dietary study (n = 16) (Figure 1). Finally, a sample of 201 subjects was included.
All the selected participants signed the informed consent form in order to participate. This research followed the criteria of the Declaration of Helsinki and was approved by the Ethics Committee of the Hospital Clínico San Carlos with the internal code 15/382-E_BS.
The participants underwent a study that examined their health, socio-demographic variables, diet, anthropometric measurements, physical activity, neuropsychological profile and genotype. The study was managed by qualified research staff.
Health and Socio-Demographic Data
Using a questionnaire prepared specifically for this study, the participants were asked about the following factors: (1) their employment status, classifying them as employed, unemployed or retired; (2) their level of education, classifying the population into three groups (whether they received a primary education or lower, a secondary education or a university education); and (3) their use of drugs for hypertension, depression or type 2 diabetes.
Food Record Data
Food and beverage consumption data were collected from a three-day food and beverage consumption record [43], in which participants were required to note all foods and beverages consumed during three non-consecutive days, two during the week and one on the weekend. The dietary data were processed using the nutritional analysis software DIAL [44], which uses data from the Spanish food composition tables [45]. For the present study, data for the following nutrients were analyzed: energy (kcal/day), iron (mg/day), magnesium (mg/day), copper (µg/day), selenium (µg/day), zinc (mg/day) and manganese (mg/day). The studied minerals were adjusted for energy intake via the Willett residual model [46].
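For illustration, a minimal sketch of the Willett residual adjustment on hypothetical dietary-record data (the arrays and values below are stand-ins, not study data): each nutrient is regressed on total energy intake, and the participant's residual is added to the nutrient intake predicted at the sample-mean energy intake.

```python
import numpy as np

# Hypothetical per-participant intakes from a three-day record.
energy = np.array([1800.0, 2100.0, 2400.0, 1650.0, 2250.0])  # kcal/day
iron = np.array([12.5, 14.0, 16.2, 11.1, 15.0])              # mg/day

# Simple linear regression of nutrient on energy (Willett residual method).
slope, intercept = np.polyfit(energy, iron, deg=1)
residuals = iron - (intercept + slope * energy)

# Energy-adjusted intake: residual plus the expected intake at mean energy.
iron_adjusted = residuals + (intercept + slope * energy.mean())
print(np.round(iron_adjusted, 2))
```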
Then, the mineral contributions were calculated using the dietary reference intakes (DRIs) established by the Institute of Medicine (IoM), which provided the estimated average requirement (EAR) for all the nutrients studied except for manganese, for which the adequate intake (AI) was used [47][48][49].
Anthropometric Data
Data on the weight, height and waist, hip and calf circumferences of the participants were used for the present study. The anthropometric data were collected according to ISAK guidelines [50], with the subjects standing barefoot and unclothed in a relaxed position.

Each participant's weight (kg) was measured using a Tanita Body Fat Monitor Scale, White Backlit LCD Display model UM-017 (range: 0.1-150 kg; precision: 100 g), an electronic digital scale, and the height (cm) of each participant was obtained using a Harpenden digital stadiometer (range: 70-205 cm; precision: 1 mm). With these two measurements, the subjects' BMI values were calculated using the following formula: weight (kg)/height² (m²) [51].

The waist and hip circumferences were determined to evaluate possible cardiovascular risk [52], and the calf circumference was determined to establish the presence of sarcopenia [53]. All circumferences were measured using a HOLTAIN steel tape measure (range: 0-150 cm; accuracy: 1 mm).
Physical Activity
Physical activity data were recorded via ActiGraph wGT3X-BT accelerometers (Pensacola, FL, USA). The participants wore the accelerometer on the right hip for 7 days, and data were retained from those who recorded more than 10 h per day on at least 4 days of the week, with a minimum of one of those days falling on the weekend [54][55][56][57]. ActiLife software (6.13.3) (LLC, Pensacola, FL, USA) was used to collect the participants' physical activity data. To classify the intensity of physical activity, the following criteria were applied: sedentary time (<100 counts/min); light activity (100-1951 counts/min); and moderate to vigorous physical activity (MVPA) (≥1952 counts/min) [58].
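A minimal sketch of the count-based intensity classification quoted above; the minute-by-minute counts are hypothetical:

```python
import numpy as np

# Intensity classification using the cut-points stated in the text:
# sedentary (<100 counts/min), light (100-1951), MVPA (>=1952).
counts_per_min = np.array([40, 350, 2500, 90, 1200, 3100])  # hypothetical trace

def classify(c: int) -> str:
    if c < 100:
        return "sedentary"
    elif c <= 1951:
        return "light"
    else:
        return "MVPA"  # moderate-to-vigorous physical activity

print([classify(int(c)) for c in counts_per_min])
# ['sedentary', 'light', 'MVPA', 'sedentary', 'light', 'MVPA']
```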
APOE Genotyping
Blood samples of 10 mL were extracted in ethylenediaminetetraacetic acid (EDTA) tubes to obtain the genomic DNA. To carry out APOE genotyping (i.e., of the rs7412 and rs429358 polymorphisms), TaqMan assays were conducted on an Applied Biosystems 7500 Fast Real Time PCR machine. As a result, the participants were classified as carriers (APOE ε4+) or non-carriers (APOE ε4−) of the ε4 allele of the APOE gene.
Neuropsychological Test 2.7.1. Geriatric Depression Scale (GDS)
The 15-item GDS [42] was used for the study of depression. Each question is answered dichotomously (yes/no), scored as 1 or 0, respectively, with a maximum score of 15 points. Scores above 5 probably indicate depressive symptomatology.
Mini-Mental State Examination (MMSE)
Mental status was also evaluated by means of the MMSE test, which is composed of different areas: spatial and temporal orientation, immediate memory, attention and calculation and delayed memory and language [59].
Montreal Cognitive Assessment (MoCA)
The MoCA is a cognitive screening tool to assist in the detection of mild cognitive impairment (MCI) [37], and it has been validated for the Spanish population [60]. The test assesses different abilities such as attention, concentration, memory, language and executive functioning.

The test is structured as follows: (1) visuo-spatial abilities are assessed by having the subject draw a clock (3 points) and copy a three-dimensional cube (1 point); (2) executive function is assessed by having the subject complete an alternation task adapted from the Trail Making B task (1 point), a phonemic fluency task (1 point) and a two-item verbal abstraction task (2 points); (3) attention, concentration and working memory are assessed by having the subject complete a sustained attention task (target detection by tapping; 1 point), a serial subtraction task (3 points) and forward and backward digit spans (1 point each); (4) short-term recall is assessed by having the subject complete two trials involving learning nouns and their delayed recall after five minutes (5 points); (5) language is assessed by having the subject complete a three-item naming task with animals (3 points), repeat two syntactically complex sentences (2 points) and complete the fluency task mentioned above; and (6) the subject's orientation in time and place is assessed (6 points).

The maximum achievable score is 30, and scores below 26 suggest MCI [37]. In our study, since we do not have a clinical diagnosis of MCI, we have used this cut-off point to assess CI.
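For illustration, the scoring logic just described can be summarized as follows (domain point maxima follow the breakdown above; the example scores are hypothetical):

```python
# Sketch of the MoCA scoring logic: domain scores sum to a maximum of 30,
# and a total below 26 is treated as cognitive impairment (CI) in this study.

MAX_POINTS = {
    "visuospatial": 4,    # clock drawing (3) + cube copy (1)
    "executive": 4,       # trail-making (1) + phonemic fluency (1) + abstraction (2)
    "attention": 6,       # tapping (1) + serial subtraction (3) + digit spans (2)
    "delayed_recall": 5,  # five nouns recalled after five minutes
    "language": 5,        # naming (3) + sentence repetition (2)
    "orientation": 6,     # time and place
}
assert sum(MAX_POINTS.values()) == 30

def classify_ci(domain_scores: dict, cutoff: int = 26) -> bool:
    """Return True when the MoCA total falls below the CI cutoff."""
    if any(domain_scores[d] > MAX_POINTS[d] for d in domain_scores):
        raise ValueError("domain score exceeds its maximum")
    return sum(domain_scores.values()) < cutoff

example = {"visuospatial": 3, "executive": 3, "attention": 5,
           "delayed_recall": 3, "language": 5, "orientation": 6}
print(sum(example.values()), classify_ci(example))  # 25 True -> CI group
```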
Statistical Analysis
Data for continuous variables are expressed as means and standard deviations. Percentages were also calculated for the qualitative variables studied. Dietary and anthropometric data were compared according to sex and according to the MoCA score (CI group: MoCA score < 26; non-CI group: MoCA score ≥ 26).

For the comparison of means, the Mann-Whitney U test and Student's t-test were used for variables following a non-normal or normal distribution, respectively. A two-way ANOVA test was used to determine the relationships between energy and mineral intake (quantitative dependent variables) and the MoCA score and sex (qualitative independent variables). A Z-test was used to determine the differences between proportions. Spearman's correlation was applied, since the scores obtained via the MoCA test did not follow a normal distribution.

Tertiles of consumption of each mineral were calculated. The association between the tertile of intake of each mineral (independent variable) and the MoCA score (dependent variable) was analyzed via logistic regression to calculate odds ratios (ORs) with 95% confidence intervals. Crude ORs were calculated (model 1), additionally corrected for age and BMI (model 2) and further corrected for educational level, employment status, drug intake, physical activity, family history of Alzheimer's disease, APOE genotype and depression (model 3). The statistical significance level was set at p < 0.05.
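For illustration, a minimal sketch of model 2 (tertiles of intake, adjusted for age and BMI) using statsmodels on simulated data; all column names and values are hypothetical, and the study's analysis was performed in SPSS:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated stand-in for the study data: energy-adjusted iron intake, age,
# BMI and a binary CI indicator (1 = MoCA < 26).
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "iron": rng.normal(14.5, 2.0, n),   # mg/day (hypothetical)
    "age": rng.normal(60, 8, n),
    "bmi": rng.normal(26, 4, n),
})
df["ci"] = (rng.random(n) < 0.5).astype(int)

# Tertiles of intake, with the first tertile (T1) as the reference category.
df["tertile"] = pd.qcut(df["iron"], 3, labels=["T1", "T2", "T3"])
X = pd.get_dummies(df["tertile"], drop_first=True).astype(float)
X[["age", "bmi"]] = df[["age", "bmi"]]
X = sm.add_constant(X)

# Logistic regression; exponentiated coefficients are the odds ratios.
fit = sm.Logit(df["ci"], X).fit(disp=0)
odds_ratios = np.exp(fit.params)
conf_int = np.exp(fit.conf_int())  # 95% CI on the OR scale
print(pd.concat([odds_ratios.rename("OR"), conf_int], axis=1))
```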
Results
A total of 201 subjects were included (63.2% female), with a mean age of 59.8 ± 7.9 years (range: 41 to 81 years). Of the total sample, 54.2% (34.3% females and 19.9% males) presented scores lower than 26 points on the MoCA test.

Table 1 shows the general characteristics of the sample of the present study: personal and health data, anthropometric data, physical activity, APOE genotyping and the scores obtained from the neuropsychological tests. In general, with respect to health data (Table 1), for the total sample it can be observed that the percentage of subjects with a university education was higher in the non-CI group than in the CI group. In the latter group, a higher percentage of people with either a primary level of education or no education was found. This difference was also observed when sex was taken into account.
For anthropometry, physical activity, family history of Alzheimer's disease, APOE genotype and the scores obtained from the depression assessment tests via the GDS, no significant differences were observed according to the MoCA score.
No significant differences were observed in the MMSE according to MoCA score, nor for the total sample according to sex.
The data regarding energy and mineral intake according to the CI and sex are shown in Table 2. Significant differences were found in the contributions to the DRIs of iron and manganese for the total sample and for women, having been found to be lower in the CI group than the non-CI group (Table 2).
In women, positive correlations were also observed between the MoCA score and copper intake (Rho: 0.259, p = 0.003), manganese intake (Rho: 0.178, p = 0. 045), iron contribution (Rho: 0.218 p = 0.013), magnesium contribution (Rho: 0.177 p = 0.047), copper contribution (Rho: 0.331 p = 0.000) and manganese contribution (Rho: 0.310 p = 0.000).No significant differences were observed for any of the nutrients studied in men.When analyzing the MoCA score according to the tertiles of mineral intake (Table 3), it was observed that those in the total sample with intakes of magnesium, copper and manganese in the first tertile obtained lower MoCA scores compared to those in higher tertiles.
In women, those with intakes of iron, magnesium, copper and manganese in the first tertile had lower MoCA scores than those in higher tertiles. However, no significant differences were observed in men.
In general, the results show that all persons in T1 for the total sample for any of the nutrients evaluated, as well as for women and men separately, had mean MoCA scores below 26 points.
Tables 4 and 5 show the association between mineral intake and the presence of CI, as measured by the MoCA, in women and men. The analysis was corrected for different covariates that influence the development of CI (age, BMI, employment status, educational level, drug intake, physical activity, family history of Alzheimer's disease, genetics and depression). Women with an iron intake within the third tertile (>15.37 mg/day) were less likely to have CI than those with an iron intake within the first tertile (<13.47 mg/day). The same was true for manganese: women in the third tertile (>3.10 mg/day) were less likely to have CI than those in the first tertile (<1.82 mg/day).
Discussion
The present study investigated the association between the intake of minerals with neuroprotective actions and cognitive function, measured via a highly sensitive test for the screening of CI, the MoCA test, in a cohort of Spanish adults. People with CI had lower contributions to the DRIs of iron and manganese, especially women. People in the lowest tertile of magnesium, copper and manganese intake achieved lower scores on the cognitive test, and being in the highest tertile of iron and manganese intake was associated with a higher MoCA score in women.
Although AD has been hypothesized to be impacted by copper deficiency [61], studies that have examined the association between copper intake and cognitive function show conflicting results. Some studies observed that, compared to those with lower intakes, adults with higher copper intakes have a decreased risk of low cognitive test scores [15] or a slower progression of cognitive decline [62]. However, other studies suggest that a higher copper intake may be associated with worse cognitive ability [63], especially when combined with a high intake of saturated fat and trans fats [64]. On the other hand, research showing an association between elevated serum copper levels and lower cognitive function has been questioned, as the elevated levels may actually be indicators of copper intoxication and do not adequately reflect mineral status [61].
In the NHANES study, a positive relationship between copper and iron intake and cognitive ability was observed [15]. However, in a systematic review conducted by Loef and Walach, the analysis of the included clinical trials found no relationship between cognitive ability and copper or iron intake [65].
Our study found that in women, iron intakes in the third tertile (>15.37 mg/day) were associated with higher MoCA scores and a lower likelihood of CI. These results are similar to those found in the PATH Through Life Project study [66], which found that females who consumed more iron had a lower risk of MCI. However, contrary to our study, the PATH Through Life Project observed that in men, a high iron intake was associated with a higher risk of MCI. The authors suggested that these differences may be due to physiological differences or to the overall diet, physical activity or general health [66]. Physiological differences may explain the results found in our study, since when diet, physical fitness and general health were analyzed according to sex, no differences were observed between the groups. In addition, Vercambre et al. [67] did not find an association between iron intake and cognitive decline or functional impairment assessed by instrumental activities of daily living (IADLs), but the mean iron intake in that population was below the mean for women in this study.
Iron is an important nutrient for brain metabolism. Lower levels of this mineral can lead to impaired neurotransmitter regulation, reduced myelin production, changes in synaptogenesis and decreased function of the basal ganglia [68]. Moreover, this deficiency could cause anemia, which is quite common in the elderly and is related to decreases in physical, functional and cognitive capacity [69]. Nevertheless, high blood iron levels could be related to pro-oxidant effects [70], so it is important to correctly monitor both parameters. More studies would be necessary to further elucidate the relationship between iron and cognitive function.
In a systematic review that analyzed the roles of copper and iron in cognitive ability, it was shown that excessive intakes of iron and copper, combined with a diet high in saturated fatty acids, may have adverse effects in people at risk of AD [65].
In this regard, Morris et al. [64] found a higher incidence of cognitive decline in individuals with increased intakes of copper and higher intakes of saturated and trans fats, but not in those with higher intakes of copper and lower intakes of saturated and trans fats. It has been stated that this could be because these types of fats increase blood cholesterol levels, which could favor the formation and progression of beta-amyloid plaques in the brain; moreover, an excess of copper could promote the oxidation of fat-originating compounds that could be neurotoxic, therefore contributing to the accumulation of beta-amyloid plaques [71]. However, in our study, when analyzing the probability of presenting CI according to saturated and trans fat intakes, no associations were found.
Manganese is an essential micronutrient that is necessary for different functions as a coenzyme in numerous biological processes involved in the maintenance of cognitive function, including energy metabolism, antioxidant systems, brain ammonia clearance and the synthesis of neurotransmitters [72]. However, in patients using total parenteral nutrition with high levels of this mineral, cognitive problems have been described, causing manganese levels to be reduced in nutritional formulas. Nevertheless, when the intake of this mineral from food is high, plasma manganese levels are autoregulated by increased metabolism for pancreatic and biliary excretion [73][74][75].
In our study, higher manganese intake was associated with lower odds of having CI. In a cross-sectional study conducted with 6863 participants, after adjusting for several variables, no association was found between manganese intake and cognitive capacity, which was explained by the fact that most participants' intakes were below the DRIs [63]. An adequate dietary intake of this mineral may thus contribute to successful aging; it may not only help sustain a healthy body composition and fitness but possibly also prevent age-related disorders such as depression, poor cognition, cardiovascular disease and diabetes mellitus [76].
This mineral plays an important role in brain development and adequate cognitive function [77]. Although manganese can be found in the body in reduced (Mn²⁺) and oxidized (Mn³⁺) states, usually only a small amount of manganese is found in the oxidized form. In the literature, it has been described that when Mn³⁺ accumulates, it can trigger neurotoxic processes and may affect cognitive function [78,79]. In our study, although blood data for this mineral were not available, it was found that women had higher intakes of other antioxidant nutrients than men, which could explain the differences found according to sex.
Regarding magnesium, this mineral is essential for neuronal transmission and plays a key role in the major excitatory and inhibitory neurotransmission pathways [80]. Thus, Ozawa et al. demonstrated that the dietary intake of magnesium was associated with a lower risk of all-cause dementia [81], and other studies have found a positive association between cognitive function and the intake of this mineral; it has been proposed that magnesium has neuroprotective effects, such as the ability to increase cerebral blood flow [82]. In our study, in women, magnesium intakes between 296.19 and 356.62 mg/day were associated with better cognition (i.e., lower odds of having CI). However, Vercambre et al. [67] did not find an association between magnesium intake and recent cognitive decline or IADL in the E3N cohort, which included 98,995 French women.
Regarding the other minerals explored in this study, such as zinc and selenium, the results published to date are mixed. Selenium is involved in central nervous system functions, such as memory and cognitive capacity, and its deficiency has been associated with an increased risk of cognitive decline, impairment of the immune system and mortality [83][84][85].
However, in our study, no significant differences were found between the levels of intake of these minerals and cognitive impairment, which coincides with the results of Lo et al. [29], Wang et al. [86] and Bojar et al. [82]. As for zinc, inverse associations have been found between its intake and cognitive function [87,88], and it has been indicated that zinc is a nutrient involved in decreasing beta-amyloid adhesiveness and in the synthesis of amyloid precursor protein [89].
Limitations and Strengths
Several limitations of our study are noteworthy. First, as a cross-sectional study, it was not possible to infer a causal association between mineral intake and cognition. Second, the participants were recruited from hospital neurology practices and professional associations, presenting a risk of participation bias. In addition, having selected part of the sample through a hospital may imply a higher frequency of risk factors such as diabetes, hypertension and depression in the participants, although the data were corrected for these variables. Third, the pathologies (with the exception of depression) present in the participants were self-reported, which could introduce a reporting bias. Fourth, the lack of fasting blood data did not allow for a biomarker study. Fifth, participants were recruited only in the Community of Madrid, and it was a selective sample since the participants were at high risk of developing AD, so the results may not be generalizable to other populations. Sixth, this study used the MoCA cut-off score (<26) to assess CI, which is the same cut-off used for MCI. Nevertheless, as a clinical diagnosis of MCI was not carried out in the present study, it is not possible to establish a true diagnosis of MCI. Therefore, the data should be interpreted with caution.
Despite these limitations, the current study also has its strengths. First, the MoCA was used to assess CI. According to several authors, this test has higher levels of sensitivity and specificity than the MMSE in detecting MCI (83-75% vs. 71-74%, respectively) [38,90]. In fact, in our study, which included people with an MMSE score ≥ 24 (meaning that they did not present cognitive impairment), when classified according to the MoCA, it was found that 54.2% (34.3% of women and 19.9% of men) of the participants presented CI. Other studies also showed that the MoCA could be superior to the MMSE in discriminating between individuals with and without MCI [90]. The MMSE and the MoCA test are the most commonly used screening methods in clinical and research fields. Despite this, the MoCA test has shown differences in cognitive profile even in those individuals who were in the normal range on the MMSE. The MoCA test would therefore appear to be a useful brief tool to screen for MCI, particularly where the ceiling effect of the MMSE may be a problem [91].
The second advantage is the use of a 3-day food and beverage consumption record to obtain the average nutrient intake in the population. The main advantage was the collection of accurate quantitative information on individual intake during the registration period, which provided a high level of specificity for different meals. Moreover, the questionnaire did not rely on the respondent's memory, since information was recorded at the time of consumption. This aspect is very important in populations with cognitive decline [92,93].
The last strength of this study is that it took into account several covariables that could influence CI, such as age, BMI, employment status, educational level, drug consumption, physical activity, family history of Alzheimer's disease, APOE genotype and the presence of depressive symptomatology. Among these factors, we would like to highlight that 59.2% of our sample had undertaken university studies; in fact, this aspect influenced the choice of a cut-off MoCA score < 26 for assessing possible MCI, since some authors [94][95][96] indicate that educational level should be taken into account when establishing the cut-off point. Other authors use lower cut-off points, which would lead to an underestimation of the prevalence of MCI. Another cofactor that we took into account, and that other authors do not include, is the APOE genotype [97,98]; our study allows us to evaluate its influence on the obtained results since it was considered in the logistic regression models, although we did not observe significant differences depending on whether an individual was a carrier of the ε4 risk allele. Finally, among the covariates, we would like to highlight that, to our knowledge, no study has included the presence of depressive symptomatology, a key factor in the development of MCI [99][100][101].
Conclusions
The intake of minerals with neuroprotective actions, such as iron and manganese, could play an important protective role against CI, especially in women, since the higher the intake of these minerals, the lower the odds of having CI. The lack of association in the male population could be due to physiological differences and to the fact that men generally contributed less to the DRIs of the minerals studied.
Therefore, this study highlights the importance of studying mineral intake in the general population and in groups at greater risk, in order to avoid low intake levels that could be associated with worse cognitive capacity as assessed by the MoCA.
Intervention and follow-up studies monitoring dietary intake and nutritional status (including biochemical parameters) are needed to confirm the possible protective effect of iron and manganese intake on cognitive impairment and to take a deeper look at the differences found in these associations between mineral intake and cognitive function according to sex.

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Figure 1.
Flow chart of the selection process.
Author Contributions: Conceptualization, project administration and funding acquisition, F.M.-U. and A.M.L.-S.; formal analysis, A.M.L.-M.; methodology and investigation, A.A., L.M.B., L.G.G.-R., A.M.L.-M., E.C.-S., Á.P.-S., A.B., M.L.D.-L. and I.C.R.-R.; data curation, A.M.L.-M. and M.D.S.-G.; writing-original draft preparation, A.M.L.-M.; writing-review and editing, all authors; visualization, A.M.L.-S. and L.G.G.-R.; supervision, A.M.L.-S. All authors have read and agreed to the published version of the manuscript.

Funding: This research was funded by the Spanish Ministry of Economy and Competitiveness under the grant PSI 2015-68793-C3-1-R, by the Complutense University of Madrid (UCM) through project GR 3/14, by the Complutense University Research Group VALORNUT-920030 through FEI 16/127, by a predoctoral contract financed by the Complutense University of Madrid and Banco Santander (CT63/19-CT64/19) and by a postdoctoral Ministerio de Universidades-Complutense University of Madrid Margarita Salas fellowship, funded by European Union-Next Generation.

Institutional Review Board Statement: This research followed the criteria of the Declaration of Helsinki and was approved by the Ethics Committee of the Hospital Clínico San Carlos with the internal code 15/382-E_BS.
Table 1.
General characteristics of the sample according to the MoCA score and sex.
Categorical variables were analyzed with the χ² test and a Z-test of proportions. * p < 0.05 with respect to non-CI.
Table 2.
Energy and mineral intake according to MoCA score and sex.
MoCA-Montreal Cognitive Assessment; Non-CI-no cognitive impairment; CI-cognitive impairment; EAR-estimated average requirement; AI-adequate intake. Two-way ANOVA analysis: S-differences according to sex. Significant differences were determined via the Mann-Whitney U test or Student's t-test, as appropriate. Nutrients were adjusted for energy intake via Willett's method of residuals. * p < 0.05.
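The energy adjustment cited in this footnote is Willett's residual method; a minimal sketch, assuming a simple linear nutrient-energy relationship and illustrative variable names, could look as follows.

```python
import numpy as np

def energy_adjust(nutrient, energy):
    """Willett's residual method: regress nutrient intake on total energy
    intake, then add each participant's residual to the intake predicted
    at the sample's mean energy intake."""
    slope, intercept = np.polyfit(energy, nutrient, 1)
    predicted = intercept + slope * np.asarray(energy)
    expected_at_mean = intercept + slope * np.mean(energy)
    return np.asarray(nutrient) - predicted + expected_at_mean
```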
Table 3.
MoCA score according to mineral intake tertiles and sex.
MoCA-Montreal Cognitive Assessment; Data are shown as means ± standard deviations. ‡ Minerals not covered by DRIs (EAR of magnesium: 350 mg/day in men; EAR of zinc: 9.4 mg/day in men; AI of manganese 2.3 mg/day in men and 1.8 mg/day in women). For the comparison of means, the Kruskal-Wallis test was used because the distribution of all variables was not normal, and a two-way ANOVA analysis was used for the following: I-interaction between sex and mineral intake; T-differences by tertile of mineral intake. Significant pairwise differences are indicated by letters and bold type (a-differences from T1; b-differences from T2, p < 0.05).
Table 4.
Association between mineral intake and MoCA in a female population. Logistic regression analysis.
Model 1-crude; Model 2-adjusted for age and BMI; Model 3-additional adjustment for educational level, employment status, use of drugs, physical activity, family history of Alzheimer's disease, genotype and depression.
Table 5.
Association between antioxidant mineral intake and MoCA in a male population. Logistic regression analysis.
Model 1-crude; Model 2-adjusted for age and BMI; Model 3-additional adjustment for educational level, employment status, drugs, physical activity, family history of Alzheimer's disease, genotype and depression.
EEG Connectivity during Active Emotional Musical Performance
The neural correlates of intentional emotion transfer by the music performer are not well investigated as the present-day research mainly focuses on the assessment of emotions evoked by music. In this study, we aim to determine whether EEG connectivity patterns can reflect differences in information exchange during emotional playing. The EEG data were recorded while subjects performed a simple piano score with contrasting emotional intentions, after which they evaluated the subjectively experienced success of the emotion transfer. The brain connectivity patterns were assessed from the EEG data using the Granger Causality approach. The effective connectivity was analyzed in different frequency bands—delta, theta, alpha, beta, and gamma. The features that (1) were able to discriminate between the neutral baseline and the emotional playing and (2) were shared across conditions, were used for further comparison. The low frequency bands—delta, theta, alpha—showed a limited number of connections (4 to 6) contributing to the discrimination between the emotional playing conditions. In contrast, a dense pattern of connections between regions that was able to discriminate between conditions (30 to 38) was observed in the beta and gamma frequency ranges. The current study demonstrates that EEG-based connectivity in the beta and gamma frequency ranges can effectively reflect the state of the networks involved in the emotional transfer through musical performance, whereas the utility of the low frequency bands (delta, theta, alpha) remains questionable.
Introduction
The proposed mechanisms of emotion induction by music differ in many aspects including the information focus, cultural impact, dependence on musical structure, evaluative conditioning, and others. These are important from the listener's perspective; however, these are also implicated in intended emotion transfer while performing. The present-day research mainly focuses on the assessment of emotions evoked by music. The neural correlates of intentional emotion transfer by the music performer, however, are not well investigated.
McPherson et al. [1], utilizing fMRI, discovered that, during the creative expression of emotions through music, emotion-processing areas of the brain are activated in ways that differ from the perception of emotion in music. However, the electroencephalogram (EEG), being a unique real-time brain activity assessment method, appears to be advantageous in the context of music performance, allowing more ecologically valid settings. Nevertheless, only a few studies have addressed music improvisation or active playing while collecting and interpreting EEG data [2][3][4]. In our recent study, we investigated spectral properties of EEG activity in musicians while they were instructed to transfer a certain emotion through performance of a predefined simple music score. To our knowledge, it was the first attempt to address the intended emotional communication through an artistic medium using the EEG signal. As the emotion to be communicated by musicians does not necessarily reflect their actual felt emotions in the moment [5,6], subjects self-evaluated their own performances based on how well they felt they expressed the intended emotion within it.
The modulation of emotional intent via the means of expressive cues in academic music is often written in the score and executed by the performer. However, in jazz or popular music, these expressive cues are often only tacitly implied, and the performer is given room for interpretation. For example, the performer may often seek to perform a unique or characteristic rendition of a familiar song by changing the tempo, groove, dynamics, or articulation to help communicate their emotional intent [7,8]. The leeway and capacity that performers have to use expressive cues differ between musical genres and performance environments. As a result, the live version of a song may vary greatly from the studio recording depending on the situational context. From a musicology perspective, the modulation of affective intent via expressive cues executed in performance is considered an inextricable aspect of the process of embodied musical communication, which is often overlooked in brain-imaging studies using a musical framework [9,10]. Music created, experienced, and consumed in everyday life often functions as a means of mood modulation and a catalyst for social cohesion, coordination or contextual human behavior. These aspects are difficult to replicate in highly controlled settings where the focus may be on the isolation of a particular response [11,12]. To address these issues, and to capture the neural activity related to the elusive creative process of imbuing music with emotion in performance, our study's approach attended to maintaining a level of ecological validity. The recordings took place in a room at the music academy in Riga, where musicians are familiar with practicing, performing, and recording. Musicians were informed that the audio recordings of their performances would be evaluated later by a listener group. This knowledge helped performers to associate each session of EEG recording with an ordinary music studio recording session they might experience in their everyday practice. We probed the brain activity patterns that are differentially involved in distinct emotional states by employing the experimental contrast of emotional playing vs. neutral playing. Differences in the power of EEG activity were observed between distressed/excited and neutral/depressed/relaxed playing conditions [13].
The integration of different cortical areas is required for both music perception and emotional processing. Several attempts have been made to investigate network connectivity in relation to the emotional aspects of music listening. Previous studies targeting emotion discrimination while listening to music demonstrated that distinct network connectivity and activation patterns of target regions in the brain are present during listening, particularly between the auditory cortex, the reward system, and brain regions active during mind wandering [14]. Existing fMRI-based studies showed that clear variations in connectivity for different music pieces are present. Karmonik et al. [15] reported the largest activation for the processing of self-selected music with emotional attachment or culturally unfamiliar music. Recently, Liu et al. [16] associated emotional ratings of pleasure and arousal with brain activity. In their study, classical music was associated with the highest pleasure rating and deactivation of the corpus callosum, while rock music was associated with the highest arousal rating and deactivation of the cingulate gyrus. Pop music, in contrast, activated the bilateral supplementary motor areas and the superior temporal gyrus with moderate pleasure and arousal. Using EEG, Varotto et al. [17] demonstrated that pleasant music induces an increase in the number of network connections compared with the resting condition, while no changes are caused by unpleasant stimuli. Shahabi and Moghimi [18] reported a positive association between perceived valence and the frontal inter-hemispheric flow, but a negative correlation with parietal bilateral connectivity while listening to music. Recently, Mahmood et al. [19] demonstrated that even a short period of listening to music can significantly change connectivity in the brain.
Importantly, the activation patterns while listening to music may differ in musicians when compared to non-musicians, as demonstrated by Alluri et al. [20]: in their study, musicians automatically engaged action-based neural networks (cerebral and cerebellar sensorimotor regions) while listening to music, whereas non-musicians used perception-based networks to process the incoming auditory stream. However, it is not well known to what extent connectivity differs between states of active emotional performance. With this follow-up study, we aim to determine whether connectivity patterns can reflect differences in information exchange during emotionally imbued playing. We assessed brain connectivity patterns from the EEG data while subjects were performing with emotional intent. We contrasted emotional playing with neutral playing to control for general patterns of motor and sensory activation and expected that the observed connectivity patterns are attributable to the emotion-related aspects of the performance.
Participants
Ten musicians (2 males, 8 females; age 19-40 years) were recruited with the criteria that they were experienced piano players with a minimum of 5 years of academic training. For each participant, EEG recording sessions involving a piano-playing task took place over four sessions scheduled on different days. The Rīga Stradiņš University Research Ethics Committee approved the study (Nr. 6-1/01/59), and all participants provided their written consent.
Experimental Design and Procedure
Participants were provided with a musical score composed by the author (available in Supplementary Materials), designed to be simple enough for trained pianists to learn quickly and make expressive variation upon. The music used an extended pentatonic scale to circumvent Classical Western functional harmony bias and was presented on two pages. The first page was to be performed mechanically, in tempo, neutral in expression. The music on the second page was a repeat of the first page, but freedom was given to the player to alter their manner of play in order to express one of five emotions based on a 2D valence-arousal model of affective space (distressed, excited, depressed, relaxed, neutral). Participants were encouraged to use any and all expressive cues at their disposal (such as tempo, groove, articulation, embellishment) to make a contrast between the neutral first page and the emotion-imbued second page (except when the second page was also neutral). Each page had a duration of 30 s, making the duration of each performance 1 min. The protocol was controlled with the PsychoPy stimulus presentation software [21], which presented instructions for each trial in order and randomized the sequence of the five emotions over the course of each recording session. A total of 200 trials were recorded for each participant. See Figure 1 for a schematic representation of a single experimental trial. All participants were fully briefed on what to expect before their first scheduled recording session. At each first session, participants were given time to familiarise themselves with the recording sequence and emotional descriptors, ensuring their understanding of the piano playing task as well as the self-evaluation step. During each trial, subjects were asked to remain seated at the piano and follow the instructions presented on a laptop screen at eye level.
One of the five emotion descriptors was presented for 20 s. Next, a fixation cross was presented for 15 s while recording the resting state. This was followed by the first page of the music score, which was presented with a 3 s countdown to start playing. The neutral baseline playing instruction was displayed for 30 s alongside the countdown to the start of emotional playing. This was followed by the second page with the score for emotionally expressive playing lasting 30 s. Participants self-evaluated their own performance on a scale from 1-9, on dimensions of valence (from negative to positive) and arousal (from low to high), with 5 representing neutral on both scales (Figure 2A). Participants were reminded to submit their ratings not based on their actual felt emotions, but based on how well they felt their own performance expressed the intended emotion. When recording, five trials were grouped into a single run. Ten runs were recorded at each session, with short rests between each run. A total of fifty trials were recorded at each of the four sessions scheduled per participant. Audio from the performance was recorded alongside the EEG, and participants were made aware that these would be evaluated by listeners in future steps.
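The trial timeline lends itself to a compact script. The sketch below, assuming PsychoPy's standard visual/core/event API, reproduces the timing described above; the window settings, prompt texts, and keyboard-based rating input are illustrative assumptions, not the study's actual script (which also logged ratings and synchronized with the EEG).

```python
import random
from psychopy import core, event, visual

EMOTIONS = ["distressed", "excited", "depressed", "relaxed", "neutral"]

win = visual.Window(color="black")  # window settings are illustrative

def show_text(message, seconds):
    # Draw a text stimulus and hold it on screen for a fixed duration.
    visual.TextStim(win, text=message).draw()
    win.flip()
    core.wait(seconds)

def run_trial(emotion):
    show_text(emotion, 20)                    # emotion descriptor, 20 s
    show_text("+", 15)                        # fixation cross / resting state
    show_text("3... 2... 1...", 3)            # countdown to playing
    show_text("Page 1: play neutrally", 30)   # neutral baseline playing
    show_text(f"Page 2: play {emotion}", 30)  # emotionally expressive playing
    # Self-evaluation: wait for valence/arousal ratings on a 1-9 scale.
    visual.TextStim(win, text="Rate valence and arousal (1-9)").draw()
    win.flip()
    event.waitKeys()

# One run = the five emotions in randomized order (ten runs per session).
for emotion in random.sample(EMOTIONS, len(EMOTIONS)):
    run_trial(emotion)
win.close()
```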
EEG Acquisition
EEG signals were acquired using an Enobio 32 device, with 32 electrodes placed according to the International 10-20 system. Common Mode Sense (CMS) and Driven Right Leg (DRL) connections were applied to the right earlobe for grounding, while signal quality was monitored within the hardware's native signal acquisition software Neuroelectrics Instrument Controller v.2.0.11.1 (NIC). The quality index provided within NIC consists of a real-time evaluation of four parameters, namely line noise, main noise, offset, and drift. Data were recorded at a 500 Hz sampling rate with a notch filter applied at 50 Hz to remove power line noise.
EEG Preprocessing
EEG data were prepared for further analysis using an automated Preprocessing Pipeline script in MATLAB, utilizing several functions from the EEGLAB toolbox [22]. First, the Automated Artifact Rejection function in EEGLAB was applied to the raw EEGs to eliminate the bad portions of the data, and the channels that had lost more than 20% of their data were discarded. The data were filtered using the zero-phase bandpass FIR filter between 0.5 and 45 Hz implemented in EEGLAB, and referenced to the mean of the T7 and T8 channels. Independent Component Analysis (ICA) and the ICLabel plugin in EEGLAB were used to detect and remove embedded artifacts including muscle activity, eye blinks, eye movements, and heart electrical activity. The 30 s of neutral baseline performance and 30 s of each emotional performance (distressed, excited, depressed, relaxed, and neutral) were extracted, resulting in 2000 EEG time series (data from 10 participants across 4 days and 50 piano-playing excerpts per session) for emotional playing (400 segments for each of the emotional instructions, further called observations), and 2000 EEG time series for the corresponding baseline. Emotional playing and baseline time series were treated separately but in the same way. The electrodes were grouped into ten regions of interest (ROI, Figure 2C) and the average of the electrodes within each ROI was obtained. The observations that did not contain information on at least one ROI (due to channel removal in previous steps) were excluded from the analysis. To maintain the homogeneity of the final matrix, the minimum number of observations remaining in each emotional condition was equal to 314. The 30-s time series for each of the 10 ROIs were segmented into 3-s epochs from the 3rd to the 27th s, and the averages of these segmented data were used for further assessments. The dimensions of the final observation matrix were thus equal to 1570 × 10 × 1500 arrays ((314 observations for each of five emotional conditions) × (ROIs) × (3 s × 500 samples per second)). By filtering the separate rows of the observation matrix into the different frequency bands 1-4 Hz (delta), 4-8 Hz (theta), 8-12 Hz (alpha), 12-30 Hz (beta), and 30-45 Hz (gamma), five sub-band observation matrices were obtained for each emotional playing part and the corresponding baseline.
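As a minimal sketch of the epoch-averaging and sub-band filtering steps, the snippet below uses SciPy; note that a zero-phase Butterworth filter stands in here for the EEGLAB FIR filter used in the study, and the array shapes follow the dimensions given above.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 500  # sampling rate, Hz
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (12, 30), "gamma": (30, 45)}

def band_filter(signal, low, high, fs=FS, order=4):
    """Zero-phase band-pass filtering of one ROI time series."""
    b, a = butter(order, [low, high], btype="bandpass", fs=fs)
    return filtfilt(b, a, signal)

def average_epochs(roi_series, fs=FS):
    """Cut a 30-s ROI series into 3-s epochs spanning 3-27 s and average
    them, yielding the 1500-sample rows of the observation matrix."""
    samples = roi_series[3 * fs:27 * fs]              # 24 s -> 12000 samples
    return samples.reshape(-1, 3 * fs).mean(axis=0)   # 8 epochs x 1500
```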
EEG Analysis
We focused on the effective connectivity approach, which provides information on the direction of information flow in the nervous system and illustrates complex interactions between brain regions [23][24][25]. Granger Causality (GC) was utilized as a relatively simple (low hardware demand) method of calculating directed connections, which increases the feasibility of implementing a real-time BCMI machine in future research [26].
Granger Causality
GC is a statistical concept of causality founded on prediction. According to Granger causality, if a signal X "Granger-causes" a signal Y, then past values of X should provide information that helps to predict Y beyond the information contained in the past values of Y alone [27].
GC is formulated as follows: let y(t) and x(t) be stationary time series. First, the proper number p of lagged values y(t − i) to include in the univariate autoregression of y(t) is found according to (1):

y(t) = \sum_{i=1}^{p} a(i)\, y(t-i) + \tilde{\varepsilon}(t)   (1)

Then, the autoregression is recalculated by including the lagged values of x(t) as follows (2):

y(t) = \sum_{i=1}^{p} a(i)\, y(t-i) + \sum_{j=1}^{p} b(j)\, x(t-j) + \varepsilon(t)   (2)

where a(i) and b(j) are regression coefficients, ε(t) is the prediction error of y(t) calculated by considering the effect of the lagged values of x(t), and ε̃(t) is the prediction error of y(t) calculated without using x(t). Therefore, if the variance of ε(t) is smaller than the variance of ε̃(t), the GC value is 1 and x(t) "Granger-causes" y(t); if the variance of ε(t) is not smaller than the variance of ε̃(t), the GC value is 0 and x(t) does not "Granger-cause" y(t).
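To make the computation concrete, the following minimal Python sketch implements the restricted/full autoregressive comparison described above; it illustrates the general GC recipe rather than the exact implementation used in the study, and the model order p and the binary thresholding are assumptions.

```python
import numpy as np

def granger_causality(x, y, p=5):
    """Binary GC from x to y with an assumed model order p.

    Compares the prediction-error variance of the restricted model (1)
    with that of the full model (2) defined in the text.
    """
    n = len(y)
    Y = y[p:]
    # Lagged design matrices: column i holds y(t - i) (resp. x(t - i)).
    lags_y = np.column_stack([y[p - i:n - i] for i in range(1, p + 1)])
    lags_x = np.column_stack([x[p - i:n - i] for i in range(1, p + 1)])

    # Restricted model (1): y regressed on its own past only.
    coef_r, *_ = np.linalg.lstsq(lags_y, Y, rcond=None)
    var_r = np.var(Y - lags_y @ coef_r)

    # Full model (2): y regressed on its own past and on the past of x.
    design = np.hstack([lags_y, lags_x])
    coef_f, *_ = np.linalg.lstsq(design, Y, rcond=None)
    var_f = np.var(Y - design @ coef_f)

    # NB: extra regressors almost always shrink the in-sample variance, so
    # a practical implementation would require a significant reduction
    # (e.g., via an F-test) before declaring that x Granger-causes y.
    return 1 if var_f < var_r else 0
```

Applying this function to every ordered pair of the ten ROI time series yields the 10 × 10 connectivity matrix used in the next subsection.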
Feature Extraction
For all observations, the GC was calculated between each pair of ROIs, and 10 × 10 connectivity matrices were created, where the array (i, j) contains the GC value between channels i and j. The connectivity matrices were further transformed into a 100-element row by placing their 10 × 10 arrays sequentially next to each other, resulting in observation matrices of 1570 × 100 arrays instead of 1570 × 10 × 1500 [23].
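Building on the granger_causality sketch above, the per-observation feature vector could be assembled as follows; the self-connection diagonal is left at zero, which is an assumption since the text does not state how it is treated.

```python
import numpy as np

def connectivity_features(roi_data, p=5):
    """Flattened 100-element GC feature vector for one observation.

    roi_data is a 10 x n_samples array (one row per ROI).
    """
    n_roi = roi_data.shape[0]
    conn = np.zeros((n_roi, n_roi))
    for i in range(n_roi):
        for j in range(n_roi):
            if i != j:
                conn[i, j] = granger_causality(roi_data[i], roi_data[j], p)
    return conn.flatten()  # the 10 x 10 rows placed sequentially
```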
Feature Selection
To reduce the number of features in the emotional and baseline matrices, the following steps were taken. First, since the neutral baseline EEG and emotional playing EEG were expected to differ, a two-sided Student's t-test was applied to the data in each column of the feature matrix belonging to one emotional category and the same column in the baseline matrix. The null hypothesis (both data are drawn from the same distribution) was rejected at p-values < 0.05. Since no significant differences between the 'neutral baseline EEGs' and the 'EEGs recorded during the expression of neutral emotions' were expected, this step was skipped for the 314 observations of neutral playing. The remaining features that (1) were able to discriminate between the neutral baseline and the emotional playing, and (2) were shared across conditions, were used to create new feature matrices.
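A minimal sketch of this filtering step, assuming the feature matrices are NumPy arrays of shape (observations × 100), is given below; the iterable of condition/baseline pairs is a hypothetical placeholder.

```python
import numpy as np
from scipy.stats import ttest_ind

def baseline_discriminative(emotion_feats, baseline_feats, alpha=0.05):
    """Column-wise two-sided t-tests between an emotional-playing feature
    matrix and the corresponding baseline matrix; returns the indices of
    features with p < alpha."""
    _, pvals = ttest_ind(emotion_feats, baseline_feats, axis=0)
    return set(np.where(pvals < alpha)[0])

# Keep only features shared across the four non-neutral conditions
# (the neutral-playing condition skips this step, as described above).
# `pairs` is a hypothetical list of (emotion_feats, baseline_feats) arrays.
shared = None
for emotion_feats, baseline_feats in pairs:
    selected = baseline_discriminative(emotion_feats, baseline_feats)
    shared = selected if shared is None else shared & selected
```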
Furthermore, the one-way MANOVA [28] analysis implemented in MATLAB was performed to identify the features that were able to discriminate between conditions. The components (the linear combination of the features) were created based on the canonical correlation analysis. For each frequency band, four components were retained with p-values < 0.05, and the statistical outcome is presented in Supplementary Materials. The features that contributed to the selected components were chosen as a set of final features.
Graph Quantification
After selecting the features, the matrices of observations were averaged to obtain a single average connectivity matrix per condition, with arrays describing the strength of the established connections (values close to 0 indicating a weak connection and values close to 1 indicating a strong connection). Since the connectivity matrices are similar to graphs containing ROIs as nodes and connections as edges, we applied graph quantification techniques to statistically evaluate the patterns [18,29]. The degrees of nodes, referring to the number of outputs from each node, were calculated for the 1570 connectivity matrices, and, for each ROI (node), one-way ANOVA was performed on the 10 pairs of emotional states (C(5,2) = 10). By considering 10 nodes and 10 pairs, 100 p-values and 100 Cohen's d effect sizes were obtained.
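The out-degree computation and the per-node comparison could be sketched as follows; the pooled form of Cohen's d shown here is a common choice but an assumption, since the paper does not state its exact effect-size formula.

```python
import numpy as np
from scipy.stats import f_oneway

def out_degrees(conn):
    """Out-degree of each node: the number of outgoing connections,
    i.e., the row sums of the binary 10 x 10 GC matrix (diagonal excluded)."""
    m = conn.copy()
    np.fill_diagonal(m, 0)
    return m.sum(axis=1)

def compare_node(deg_a, deg_b):
    """One-way ANOVA plus a simple pooled Cohen's d for the degrees of one
    node under two emotional states (deg_a, deg_b: per-observation arrays)."""
    f_stat, p_val = f_oneway(deg_a, deg_b)
    pooled_sd = np.sqrt((deg_a.var(ddof=1) + deg_b.var(ddof=1)) / 2)
    return f_stat, p_val, (deg_a.mean() - deg_b.mean()) / pooled_sd
```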
Results
As expected, the tasks of emotional and neutral playing differed considerably in terms of arousal and valence levels with respect to the state of the intended emotion transfer. The means and standard deviations of subjective evaluation on the scales of valence and arousal are plotted in Figure 2B.
Differences in connectivity between different emotional instructions were expected to be observed in performers. However, the number of initially extracted features was very high, making the results difficult to comprehend. Thus, further feature reduction was performed, and only the features that were able to discriminate between the neutral baseline and the emotional playing and were shared across conditions were utilized further.
The number of selected features that were able to discriminate between conditions is presented in Table 1. Surprisingly, for the low frequency bands (delta, theta, alpha), a limited number of connections (4 to 6) contributed to the discrimination between the emotional playing conditions. In contrast, a dense pattern of connections between regions that was able to discriminate between conditions was observed in the beta and gamma frequency ranges (30 to 38). Tables containing statistical outputs and visualizations of connectivity for all frequency bands are provided in the Supplementary Materials. The connectivity patterns for the beta and gamma ranges are plotted in Figure 3, where outflow connections are color-coded in the same way as the ROIs, the degrees of nodes are represented by the size of the nodes, and the strength of the connections is reflected by the thickness of the lines.
Figure 3. Connectivity patterns in the beta and gamma ranges across the ten ROIs (fronto-lateral left (1) and right (2), mid-frontal left (3) and right (4), centro-parietal left (5) and right (6), parieto-occipital left (7) and right (8), fronto-central (9) and central parieto-occipital (10)) and the corresponding connections from each ROI. The size of the circle reflects the degrees of nodes, and the thickness of the lines reflects the strength of the connection.

As the strength of the connections identified in the frequency bands can either increase or decrease, which is reflected in the degrees of nodes, the degrees of nodes for all emotional conditions were compared pairwise between emotional instructions. The statistical outcomes of this comparison for the beta and gamma ranges are illustrated in Figure 4.

Connections observed in the beta range were more abundant in the emotional playing conditions in comparison to neutral playing (Figure 3). The low valence conditions (distressed and depressed) were characterized by somewhat reduced node densities in the fronto-central, right centro-parietal, and right and left parieto-occipital regions. This was especially pronounced when the depressed state was contrasted with the relaxed state (Figure 4). The high valence states (excited and relaxed) were characterized by denser right fronto-lateral, fronto-central and parieto-occipital nodes when compared to the neutral condition; this effect was stronger for the relaxed-neutral contrast. Moreover, the right mid-frontal node was somewhat more connected to the left and right parieto-occipital regions, and connections with the right fronto-lateral node were reduced.
Connections in the gamma range, although abundant, did not discriminate easily based on either valence or arousal (Figure 3). The right mid-frontal, left centro-parietal and parieto-occipital nodes showed reduced degrees of nodes in the emotional playing conditions when compared to neutral playing. However, an opposite effect was observed for the left fronto-lateral and right parieto-occipital regions, where the node density was higher during emotional performance. In the low arousal states, the left mid-frontal node was connected to both the right and left parieto-occipital nodes, but this was not seen in the high arousal states. In the high arousal conditions, the left parieto-occipital node was somewhat better connected to the left fronto-lateral and central parieto-occipital nodes. Surprisingly, the right centro-parietal and parieto-occipital nodes showed very limited connectivity to other nodes in the gamma range (Figure 4).
Discussion
The aim of the present study was to examine the organization of brain networks during music performance with the intention to transfer emotion. The EEG responses were collected and the patterns of connectivity were estimated using the Granger causality approach. The effective (signal flow) connectivity was analyzed in different frequency bands, i.e., delta, theta, alpha, beta, and gamma. As the connectivity patterns are complex and difficult to evaluate, a set of features able to discriminate between conditions was identified first and then analyzed further.
The analysis of the spectral properties of EEG reported in our previous study [13] on the same dataset suggested that differences between high-arousal and low-arousal conditions are reflected in elevated frontal delta and theta activity and signs of increased frontal and posterior beta and gamma activity. Surprisingly, the connections within the low (delta, theta and alpha) frequency bands did not show a potential to distinguish among the emotional playing tasks. This could indicate that a similar connection pattern was present in all emotional playing conditions in the low frequencies. In contrast, the activity in the higher frequencies, the beta and gamma ranges, demonstrated a dense connection pattern discriminating between emotional performance conditions. The observed effect matches the results by Dolan et al. [30], who, using brain entropy and signal complexity, demonstrated that prepared performance was associated with activity in low frequencies (delta, theta and alpha bands), while improvisation required activity at higher frequencies (beta and gamma). Improvisatory behavior in music was previously related to a network of prefrontal brain regions commonly linked to the pre-supplementary motor area, medial prefrontal cortex, inferior frontal gyrus, dorsolateral prefrontal cortex, and dorsal premotor cortex [31]. We showed that connections from frontal regions to all other regions were present and pronounced. This pattern might reflect the activation of anterior brain regions contributing to musical structure building when performing [32], emotional processing, in which the prefrontal region plays the most important role and interacts with almost all other regions [33], and executive functioning [34].
Surprisingly, however, in the gamma range, the right mid-frontal node displayed reduced density during emotional performance when compared to neutral playing. This observation could be partly related to the results of Pinho et al. [35]. The authors showed a decrease in dorsolateral prefrontal cortex activity in professional pianists improvising based on specific emotional cues (happy/fearful) but an increase in activity in the same region when the improvisation was based on specific pitch sets. For the left lateral frontal region, however, an opposite effect was observed: the node density was higher during emotional performance in the gamma and, partly, in the beta ranges. Previously, Bhattacharya et al. [36] demonstrated a leftward dominance in the degree of gamma band synchrony in musicians listening to music compared with non-musicians and attributed this effect to the manifestation of musical memory. Correspondingly, in the beta frequency band, predominant activation in the left hemisphere, along with inter-hemispheric integration between the right frontal and left parietal regions during improvisation, was observed by Rosen et al. [2] in professional musicians but not in amateur musicians [4]. Finally, Kaneda et al. [37] indicated that left inferior frontal activation might contribute to the generation of emotions by semantic elaboration and regulation through reappraisal. Taking into account the setup of the current experiment, where subjects were asked to reflect a certain emotion and evaluate the success of its transfer, it is quite likely that semantic and reappraisal-related processes were activated during the performance. Thus, the denser connections in the high frequency bands, capable of showing the most significant differences among positive, neutral, and negative emotional states [38], may indicate the mediation of information transmission during the processing of emotion-related activities [39] and the cognitive aspects of the active performance [36] on the one hand. On the other hand, this suggests that professional training in music allows a distinct context-sensitive functional connectivity between multiple cortical regions to occur during listening to music [36], potentially depending on the level of experience [2].
Investigating the neural underpinnings of active emotional music performance promotes a better understanding of human creative processes and capabilities. Notably, the connectivity measures seem to reflect different aspects of emotional performance than classical spectral EEG measures. The different approaches allow for identifying neural signatures potentially applicable in Brain-Computer Music Interface (BCMI) systems designed to support embodied music performance, or in neurofeedback contexts where a user may learn to control musical parameters via their online EEG signal. However, the study design utilized here is complex and repetitive, allowing unique data collection but resulting in a small sample size. This prevented the evaluation of the effects of individual factors such as gender, age or musical experience. Future work should test the suitability of the extracted parameters for BCMI use and the selection of effective classification approaches [40].
Conclusions
The current study demonstrates that EEG-based connectivity in beta and gamma frequency ranges can effectively reflect the state of the networks involved in the emotional transfer through musical performance, whereas utility of the low frequency bands (delta, theta, alpha) remains questionable.
Institutional Review Board Statement:
The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Institutional Ethics Committee of Rīga Stradiņš University (protocol code Nr. 6-1/01/59).
Informed Consent Statement:
Written informed consent was obtained from all subjects involved in the study.
Chemical and structural variations in hazelnut and soybean oils after ozone treatments
In the present work, the effects of ozone treatments on the structural properties of soybean oil (SBO) and hazelnut oil (HO) were investigated. The study presents findings on the oxidation of HO and SBO with ozone, which has not been fully studied previously. The HO and SBO were treated with ozone gas for 1, 5, 15, 30, 60, 180 and 360 min. The reactivity of ozone with the SBO and HO during the ozone treatment was analyzed by ¹H and ¹³C NMR, FTIR and GC. The iodine value, viscosity and color variables (L*, a* and b*) of untreated and ozone-treated oils were determined. Reaction products were identified according to the Criegee mechanism. New signals at 5.15 and 104.35 ppm in ¹H and ¹³C NMR, respectively, were assigned to the ring protons and ring carbons of 1,2,4-trioxolane (secondary ozonide) in the ozonated oils. Ozonated oils exhibited peaks at 9.75 and 2.43 ppm in ¹H NMR, which corresponded to the aldehydic proton and to the α-methylene group adjacent to the carbonyl carbon, respectively. The peak at 43.9 ppm in ¹³C NMR was related to the α-methylene group adjacent to the carbonyl carbon. The new signals formed in the ozonation process gradually increased with ozone treatment time. After 360 min of ozone treatment, the carbon-carbon double bond signal belonging to the unsaturated fatty acids disappeared completely from the spectrum. An increase in viscosity, a decrease in iodine value and a dramatic reduction in b* of the oil samples on the (+) axis were observed with increased ozone treatment time.
INTRODUCTION
While thermal methods can effectively eliminate pathogens, non-thermal technologies provide the potential advantages of maintaining physical, chemical, and sensory attributes and ensuring the raw characteristics of food products preferred by some consumers (Prakash, 2013). Ozone application is a relatively new method in food processing. It is a viable disinfectant for maintaining the microbiological safety of food products because of its substantial reactivity, penetrating ability and ability to decompose spontaneously to a nontoxic product (i.e., O₂) (Atungulu and Pan, 2012). It received GRAS (generally recognized as safe) status for use as a disinfectant and sanitizer in 1997 (USDA, 1997). Suggested applications of ozone in the food industry include food surface hygiene, sanitation of food plant equipment (Greene et al., 1999), reuse of waste water (Rice et al., 1981), cleaning of shellfish and disinfection of poultry carcasses and chill water (Yang and Chen, 1979), increasing the shelf-life of fruit and vegetables (Beuchat, 1992), and providing hygienic conditions in the storage atmosphere of nuts and cereals (Chen et al., 2014).

In addition, the ozone treatment of unsaturated triglycerides present in vegetable and nut oils has been suggested recently due to the use of ozonated oils in several applications (Soriano et al., 2003). Ozone-treated oils have been used in the food, cosmetic and pharmaceutical industries as antibacterial and fungicidal agents (Zanardi et al., 2008). The chemical reactions of ozone with the unsaturated fatty acids present in oils are very complex. The analyses related to these reactions provide useful information on the functional group changes during ozonation as well as the identification of the products, according to the Criegee mechanism regarding the formation of ozonides from alkenes and ozone. The reaction of ozone with the unsaturated bonds in the lipid fraction generates a mixture of oxygenated compounds such as ozonides, peroxides and aldehydes according to the mechanism described by Criegee (1975). The initial pathway is the formation of the primary ozonide, a 1,2,3-trioxolane. Pryor (1994) suggests that a diradical intermediate is produced from the primary trioxolane by an O-O bond scission. The diradical can decompose into an aldehyde and a carbonyl oxide. In an aqueous environment, the carbonyl oxide can form a hydroxyhydroperoxide, which breaks down to an aldehyde and hydrogen peroxide. In a relatively anhydrous environment, the primary ozonide rearranges to form the secondary ozonide (1,2,4-trioxolane), which can decompose to a hydroperoxide and an aldehyde. The hydroperoxide can initiate lipid peroxidation (Figure 1). Several oxygenated compounds such as hydroperoxides, ozonides, aldehydes, peroxides, and polyperoxides are produced by the Criegee mechanism. Various methods are used for the characterisation of ozonated vegetable oils, e.g., ¹H and ¹³C NMR, FT-IR, GC, determination of peroxide and acidity values and viscosity measurements (Soriano et al., 2003; Sega et al., 2010; Diaz et al., 2005; Rodrigues de Almeida Kogawa et al., 2015).
Physicochemical changes in the oxidation reaction of ozone with hazelnut oil and soybean oil have not been studied extensively. Hazelnut oil and soybean oil were selected due to their high amounts of unsaturated fatty acids. In addition, there is a growing interest in evaluating the role of hazelnuts in human nutrition and health (Alasalvar et al., 2006). This is related to their special fatty acid composition, which consists of more than 90% oleic and linoleic acids and small amounts of palmitic and stearic acids, along with health-promoting components including tocopherols, phytosterols, polyphenols and squalene (Alasalvar et al., 2006; Xu et al., 2007). Hazelnut oil is becoming increasingly popular in Turkey and elsewhere and is widely used for cooking, deep frying, salad dressings, and flavoring ingredients (Alasalvar et al., 2006). Soybean oil is used to produce margarine, shortening, and cooking oil in the food industry (Wang, 2002). Both are rich in monounsaturated (MUFA) and polyunsaturated (PUFA) fatty acids (approximately 83% MUFA and 9% PUFA in hazelnut oil, and 20.32% MUFA and 61.99% PUFA in soybean oil) (Alasalvar et al., 2006; Gunstone, 2002). In the present study, the effect of ozone gas on the oxidation of hazelnut oil and soybean oil was investigated for different ozonation periods. 1H and 13C NMR (Nuclear Magnetic Resonance) and FTIR (Fourier Transform Infrared) spectroscopy were used to detect the chemical changes in ozone treated oil samples. The fatty acid compositions of untreated and ozone treated oil samples were analyzed using GC (Gas Chromatography) to determine their variation. The changes in viscosity and color of ozone treated oil samples were also determined.
Materials
Hazelnut oil and soybean oil were purchased from local manufacturers (FISKOBIRLIK Company, Giresun, Turkey; Nadir Yağ Company, Gaziantep, Turkey). Acetic acid (100%) and chloroform (99.5%) were purchased from Merck. Potassium iodide, sodium thiosulfate, starch, Wijs solution, and cyclohexane were supplied by Riedel-de Haën. All chemicals used were of analytical grade.
Ozonation of hazelnut and soybean oils
Ozone gas was generated by an ozone generator (Ozone Marine, OMS Model, Izmir, Turkey) operating according to the corona-discharge method at a constant flow rate of 1 L/min with an ozone concentration of 40 mg/L. The oxygen required for ozone generation was provided from the air. A 100 mL oil sample was placed in a glass bottle. Ozone gas was directed into the bottle and bubbled through the oil sample for different periods of time (1, 5, 15, 30, 60, 180 and 360 min). The ozonated samples and a control sample were stored in hermetically sealed glass bottles kept in the dark at room temperature for up to 120 days and analyzed at specific time intervals (5, 10, 15, 20, 30, 40, 50, 60, 75, 90 and 120 days).
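As a rough worked example of the ozone dose implied by this setup, the sketch below multiplies the nominal generator output by the treatment time. The loop bounds mirror the treatment periods above; the assumption that all generated ozone is fed to the oil is ours, not the authors', and the absorbed dose will be lower since not all gas dissolves and reacts.

```python
FLOW_L_PER_MIN = 1.0     # generator flow rate (L/min)
OZONE_MG_PER_L = 40.0    # ozone concentration in the gas stream (mg/L)
OIL_VOLUME_ML = 100.0    # oil sample volume (mL)

for minutes in (1, 5, 15, 30, 60, 180, 360):
    # Total ozone fed to the sample at the nominal generator output.
    dose_mg = FLOW_L_PER_MIN * OZONE_MG_PER_L * minutes
    print(f"{minutes:>3} min: {dose_mg:7.0f} mg O3 fed per {OIL_VOLUME_ML:.0f} mL oil")
```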
Iodine value (IV)
The iodine value (IV) represents the quantity of iodine (in grams) that will react with the double bonds in 100 g of oil sample. The IV was determined according to the European Pharmacopoeia (2004) and calculated by means of the following equation: IV = 1.269 × (A − B)/m, where A is the volume in mL of thiosulphate solution used for the blank test, B is the volume in mL of thiosulphate solution used for the titration, and m is the quantity, in grams, of the substance. (The factor 1.269 corresponds to the 0.1 M sodium thiosulphate prescribed by the Pharmacopoeia.)
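A minimal sketch of this calculation is given below, assuming the Ph. Eur. factor of 1.269 for 0.1 M sodium thiosulphate; the titration volumes in the example are hypothetical and chosen only to land near the IV measured for untreated HO.

```python
def iodine_value(blank_ml, titration_ml, sample_g, thiosulfate_molarity=0.1):
    """Iodine value (g I2 per 100 g oil) from a Wijs titration.

    IV = 12.69 * M * (A - B) / m, where A is the blank volume (mL),
    B the sample titration volume (mL), M the thiosulphate molarity
    and m the sample mass (g); 12.69 * 0.1 = 1.269 for 0.1 M titrant.
    """
    return 12.69 * thiosulfate_molarity * (blank_ml - titration_ml) / sample_g

# Hypothetical volumes: 36.2 mL blank, 21.1 mL titration, 0.20 g sample.
print(round(iodine_value(36.2, 21.1, 0.20), 1))  # -> 95.8, near the 95.4 of crude HO
```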
Hunter colorimeter
The color of untreated and ozone treated oil samples was measured using a Hunter Lab colorimeter (ColorFlex, Hunter Associates Laboratory Inc., Reston, Virginia, USA). The instrument (65°/0° geometry, D25 optical sensor, 10° observer) was calibrated using white and black reference tiles. The color values were expressed as L* (whiteness or brightness/darkness), a* (redness/greenness) and b* (yellowness/blueness). Color measurements were reported as the average of triplicate measurements for each sample.
Gas chromatography analysis
The fatty acid compositions of hazelnut and soybean oil were determined using an Agilent 7890A gas chromatograph (Agilent Technologies, Santa Clara, CA, USA) with a split/splitless injector, a flame ionization detector and an HP-88 capillary column (88% cyanopropyl aryl; 100 m × 0.250 mm i.d., 0.20 μm film thickness). The injector and detector temperatures were 250 and 260 °C, respectively. The oven temperature program was set as follows: 1 min at 120 °C; from 120 to 175 °C at 10 °C/min; 10 min at 175 °C; from 175 to 210 °C at 5 °C/min; 5 min at 210 °C; from 210 to 230 °C at 5 °C/min; and 5 min at 230 °C. Helium was the carrier gas with a flow rate of 1.5 mL/min.
Fatty acid methyl esters (FAME) were prepared as follows: 2 mL n-hexane and 2 mL KOH (in methanol) were added to 1 drop of oil and the mixture was shaken vigorously. The supernatant (FAME) was taken and 1 µL of the FAME mixture was injected into the GC system.
Viscosity measurements
The viscosity of the untreated and ozone treated oil samples was measured using a controlled stress rheometer (Bohlin CVO-R, Malvern Instruments Limited, UK) with a cone-and-plate geometry (2°/20 mm). Shear rates in the range of 0-300 s−1 under steady shear conditions were applied to the oil samples and the resulting shear stress was measured at 25 °C.
FT-IR analysis
FT-IR analyses were applied to investigate the alterations in the functional groups found in hazelnut and soybean oil after ozone treatment.About 2 μL of sample were deposited between two disks of KBr, avoiding air bubble formation, and then IR spectra were recorded using a FT-IR spectrometer (FT-IR 100, Perkin Elmer Incorporation, USA).The FTIR spectra of the samples were measured in the 4000-650 cm −1 region at room temperature.
NMR analysis
1H and 13C NMR spectra were recorded on a Bruker Minispec spectrometer at 400 MHz and 25 °C in CDCl3. All the experiments were performed under the same experimental conditions and at the same sample concentration (about 100 µL of oil sample in 750 µL CDCl3). The reaction of ozone with HO and SBO was studied by 1H and 13C NMR at certain time intervals (1, 60, 180 and 360 min).
Statistical analysis
The data from the chemical analyses were analyzed using ANOVA. Statistical analyses were performed using the SPSS statistical package (IBM SPSS Statistics for Windows, Version 22.0; IBM Corp., Armonk, NY, 2013).
Iodine value (IV)
IV is often used to determine the amount of unsaturated bonds in oils, which react with iodine compounds. A high iodine number indicates the presence of a large number of C=C bonds (Thomas, 2000). The IV of HO before ozone treatment was measured as 95.4 ± 0.1 (Figure 2A). This value correlates well with previously reported values ranging from 90.6 to 97.4 (Xu et al., 2007). The IV of HO decreased dramatically as the samples were treated with ozone (Figure 2A), and the IVs of the untreated and ozone treated HO samples were significantly different (p < 0.05). As the treatment time is extended, the number of ozone molecules contacting a constant volume of the oil sample increases, causing an extensive reduction in the amount of unsaturated bonds. Although a sharp reduction in the IV of the samples was observed with respect to the ozone treatment period, the variation in IV with respect to storage (Figure 2B) was not statistically significant (p > 0.05). Since ozone reacts with the carbon-carbon double bonds in the unsaturated fatty acids, the IV may serve as a measurement of residual double bonds. The IV of the ozone treated oil samples correlated well with data from previous studies on different vegetable fats and oils in relation to ozone treatment time (Skalska et al., 2009; Zanardi et al., 2008). Zanardi et al. (2008) reported that the IV of sesame oil samples decreased as ozone treatment time was extended. Skalska et al. (2009) stated that almost all unsaturated bonds in the oils were saturated depending on ozonation time.
Color measurement
Table 1 presents the variation in L*, a*, and b* of untreated and ozone treated HO and SBO. The L*, a*, b* (CIE Lab) system is an international standard indicating lightness (L* = 0-100) and chromaticity on a green (−a*) to red (+a*) axis and on a blue (−b*) to yellow (+b*) axis (Rocha and Morais, 2003). While the b* value of the ozone treated oil samples was reduced significantly on the (+) axis with increased ozone treatment period (p < 0.05), the a* and L* values did not exhibit significant changes (p > 0.05). These changes may arise from the pronounced effect of ozone treatment on the carotenoid pigments which give the oils their yellow color. The color of hazelnut oil and soybean oil turned pale as ozone treatment time increased. The loss in color of the oils was more evident after 360 min of ozone treatment. This correlated well with previously reported studies. The carotenoids in nut oils are β-carotene, lutein, and zeaxanthin (King et al., 2008). Soybean oil also contains lutein, β-carotene, and chlorophyll pigments (Gunstone, 2002).
The basic chemical structure of carotenoids consists of a polyenic chain ranging from 3 to 15 conjugated double bonds (Britton, 1995).The carotenoids can be influenced by reactions of oxidation and hydrolysis, which modify their biological actions due to the highly unsaturated polyenic chain (Rodriguez and Rodriguez-Amaya, 2007).The oxidation of carotenoids leads to the formation of trace quantities of several compounds with a low molecular weight.Depending on the concentration of ozone, the percentage decay of β-carotene increases, possibly due to the Criegee mechanism (Benevides et al., 2011).
Gas chromatography analysis
The fatty acid compositions of HO and SBO were determined by gas chromatography. The main unsaturated fatty acid present in HO was oleic acid (C18:1), whereas SBO contained linoleic acid (C18:2) as well as oleic acid (C18:1) as unsaturated fatty acids. The unsaturated fatty acid content of crude HO was calculated as about 90.76%. HO contained a high proportion of unsaturated fatty acids (79.15% oleic acid, 11.43% linoleic acid and 0.18% gondoic acid) and a small proportion of saturated ones (5.31% palmitic acid and 2.75% stearic acid). SBO contained a high amount of linoleic acid (56.25%), followed by oleic acid (21.93%) and linolenic acid (7.40%). The total unsaturated fatty acid content of SBO was about 85.58%. The polyunsaturated fatty acid (PUFA) contents of HO and SBO were about 11.43% and 63.65%, respectively, while HO and SBO contained about 79.33% and 21.93% monounsaturated fatty acids (MUFA), respectively.
Table 2 illustrates the changes in the fatty acid compositions of HO and SBO after ozone treatment. The percentages of oleic acid and linoleic acid were observed to decrease after ozone treatment. The reductions in the percentages of linoleic acid and oleic acid in OHO60 were about 48.9% and 7.0%, respectively, while the decreases in the percentages of linoleic acid and linolenic acid in SBO60 were about 20.7% and 45.5%, respectively. The reduction in the percentage of linolenic acid was higher than those of linoleic acid and oleic acid. The percentage of oleic acid in SBO60 was altered only slightly by 60 min of ozone treatment, since linolenic acid is more prone to oxidation than linoleic and oleic acids due to its higher number of double bonds. The relative percentages of unsaturated fatty acids in the oils were reduced after 60 min of ozone treatment while the relative percentages of saturated fatty acids increased, which is consistent with a previous study (Liu and White, 1992).
As seen in Table 2, the percentages of unsaturated fatty acids in the oil samples decreased rapidly after 180 and 360 min of ozone treatment. The percentages of oleic acid in the HO and SBO decreased by 86.34% and 95.44%, respectively, and linolenic and linoleic acids were destroyed completely after 360 min of ozone treatment. The increase in the relative percentages of saturated fatty acids reached its highest value with 360 min of ozonation. Furthermore, new peaks with different numbers of carbon atoms (4-24) were observed in the chromatograms of the ozonated samples. Considering that the predominant unsaturated fatty acids in HO and SBO were oleic and linoleic acids, respectively, it can be presumed that the new peaks formed mainly from the reaction of ozone with those fatty acids. In oleic acid, the double bond is present at the C9 position, while linoleic acid contains two double bonds at the C6 and C9 positions; therefore, the highest probability of ozone reaction was with the double bond at the C9 position. The oxidation of oleic acid with ozone involves the cleavage of the carbon-carbon double bond (Zahardis et al., 2006). The oxidation products, the so-called Criegee intermediates, are highly reactive and can undergo ester, acid and hydroperoxide formation. In the literature, the products formed from the cycloaddition of a Criegee intermediate with a double bond of fatty acids have been detected in the reaction of ozone with oleic acid in an aqueous medium (Zahardis et al., 2006). The reaction mechanism of ozone with unsaturated fatty acids, and the oxidative products formed, change in relation to the nature of the solvent (Nishikawa et al., 1995). In aprotic media, secondary ozonide (1,2,4-trioxolane) and peroxide oligomers are formed by the reaction of zwitterions and aldehydes, whereas in a protic medium the zwitterions react with water to produce different peroxidic species (hydroperoxides, oligomeric peroxides) and carboxylic acids (Murray, 1968).
The results of the gas chromatography analysis were consistent with the IV analysis, in that the decrease in IV reflected the reduction in double bonds. As a result of 360 min of ozone treatment, the percentage of oleic acid in the HO decreased by 86.34%, while a comparable dramatic reduction of 85.5% was seen in the IV.
The ratios of C18:2/C16:0 for HO and SBO were 2.15 and 5.31, respectively. Table 2 also shows that the ratio of C18:2/C16:0 decreased from 2.15 to 0.32 for HO and from 5.34 to 0.49 for SBO as ozone treatment time increased. After 360 min of ozone treatment, the ratio could not be calculated due to the complete disappearance of linoleic acid. The ratio of linoleic acid (C18:2) to palmitic acid (C16:0) can be used to determine the extent of oil deterioration because linoleic acid is sensitive to oxidation while palmitic acid is stable against it (Tan and Che Man, 1999). The ratio of saturated to unsaturated fatty acids is used to indicate changes in the nutritional value of oils (Tsanev et al., 1998), a small ratio being considered to indicate a high nutritional content. The ratios of saturated to unsaturated fatty acids for HO and SBO were 0.08 and 0.16, respectively. It can be seen in Table 2 that the nutritional values of the oil samples decreased as ozone treatment progressed. This results from the increase in the relative percentages of saturated fatty acids and the decrease in those of unsaturated fatty acids after ozone treatment. The untreated HO and SBO had the highest nutritional values, while the nutritional values of OHO360 and SBO360 were minimal.
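The two indices discussed here are simple ratios of the reported composition. The sketch below recomputes them for crude HO from the percentages given earlier; the small difference from the published 0.08 likely reflects rounding.

```python
# Crude HO composition from the text (relative %).
ho = {"C16:0": 5.31, "C18:0": 2.75, "C18:1": 79.15,
      "C18:2": 11.43, "C20:1": 0.18}
SATURATED = {"C16:0", "C18:0"}

def deterioration_ratio(comp):
    # C18:2/C16:0 falls as the oxidation-prone linoleic acid is consumed.
    return comp["C18:2"] / comp["C16:0"]

def sat_unsat_ratio(comp):
    # Saturated/unsaturated ratio: smaller values indicate higher nutritional value.
    sat = sum(v for k, v in comp.items() if k in SATURATED)
    unsat = sum(v for k, v in comp.items() if k not in SATURATED)
    return sat / unsat

print(round(deterioration_ratio(ho), 2))  # -> 2.15, matching the reported value
print(round(sat_unsat_ratio(ho), 2))      # -> 0.09, close to the reported 0.08
```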
Viscosity measurement
Figure 3 presents the flow behaviors of ozone treated and untreated HO and SBO. The flow behaviors of the oil samples were determined by steady shear viscosity measurements at 25 °C. Figure 3 shows a linear increase in shear stress with increased shear rate, characterizing Newtonian flow for all the samples. The viscosity of the untreated and ozone treated oils was calculated from the slope of the curve. Fresh vegetable oils have been shown to exhibit Newtonian flow behavior (Santos et al., 2004). However, the viscosity of the oil samples varied distinctly depending on the ozone treatment time. The highest viscosity was observed for the 360 min ozone treated oil samples, followed by 180, 60 and 30 min, while the viscosities of the 1 and 5 min ozone treated samples were close to the viscosity of the crude oil. The increase in the viscosity of HO and SBO with ozone treatment time can be explained by the decrease in unsaturation (Sega et al., 2010). Adhvaryu et al. (2000) reported that as oxidation reactions proceed, the products formed lead to oil thickening. This has been explained by increased van der Waals interactions due to the disappearance of the double bonds (Zanardi et al., 2008). The modification of the unsaturated acyl chains affects the mobility and the reactivity of the molecules involved in the reaction. The reaction of ozone with unsaturated fatty acids produces different types of peroxidic compounds, as described by the Criegee mechanism. Van der Waals interactions of these peroxidic compounds increase, forming species with high molar masses; therefore, the ozonation of double bonds in oils increases their viscosity. However, the viscosities of the untreated and ozone treated HO and SBO did not change with storage time. Figures 4A, 4B and 4C show the relationships between viscosity and the oleic acid and linoleic acid contents of HO and SBO. The viscosities of the untreated and ozone treated oils were plotted against the percentages of oleic acid and linoleic acid found in the oils, and there was a high correlation between them (Figures 4A, 4B and 4C; R2 = 0.951, R2 = 0.87 and R2 = 0.953, respectively). The oleic acid content had the greater effect on the flow behavior of HO: an increase in viscosity was observed as the oleic acid content of HO decreased with ozone treatment time. The viscosity of HO increased by 5.7 times after 360 min of ozone treatment, while the viscosity of SBO increased by about 20 times. Kim et al. (2010) revealed that the double bonds which form a twist in the fatty acid chain do not allow fatty acids to pack closely. The dimensions and orientations of oil molecules influence the viscosity of oils. Unsaturated oils converted to saturated forms after ozonation lose their fluidity and their viscosity increases. Fatty acids with more double bonds do not have a rigid and fixed structure, and are loosely packed and more fluid-like (Kim et al., 2010).
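Extracting a Newtonian viscosity from steady-shear data amounts to a linear fit of stress against rate. The sketch below illustrates this with synthetic data; the viscosity value and noise level are invented for the example, not taken from the study.

```python
import numpy as np

# Synthetic steady-shear data: for a Newtonian fluid, stress = viscosity * rate,
# so the slope of a linear fit estimates the viscosity (Pa.s).
rng = np.random.default_rng(1)
rate = np.linspace(10.0, 300.0, 30)                      # shear rate, 1/s
stress = 0.055 * rate + rng.normal(0, 0.05, rate.size)   # ~55 mPa.s plus noise

slope, intercept = np.polyfit(rate, stress, 1)

# Coefficient of determination, as reported for Figures 4A-4C.
pred = slope * rate + intercept
r2 = 1 - np.sum((stress - pred) ** 2) / np.sum((stress - stress.mean()) ** 2)

print(f"viscosity ~ {slope * 1000:.1f} mPa.s, R^2 = {r2:.3f}")
```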
FTIR (Fourier transform infrared radiation) analysis
Variations in the frequency and absorbance of the FTIR spectral bands, especially those belonging to unsaturated bonds, were observed after ozone treatment (Guillen and Cabo, 2000). The intensity and frequency of the bands related to the carbon-carbon double bonds in the unsaturated oils decreased, or these bands became overlapping and broadened, as ozone treatment time was extended; the double bond bands finally disappeared completely with increasing ozone treatment time. Broad and relatively strong bands corresponding to secondary ozonide appeared as ozone treatment proceeded. Figures 5A and 5B show the FTIR spectra of untreated and ozone treated HO and SBO. As seen in these figures, the weak band corresponding to the C=C-H stretch at 3006 cm−1 was reduced as ozone treatment time increased. In an earlier study, a higher frequency of the =CH stretching vibration band assigned to around 3006 cm−1 was observed for oils with a large proportion of polyunsaturated fatty acids than for oils with mainly monounsaturated fatty acids (Guillén and Cabo, 2000). We observed that the frequency of this band for HO and SBO was sharply reduced as ozone treatment proceeded. The reduction in frequency around 3006 cm−1 in the FTIR spectrum of SBO was more pronounced after 180 and 360 min of ozone treatment due to its higher polyunsaturated content compared with HO. A dramatic reduction in band intensity at around 722 cm−1, corresponding to overlapping methylene (-CH2) and olefin rocking vibrations, was observed as ozone treatment continued (Silverstein et al., 1974).
The bands at 2922 and 2853 cm−1 corresponded to asymmetrical and symmetrical stretching of the methylene (-CH2) group, mainly of unsaturated bonds in the lipids (Dogan et al., 2007). The intensities of these bands decreased slightly during ozone treatment; the reduction in the frequencies of the bands was more pronounced for SBO than for HO.
The untreated HO and SBO showed a relatively sharp band at 1743 cm−1 due to the carbonyl stretch of the triglyceride ester group (Figures 5A and 5B). Broadening of this band and a decrease in its frequency were observed due to the formation of new carbonyl compounds as ozone treatment was extended (Soriano et al., 2003).
A small band assigned at 1463 cm−1 corresponds to the bending vibrations of CH2 and CH3 groups (Che Man and Setiowaty, 1999). The absorbance of this band decreased as ozone treatment time increased (Figures 5A and 5B). The bands at 1237 and 1160 cm−1, related to stretching vibrations of the C-O group in esters, were also observed in the spectra of HO and SBO (Figures 5A and 5B) (Guillen and Cabo, 1997).
The band at 1105 cm−1, which became broader and stronger in the FTIR spectra of HO and SBO as the ozonation period increased, was assigned to the C-O stretch of the secondary ozonide, consistent with previous research (Wu et al., 1992).
NMR (Nuclear magnetic resonance) analysis
Nuclear Magnetic Resonance (NMR) spectroscopy of untreated and ozone treated HO and SBO was performed, and the 1H spectra were used to identify variations in the structure of HO and SBO with respect to ozone treatment time.
Table 3 shows the assignments of the main groups of untreated and ozone treated HO and SBO and the variations in the intensities of those main groups with ozonation period in the 1H NMR spectra. The signals associated with olefinic protons from fatty acids were determined at 5.30 ppm. The signals at 2.00 and 2.75 ppm belong to the methylenic groups on both sides of the olefinic protons and the methylenic group between olefinic protons, respectively. The methylenic group in the α-position with respect to the carbonylic group was assigned at 2.30 ppm. The double doublet belonging to the glycerol protons in the sn-1,3 position of the glycerol moiety was assigned at 4.30 ppm. The intensities of the olefinic proton signals at 5.30 ppm and of the methylenic group between olefinic protons at 2.75 ppm gradually decreased with increasing ozone treatment. The reduction in the intensity of the signal at 5.30 ppm after 360 min of ozone treatment was 70.29% for HO, while a 79.40% reduction was observed for SBO. The signal at 2.75 ppm for HO disappeared completely, while the intensity of this signal for SBO was reduced by 94.50% after 360 min of ozone treatment. It was concluded that the methylenic group corresponding to linolenyl chains in HO reacted completely with ozone due to the lower concentration of linoleic acid in HO than in SBO. The intensity of the signal at 2.00 ppm for HO and SBO diminished as ozone treatment progressed; the reductions were 92.35% and 63.58% for HO and SBO, respectively. The intensity of the methylenic group in the α-position with respect to the carbonylic group, assigned at 2.30 ppm, remained almost the same as ozone treatment time changed. A similar trend was observed for the signal at 4.30 ppm, belonging to the glycerol protons in the sn-1,3 position of the glycerol moiety.
Assignments similar to those in the 1H NMR spectra were also determined in the 13C NMR spectra of HO and SBO. Two signals at 127 and 130 ppm, which corresponded to the unsaturation of fatty acids, disappeared completely after 360 min of ozonation. The signal at 174.3 ppm was related to the carbonyl carbon. The other carbons in the structure were assigned between 24.8 and 33.8 ppm (Diaz et al., 2005).
The new signals found in the 1H NMR spectra of ozone treated HO and SBO are summarized in Table 3. A new signal at 5.15 ppm was assigned to the ring proton of 1,2,4-trioxolane (Sadowska et al., 2008). This result was confirmed by the appearance of the signal at 104.35 ppm in the 13C NMR spectra (Wu et al., 1992). Ozone treated HO and SBO also exhibited new peaks at 9.75 and 2.43 ppm in the 1H NMR spectra, belonging to the aldehydic proton and to the α-methylene group adjacent to the carbonyl carbon, respectively. In the 13C NMR spectrum, the α-methylene group adjacent to the carbonyl carbon was determined at 43.9 ppm. In the 1H NMR spectra of HO and SBO, a weak signal corresponding to aldehydes was identified at 9.75 ppm. Furthermore, the protons of the methylenic groups assigned at 2.00 and 2.75 ppm in the 1H NMR spectra of the untreated oils shifted to 1.39 and 1.65 ppm in the spectra of the ozone treated oil samples due to the formation of 1,2,4-trioxolane (Anachkov et al., 2000).
CONCLUSIONS
The oxidation of HO and SBO with ozone resulted in oxidation products which were detected by FTIR and NMR spectroscopic analyses. These spectroscopic analyses are useful and convenient techniques for explaining the structural variations in the oil samples during ozone treatment. The NMR and FTIR analyses clearly showed that new signals corresponding to ozonation products gradually increased with ozone treatment time. After 360 min of ozone treatment, the carbon-carbon double bond signal belonging to the unsaturated fatty acids disappeared completely from the spectrum. The gas chromatography results showed that the carbon-carbon double bonds were consumed completely as ozone treatment progressed. The gas chromatography results were also consistent with the IV analysis, in that the decrease in IV reflected the reduction in double bonds.
The relationships between ozone treatment time and the structural and viscosity changes in HO and SBO were evaluated. We observed an increase in the viscosity of HO and SBO with ozone treatment time due to the decrease in unsaturation. Moreover, ozone treatment may result in a significant variation in color parameters due to the degradation of carotenoid pigments.
The results of this work are consistent with the previously proposed mechanism for the reaction of ozone with vegetable oils (Soriano et al., 2003; Sega et al., 2010; Diaz et al., 2005). It was also demonstrated that the applied ozone dose is an important parameter in determining the effects of ozone on the structural properties of HO and SBO.
Figure 1 .
Figure 1. Classic ozone addition to the carbon-carbon double bonds of polyunsaturated fatty acids.
Table 2 .
Reduction in fatty acid composition of HO and SBO
|
v3-fos-license
|
2019-12-05T09:32:27.922Z
|
2019-11-01T00:00:00.000
|
213214466
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://www.iiste.org/Journals/index.php/JHMN/article/download/50488/52145",
"pdf_hash": "2dc38b953591bbede94e0e94a1fc411da6748569",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45921",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "76c0a076531d7683fd3c92d8313c5084f7a4e015",
"year": 2019
}
|
pes2o/s2orc
|
Prevalence of HIV and Associated Risk Factors Among Adults in Negele Borena Hospital, Guji zone, Oromia Region, Ethiopia
Background:- The peak incidence of Human Immunodeficiency Virus (HIV) infection occurs among adults, who are at the most productive age of the population. They are vulnerable to HIV because of their age, living arrangements, and cultural influences. Objective:- The aim of this study was to determine the prevalence of HIV infection and its risk factors among adults in Negele Borena Hospital. Method:- A cross-sectional study was conducted on 384 adults in Nagele Borena Hospital from April to September 2017. A self-administered questionnaire was used to collect data on socio-demographic variables, knowledge of HIV/STIs and behavioral factors. In collaboration with the hospital, whole blood samples were tested for the presence of antibody to HIV using the national HIV rapid diagnostic tests algorithm. Chi-square tests were conducted to identify risk factors, and finally regression analysis was performed to identify the independent risk factors that influence the incidence of HIV/AIDS. Results:- The sero-prevalence of HIV was 11 (2.86%), of whom 4 (2.03%) were males and 7 (3.74%) were females. Alcohol drinking (AOR = 5.2, 95% CI 1.1-25), khat chewing (AOR = 5.8, 95% CI 1.3-27), not discussing sexual issues openly with the family (AOR = 13, 95% CI 1.6-102), peer pressure (AOR = 22.9, 95% CI 3.9-131) and multiple sexual partners (AOR = 5.2, 95% CI 0.9-29) were the risk factors for HIV infection and the determinants of HIV/AIDS transmission. Conclusion and recommendation:- The prevalence of HIV infection among adults at Negele Borena Hospital is high. New infections among young people suggest that the disease is not yet under control in the country. Therefore, a planning strategy to prevent the spread of HIV infection in the town is critical.
Introduction
AIDS, an acronym for acquired immune-deficiency syndrome, is a disease that has become one of the world's most serious health and development challenges, especially within Sub-Saharan Africa. The AIDS epidemic was noticed first in central Africa; soon after, it was observed in East Africa, and subsequently in West Africa. HIV is a retrovirus closely related to the simian immunodeficiency virus (SIV), which infects other non-human primates, especially the African green monkey, although SIV does not cause immunosuppression among monkeys. This apparent correlation has led to speculation among scientists that African hunters who butchered and ate monkeys (a traditional food source) might have been exposed to a mutated form of the virus that was infective to humans (Carter et al., 2007).
AIDS was first recognized in the USA in 1981 G.C. among homosexual males. Opportunistic infections such as Pneumocystis carinii pneumonia were seen in five homosexual men, and Kaposi sarcoma was diagnosed in 26 homosexual men with the virus. HIV was isolated from patients with lymphadenopathy in 1983 G.C., and in 1984 G.C. the virus was clearly demonstrated to be the causative agent of AIDS (Lissan, 2004).
AIDS makes the body too weak to fight off infectious diseases. The disease is caused by a virus known as the human immunodeficiency virus (HIV). It causes AIDS by infecting and damaging part of the body's defense against infection: a subset of the white blood cells known as lymphocytes, which are supposed to fight off invading germs. The virus attacks specific lymphocytes called T helper cells, takes them over and multiplies; this destroys more T cells and damages the body's ability to fight off invading germs and diseases.
When the number of T cells decreases to a very low level, people with HIV become more susceptible to other infections, and they may get infections or certain types of cancer that a healthy body would normally be able to fight off. This weakened immunity is known as AIDS and can result in severe life-threatening infections, some forms of cancer, and deterioration of the nervous system. The disease is characterized by a spectrum starting from primary infection, with or without an acute syndrome, followed by a relatively long asymptomatic stage, after which most patients progress to advanced disease. Although AIDS is always the result of an HIV infection, not everyone with HIV has AIDS. In fact, adults who become infected with HIV may appear healthy for years before they get sick with AIDS (Tadesse, 2007).
The Ethiopian Behavioral Surveillance Survey First Round, which used both survey and qualitative methods among 10 socioeconomic groups, reported above 90% knowledge of HIV and AIDS in all groups but high rates of risk behavior and low awareness of preventive measures, with the worst scores for rural females. Only 27% of those who had engaged in unprotected sex considered themselves to be at risk of HIV infection, and about two-thirds of the regular drug and alcohol users had recent unprotected sex with a non-marital partner. More than 90% of the infections in Ethiopia take place among people aged 15-49, the most economically productive segment of the population. Therefore, the aim of the current study was to determine the current status of HIV-infected individuals in the study area and to explore the knowledge, attitudes and risky practices of the study participants regarding HIV/AIDS (HAPCO, 2002).
Study Area Description
Negele Borena is a city administration in Guji Zone, Oromia Regional State. It is located 600 km southeast of Addis Ababa and about 340 km from Hawassa. The town is bordered to the east by the Ethiopian Somali Regional State, to the south by Borena Zone, to the west by West Guji Zone and to the north by Bale Zone of Oromia Regional State. The town has one hospital, one health center and many private clinics serving the whole population. Based on figures from the Central Statistical Agency in 2004 E.C., the town has an estimated total population of 72,817, of whom 37,527 were males and 35,290 were females. The study was carried out between April and September 2017 at Nagele Hospital.
Study Design (Approach)
A health-institution-based cross-sectional study design was employed at Nagele Borena Hospital from April to September 2017.
Source Population
The source population was all adults aged 15-49 years in Nagele Borena city administration, Guji Zone.
Study Population
The target population of this research was all adults aged 15 to 49 years who consulted their physician during the study period in Nagele Hospital.
Sample Size
Sample size for this study was determined using the single population proportion formula. Since the prevalence of HIV/AIDS in the study area is unknown, a probability of HIV infection of 0.5 (50%), a 95% confidence interval and a 5% margin of error were used: n = (Z_α/2)^2 × p(1 − p) / d^2, where n is the minimum sample size needed; p is the proportion (0.5), assuming that the proportion of HIV infection is 50%; d is the margin of error (0.05); and Z_α/2 is the value of the standard normal distribution corresponding to a significance level of α = 0.05, which is 1.96. Thus n = (1.96)^2 × 0.5 × 0.5 / (0.05)^2 ≈ 384.
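As a check on the arithmetic, a minimal sketch of the sample-size formula:

```python
def single_proportion_sample_size(p=0.5, d=0.05, z=1.96):
    """n = z^2 * p * (1 - p) / d^2 (single population proportion)."""
    return z ** 2 * p * (1 - p) / d ** 2

# With p = 0.5, d = 0.05 and z = 1.96 this reproduces the study's n.
print(round(single_proportion_sample_size()))  # -> 384 (384.16 before rounding)
```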
Sampling Methods
A health-institution-based comparative cross-sectional study was conducted in Negele Hospital from April to September 2017. A total sample size of 384 was determined, and these participants were recruited into the study at the Hospital using a simple random sampling technique. The male-to-female ratio was kept approximately proportional to the participant numbers.
HIV Infection Screening
In collaboration with the hospital, capillary blood was collected using a sterile lancet from each study participant after thoroughly cleaning the fingertip with 70% alcohol swabs. The first drop of blood was wiped away and the remaining blood drops were collected using capillary tubes. Antibody to HIV was tested using the national HIV rapid diagnostic tests algorithm. Initially, HIV infection was screened using KHB (Bio-Engineering, Shanghai, Kehua). HIV positive samples were re-tested with STAT PAK (Chembio Diagnostics, Inc., Medford, New York). HIV positive samples yielding discordant results between the first and the second tests were tested again with Unigold (Trinity Biotech PLC, Bray, Ireland). The results were interpreted using the current national algorithm for screening of HIV infection from whole blood, which was adopted from WHO.
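The serial testing flow described here can be summarized as a small decision function. The sketch below is a simplified reading of the text (KHB screen, STAT PAK confirmation, Unigold tie-breaker); the actual national algorithm may include additional rules, such as reporting indeterminate results.

```python
def serial_hiv_algorithm(khb, stat_pak=None, unigold=None):
    """Simplified serial rapid-test logic; True = reactive, False = non-reactive."""
    if not khb:
        return "negative"                # non-reactive screen: report negative
    if stat_pak is None:
        return "retest with STAT PAK"    # reactive screens need a second test
    if stat_pak:
        return "positive"                # two concordant reactive results
    if unigold is None:
        return "resolve with Unigold"    # discordant pair needs a tie-breaker
    return "positive" if unigold else "negative"

print(serial_hiv_algorithm(khb=True, stat_pak=False, unigold=True))  # -> positive
```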
Eligibility Criteria
Inclusion Criteria
All adults aged 15-49 years who registered to consult their physician or for voluntary counseling and testing (VCT) for HIV during the study period were included in the study.
Exclusion Criteria
Participants younger than 15 years and those who were very sick and unable to respond to the questions were excluded, as were adults aged 15-49 years who registered outside the study period.
Data Analysis
All data were entered into SPSS Version 20 statistical software, and the prevalence of and risk factors associated with HIV/AIDS were determined. Tests of association between the outcome of interest (HIV-positive or AIDS patient) and independent socioeconomic, demographic, biological and cultural variables (age, sex, wealth status, type of work, type of partner's work, mobility behavior, sexual behavior, knowledge of HIV/AIDS, educational status, and risky sexual practices such as multiple sexual partners, chewing khat and drinking alcohol) were evaluated using analysis of variance. P-values less than 0.05 were considered statistically significant. Separate data for HIV risk factors and HIV testing results were merged to form one database for the survey. HIV serostatus was set as the outcome variable in the analysis. Bivariate analysis and a backward elimination procedure were used to assess the associations between the dependent and independent variables.
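The screening-then-modeling workflow used in this study (bivariate tests, then multivariable logistic regression on candidates; the p < 0.1 threshold is stated in the Results) can be sketched as follows. The data frame and variable names are illustrative, statsmodels is an assumed tool rather than the software actually used (SPSS), and with random data the candidate list may well be empty.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical binary data mirroring the study's workflow.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.integers(0, 2, size=(384, 4)),
                  columns=["alcohol", "khat", "peer_pressure", "hiv_status"])

candidates = []
for var in ["alcohol", "khat", "peer_pressure"]:
    # Bivariate (crude) logistic regression for each factor.
    m = sm.Logit(df["hiv_status"], sm.add_constant(df[[var]])).fit(disp=0)
    if m.pvalues[var] < 0.1:             # screening threshold used in the study
        candidates.append(var)

if candidates:
    # Multivariable model on the screened candidates; exp(beta) gives the AORs
    # (the 'const' entry is the baseline odds, not an AOR).
    full = sm.Logit(df["hiv_status"], sm.add_constant(df[candidates])).fit(disp=0)
    print(np.exp(full.params))
```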
The Knowledge of Adults on Sexually Transmitted Infections (STIs)
In this study, the knowledge of the participants about sexually transmitted infections was evaluated: 380 (98.9%) of the respondents claimed to have heard about sexually transmitted infections (STIs), while 4 (1.04%) had not. About 74.7% of the adults stated that they knew the modes of STI transmission, while about 25.3% did not know how STIs are transmitted. The treatability and preventability of STIs were known by 267 (69.5%) and 286 (75.4%) of the adults, respectively, while 67 (17.44%) and 94 (24.5%), respectively, responded that they did not know. The knowledge gap between the sexes is presented in Table 3 below.
The Knowledge Adults on HIV/AIDS
Almost all of the adults in this study had heard about HIV/AIDS and knew its modes of transmission (381 (98.9%) and 321 (84.3%), respectively). Unsafe sex, common use of sharp materials, and mother-to-child transmission were mentioned as ways of HIV transmission by 358 (95.2%), 9 (2.4%) and 9 (2.4%) respondents, respectively. Nine (2.4%) of the adults responded that a pregnant woman can transmit HIV to her unborn child. About 33 (8.66%) of the adults responded that people are likely to get HIV by deep kissing. The respondents who stated that taking a test for HIV one week after having sex will tell a person whether she or he has HIV, and those who knew that AIDS is preventable, numbered 27 (7.1%) and 302 (79.3%), respectively. Abstinence (279, 73.2%), condom usage (89, 23.4%) and being faithful (11, 2.88%) were reported by the adults as means of HIV/AIDS prevention, as shown in Table 4 below.
The Source of Information about HIV/AIDS of Adults
The sources of information about HIV/AIDS were reported as health extension workers, parents, school clubs, and media (radio, television), with respective percentages of 107 (28.1%), 23 (6%), 99 (25.9%), and 152 (39.9%), as shown in Table 5 below. Table 6 below shows the CD4 counts of all HIV-positive adult patients, which were determined before ART drugs were started in the hospital. Females were more vulnerable to AIDS than males: 4 (57.1%) of the positive females had CD4 counts of less than 100/mm3 of blood, whereas this was not the case in males, about 2 (50%) of whom ranged between 101-350/mm3. Among the positive adults, 2 (50%) of the males and 3 (42.8%) of the females were above 350 CD4/mm3. Regarding the age distribution, 3 (60%) of the adults aged 26-35 years had less than 350 CD4/mm3. Again, 4 (66.7%) of the unmarried adults had CD4 counts of less than 350/mm3 of blood, as shown below.
The Adults' Individual Character and Behavioral Risk Practice
Individual habits, when practiced badly, can eventually control and pressure a person into certain risky practices. The risk of HIV infection is determined by the total number of unprotected sex acts with an HIV-infected partner and the efficiency of HIV transmission, as shown in Figure 2 below.
Figure 2: Framework of biomedical and behavioral risk factors for HIV acquisition.
The Bivariate and Multivariate Analysis of Risk Behavioral Practice Among Adults in Negele Borena Hospital
Table 7 below shows the unadjusted and adjusted associations between risky behavioral practice and HIV/AIDS prevalence, determined by bivariate and multivariate analysis, respectively. In order to investigate the associations of the independent variables, both univariate and multivariate analyses were used. A total of thirteen variables (age, sex, residence, level of education, marital status, occupation, income, drinking alcohol, chewing khat, smoking cigarettes, sexual partner, discussion about sexual issues, and peer pressure) were considered for the bivariate analysis. However, only ten variables (age, sex, residence, marital status, drinking alcohol, chewing khat, smoking cigarettes, sexual partner, discussion about sexual issues, and peer pressure) were associated with risky behavioral practice in the bivariate analysis and were selected as candidate variables for multivariable logistic regression analysis. To determine independent predictors of HIV infection, multivariable logistic regression analysis was employed by taking variables whose p-value was < 0.1 in the binary logistic regression model. A p-value of < 0.05 was considered statistically significant. The multivariable logistic regression analysis took all ten factors into account simultaneously, and only the five most contributing factors remained significantly and independently associated with risky behavioral practice (drinking alcohol, chewing khat, discussion about sexual issues in the family, peer pressure, and multiple sexual partners). Drinking alcohol showed a statistically significant association with the outcome variable: adults who were frequent drinkers were 5.2 times more likely to have risky behavioral practice than those who did not drink (AOR = 5.2, 95% CI 1.1-25). Adults who drank alcohol daily were 3.3 times more likely to experience risky behavioral practice than those who did not (AOR = 3.3, 95% CI 0.21-50), but this was not statistically significant. Besides these factors, chewing khat is another risk behavior: adults who chewed khat frequently were 5.8 times more likely to have risky behavioral practice than those who did not chew khat (AOR = 5.8, 95% CI 1.3-27).
Furthermore, adults who did not discuss sexual issues openly with their family were 13 times more likely to have risky behavioral practice than those whose families discussed sexual issues (AOR = 13, 95% CI 1.6-102). Adults who were under peer influence were 22.9 times more likely to have risky behavioral practice than those who were not (AOR = 22.9, 95% CI 3.9-131). Again, adults who had multiple sexual partners in the last 12 months were 5.2 times more likely to have risky behavioral practice than those who did not (AOR = 5.2, 95% CI 0.9-29).
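The adjusted odds ratios and intervals quoted above have the usual logistic-regression form OR = exp(β) with CI = exp(β ± 1.96·SE). A small sketch follows, with an invented coefficient and standard error chosen only to land near the alcohol estimate:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and 95% CI from a logistic-regression coefficient."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# beta ~ 1.65 with SE ~ 0.80 gives OR ~ 5.2 and a wide CI of ~1.1-25,
# similar in shape to the alcohol estimate reported above.
or_, lo, hi = odds_ratio_ci(1.65, 0.80)
print(f"AOR = {or_:.1f}, 95% CI {lo:.1f}-{hi:.1f}")
```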
HIV/AIDS Transmission Determinants among Men and Women
Table 8 below shows results from the logistic regression model of the determinants of HIV transmission during the study period and their variation between men and women among sexually active individuals aged 15-49 in Nagele Borena Hospital. The unadjusted prevalence of HIV/AIDS is highly correlated with the respective transmission determinants; however, there is variation between men and women. As noted in Table 7, adults who drank alcohol frequently were 5.2 times more likely to have risky behavioral practice than those who did not drink. Moreover, HIV/AIDS was more prevalent among men who were frequent drinkers, with a probability of 40 (55.6%), than among women, 32 (44.4%). Among frequent khat chewers, more than half of those at risk of contracting HIV/AIDS were females, 53 (67.1%), compared with males, 26 (32.9%); among frequent alcohol drinkers the male share was 40 (55.6%). The larger share of HIV/AIDS prevalence among adults who were peer-influenced and among those who did not discuss sexual issues belonged to females, with 53 (57%) and 106 (54.4%), respectively. Similarly, 54.6% of those with multiple sexual partners were males.
Discussion
This study identified risk-associated factors such as drinking alcohol, chewing khat, peer pressure and multiple sexual partners that influence the occurrence of unsafe sex, which is a positive predictor of the prevalence of HIV infection in Nagele Borena Hospital. This study indicated that 380 (98.9%) and 381 (99.2%) of the respondents claimed to have heard about sexually transmitted infections (STIs) and HIV/AIDS, respectively. This finding is comparable with the findings of a study conducted in Tanzania, in which the majority of students (98%) had heard about STIs (Kazaura et al., 2009). Similarly, 98.9% of respondents, with 196 (99.5%) of males and 185 (98.9%) of females, showed that they had heard about HIV/AIDS. This finding is almost similar to the EDHS (2011) report, in which 99% of men and 97% of women respondents had heard about HIV/AIDS.
In this study, 74.7% and 96.3% of the adults knew about the transmission of STIs and AIDS, respectively. Another study found that just over half (59.8%) of participants knew about STI transmission and 96.4% knew about the ways HIV/AIDS is transmitted (Andualem et al., 2015). The difference might be due to the difference in study subjects, who in this study were adults, most of whom (83.3%) were above secondary level of education.
In this study there were several misconceptions in the responses on knowledge of HIV/AIDS: 9 (2.4%), 19.96% and 0.5% of responses, respectively, reflected the beliefs that all pregnant women infected with HIV will have babies born with AIDS, that people are likely to get HIV by deep kissing, and confusion about methods of HIV prevention. Similarly, EDHS (2011) indicated that knowledge of prevention is generally stronger than knowledge about sexual transmission, while misconceptions regarding mosquitoes and transmission by supernatural means remain persistent.
In this study, drinking alcohol, chewing khat, having many sexual partners, peer pressure, and the absence of discussion of sexual issues within the family were the predictor variables for HIV infection. Adults who were frequent drinkers were 5.2 times more likely to have risky sexual behavior than those who did not drink (AOR = 5.2, 95% CI 1.1-25). Similarly, chewing khat is another risk behavior: adults who chewed khat frequently were 5.8 times more likely to have risky sexual behavior than those who did not chew khat (AOR = 5.8, 95% CI 1.3-27). Among these adults, the probability of being at risk was higher for females (67.1%) among frequent chewers than for males (32.9%). The reason may relate to the khat-chewing habits of females in the study area: in Negele, women who chewed khat were two times more likely to perform risky sexual behavior and so acquire HIV/AIDS. The number of alcohol drinkers was larger among males than females; thus, among drinkers, a larger probability of risky sexual behavior goes to men.
A comparable study (Wondemagegn et al., 2014), with AOR = 8.7 (95% CI 1.26-60) for alcohol and AOR = 8.16 (95% CI 1.07-62) for khat chewing, showed that adults who drank alcohol were 8.7 times more likely to have risky sexual behavior than those who did not drink, and adults who chewed khat were 8.16 times more likely to have risky sexual behavior than those who did not. Similar findings were observed in studies conducted in southwest Ethiopia, which revealed that youths who drink alcohol were more than two times more likely to engage in risky sexual behavior (Fantahun et al., 2014). In addition, studies have shown that youths who drink alcohol are more likely to engage in risky sexual behaviors such as having multiple sexual partners, performing unprotected sexual intercourse, and having sex with high-risk partners such as commercial sex workers (Miller et al., 2007). Other studies reported that substance abuse causes loss of inhibition and involvement in risky sexual behaviors such as unprotected sex, multiple sexual partners, and prolonged and traumatic sex (Tura et al., 2012).
In this study, a problem related to discussion about the impact of HIV disease was found to be significantly associated with unsafe sex, which leads to HIV infection: 195 (50.7%) of the adults did not discuss sexual issues with their family. This is greater than the 43% found in research by Zemenu et al. (2016) and less than the 52% found by Andualem Henok et al. (2015). The difference may reflect the reasons the adults gave for not discussing sexual issues within their family: cultural unacceptability 6 (9.23%), shame 43 (66%) and lack of knowledge 16 (24.6%). The study shows (AOR = 13, 95% CI 1.6-102) that adults who did not discuss sexual issues related to HIV/AIDS were 13 times more likely to perform unsafe sex than those who did. Of the 195 (50.7%) adults who did not discuss sexual issues, more than half, 106 (54.4%), were females. This implies that females still have a higher probability of engaging in risky behavior because of limited discussion about sexual issues.
This is similar to research conducted by Elias et al. (2014), which showed (AOR = 2.23, 95% CI 1.29-3.96) that participants who did not discuss sexual issues within the family were 2.23 times more likely to engage in unsafe sex leading to HIV infection than those who did.
The other risk factor considered in this study was peer pressure to have sexual intercourse. Adults who were influenced by peers accounted for 93 (24.22%) in this study, which is almost similar to the 144 (17.2%) reported by Almaz Gizaw et al. (2014), with an odds ratio of 1.13 (95% CI 0.72-1.79). This study revealed (AOR = 22.9, 95% CI 3.9-131) that adults who were influenced by their peers were 22.9 times more likely to engage in unprotected sex leading to HIV infection than adults who were not influenced. Moreover, the unadjusted prevalence was higher in females, who comprised more than half of the total adults at risk, 53 (57%), than in males, 40 (43%). This may be related to the decision-making problems observed more in women than in men. Regarding the sexual partner history of the adults in this study, 26.5% had multiple sexual partners, which is 21.1% higher than in research conducted by Andualem Henok et al. (2015). The difference may be related to the age of the participants, since more adults than adolescents can be involved in extramarital and premarital sex. The odds ratio of AOR = 5.2 (95% CI 0.9-29) in this study shows that adults who had multiple sexual partners were 5.2 times more likely to acquire HIV infection than adults who did not. More than half of the adults considered to have multiple sexual partners were males, 56 (54.6%), rather than females, 46 (45.1%). Since having many wives and extramarital sex are not restricted by religion and culture in the study area, males were involved with more partners than females.
A similar study (Wondimagegn et al., 2014) found (AOR = 2.96, 95% CI 0.1-87) that adults who had multiple sexual partners were 2.96 times more likely to acquire HIV infection than adults who did not have multiple sexual partners.
HIV infection is a significant public health problem, particularly in developing countries including Ethiopia. The prevalence of HIV infection in this study was 2.86%, which is higher than the 1.5% reported by EDHS (2011) and lower than the 3.5% reported by HAPCO (2007). The higher prevalence among adults in Negele Borena Hospital may be because many of the adults were exposed to the positive predictors of risk identified in this study, such as drinking alcohol, chewing khat, little discussion about sexual issues, the existence of peer pressure, and having multiple partners, each with its respective odds ratio. Other reasons may be associated with the location of the town, in which extensive contraband trade invites mixed populations, labour migration to urban areas, large dam construction projects near the town, a high number of female sex workers, and a growing service industry. Moreover, EDHS (2011) analysis showed that HIV prevalence is four times greater among populations residing within 5 km of a main asphalt road compared with those further away. The presence of tourists and pensions in the study area might also have encouraged transactional sex and sexual relationships with partners in the city, which might in turn have led to the high rate of HIV infection (Tewabe et al., 2012).
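To put the 2.86% point estimate in context, a quick sketch of an approximate 95% confidence interval follows (Wilson score method, our illustration rather than part of the original analysis):

```python
import math

def wilson_ci(count, n, z=1.96):
    """Wilson score 95% CI for a proportion."""
    p = count / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

# 11 positives among 384 adults: point prevalence 2.86%.
lo, hi = wilson_ci(11, 384)
print(f"prevalence = {11 / 384:.2%}, 95% CI {lo:.2%} to {hi:.2%}")
```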
Similarly, in this study the prevalence in women (3.74%) was higher than in men (2.03%). EDHS (2011) likewise indicated that prevalence was higher among women (1.9%) than men (1%). The difference may be because relatively more of the females, 183 (48.6%), did not know the modes of HIV transmission than males, 193 (51.3%), so they became involved in risky sexual behavior such as unprotected sex, which increases the probability of contracting HIV/AIDS. Similarly, other studies show that the prevalence of HIV infection among female students might be due to greater exposure to factors such as early sexual debut, early marriage, sexual abuse, violence and transactional sex (Malaju et al., 2013). Moreover, females' sexual relationships and networking with men to meet their financial needs might be contributing factors, and biological factors including an immature genital tract, more efficient male-to-female HIV transmission, lack of comprehensive knowledge about HIV/AIDS, and poor access to health services could also explain this difference.
Conclusion
In this cross-sectional survey of adults in Nagele Borena Hospital, awareness of HIV and its modes of transmission was high. However, there were some misconceptions about HIV/AIDS, with some adults believing that HIV can pass through deep kissing and that a pregnant mother always gives birth to an infected child. Similarly, this study showed that there was little communication about sexual health issues; cultural taboo, shame and lack of knowledge affect family communication on sexual matters. It is an encouraging finding that health workers, 107 (28.1%), media (television and radio), 152 (39.9%), parents, 23 (6%), and school clubs, 99 (25.9%), were playing an important role in the dissemination of information about HIV/AIDS. Communication about sexual matters tends to occur on a same-sex basis. Family communication on sexuality should be promoted and peer-to-peer sexuality education programs improved. As the educational level of parents is a critical factor in advancing family communication about sexual health matters, policy makers and program managers should focus on encouraging parents to pursue education. Moreover, health workers should be effective in equipping family members with adequate sexual health information. This would improve knowledge, attitude and stigma parameters, including increased condom use with non-regular partners, and improve accepting attitudes toward HIV-infected people.
This study shows that drugs like alcohol and khat, when taken into the body, enhance the desire for risky behavior and lower people's judging capacity, so that drug abuse can be considered a driving force for both males and females to contract HIV/AIDS. Adults who were frequent drinkers were 5.2 times more likely to have risky sexual behavior than those who did not drink. Similarly, adults who chewed khat frequently were 5.8 times more likely to have risky sexual behavior than those who did not chew khat. Likewise, the existence of peer pressure, the lack of discussion of sexual issues within the family, and multiple sexual partners positively influence people to carry out unsafe sex, which leads to HIV infection.
The prevalence of HIV infection among adults in Nagele Borena and its surroundings during the study period was 2.86%. Females, 7 (3.74%), were more affected by HIV infection than males, 4 (2.03%). Similarly, respondents aged 26-35 years were the most affected, with the largest prevalence of 5 (4.06%); the others were 1 (1.33%), 3 (2.88%) and 2 (2.44%) for the age groups 15-19, 20-25 and 36-49, respectively. New infections among young people suggest that the disease is not yet under control in the country.
Females were more vulnerable to AIDS than males (0.00%), with 4 (57.1%) having CD4 counts of less than 100/mm3 of blood. Another vulnerability was seen in single adults, 2 (33.3%), for both males and females. This indicates that females and single individuals feared going to hospitals, so VCT services have to work hard to save people's lives.
Generally, drinking alcohol, chewing khat, peer pressure, discussion about sexual issues, and multiple partners were the factors identified as predictors of HIV/AIDS infection, and they are determinants of HIV transmission. Finally, the HIV/AIDS transmission determinants vary by sex: the risks associated with frequent khat chewing, peer pressure, and not discussing sexual issues fall mainly to females, while the larger share among frequent drinkers and those with multiple sexual partners goes to males.
Clinical Analysis of Miliary Tuberculosis Patients in Our Tertiary Hospital
Objective: To investigate the clinical characteristics of miliary tuberculosis experienced in our tertiary hospital, which has no tuberculosis isolation room. Materials and Methods: We obtained a definite diagnosis of tuberculosis for 72 patients in our tertiary hospital between January 2010 and September 2016. Six patients (8.3%) were diagnosed with miliary tuberculosis following the isolation of Mycobacterium tuberculosis from several organs, and we analyzed their clinical findings. Results: The average age of the six patients with miliary tuberculosis was 73 years old (3 males and 3 females). All patients had underlying diseases, and immunosuppressive treatment had been given to three of them. Detection was based on clinical symptoms such as high fever in all patients. Concerning the laboratory findings, three patients showed a negative or indeterminate response on interferon-gamma release assays (IGRAs). Bronchoscopic examinations were performed for five of the six patients soon after admission, and a definite diagnosis was obtained. Although treatment according to the American Thoracic Society (ATS) guidelines was given to all patients, three of the six died due to complications or worsening of miliary tuberculosis. Conclusions: Patients with miliary tuberculosis seen in a tertiary hospital were frequently detected among patients receiving long-term immunosuppressive treatment. Because these patients showed a rapidly progressive clinical course, it is important to perform bronchoscopic examination proactively for patients suspected of miliary tuberculosis on radiological findings, to make an appropriate early diagnosis and start treatment.
Introduction
The incidence of tuberculosis (TB) has declined gradually from 35.4 per 100,000 people in 1999 to 16.1 per 100,000 people in 2013 in Japan [1]. On the other hand, miliary TB occurs when Mycobacterium tuberculosis (MTB) is spread hematogenously, producing widespread lesions in a patient with reduced immune function. It is defined by disseminated lesions, 1 to 3 mm in diameter, involving the lungs and extrapulmonary sites. Miliary TB is a virulent form of TB with a high mortality rate, especially in elderly patients with underlying diseases [2]. In our local tertiary hospital (Kurashiki city, Okayama prefecture, Japan), although TB has also declined recently, patients with miliary TB (the severe form of TB) have sometimes been encountered and have shown poor outcomes. Therefore, we investigated the incidence of miliary TB and the clinical characteristics of these patients.
Materials and Methods
Seventy-two patients were diagnosed with TB by the isolation of MTB from clinical specimens at Kawasaki Medical School Hospital between January 2010 and September 2016. Of these patients, 6 (8.3%) were diagnosed with miliary TB.
The diagnosis of miliary TB was defined as the presence of bilateral diffuse millet-sized nodules on chest radiography and/or chest computed tomography (CT), as read by a pulmonary radiologist, together with the detection of MTB in several organs. We retrospectively investigated the patients' backgrounds, laboratory findings, including interferon-γ release assays (IGRAs), radiological findings, the length of time from first visit to diagnosis, and outcomes. This study protocol was approved by the Ethical Committee of Kawasaki Medical School.
Results
There were six patients with miliary TB (8.3%) among all TB patients in our hospital during the study period. The average age of the six miliary TB patients was 73 years old (3 males and 3 females). All patients had underlying diseases, and three of the six received immunosuppressive treatment for them.
All patients had systemic symptoms such as fever and were in poor condition.
Most patients, excluding case 6, who was complicated by acute respiratory distress syndrome (ARDS), were suspected of TB on admission. The length of time from admission to diagnosis was comparatively short (average: 6.7 days), and all patients were diagnosed by the identification of MTB using a polymerase chain reaction (PCR) method from bronchoscopic specimens or urine (5 cases), cerebrospinal fluid (2 cases), and blood (1 case) (Table 1).
The main laboratory findings of the patients with miliary TB were lymphocytopenia, hypoproteinemia, and hypoalbuminemia in most of them. A tuberculin skin test (TST), performed in three patients, failed to show even a weak positive response in one of them. On the other hand, a QuantiFERON TB-Gold In-Tube (QFT-IT) test showed a positive response in three patients but was negative in one and indeterminate in two, and a T-SPOT showed a positive response in three patients but was negative in two and indeterminate in one on TB immunological testing (Table 2). Concerning the radiological findings, five of the six patients showed typical findings, such as millet-sized nodules in the bilateral diffuse lung fields, on chest CT. However, because only case 6 showed a diffuse bilateral hilar-dominant infiltration shadow and ground-glass opacity on chest CT due to ARDS (Figure 1), it was difficult to obtain a correct diagnosis on admission. Regarding complications of miliary TB, ARDS was recognized in one patient and disseminated intravascular coagulation (DIC) in two patients. Although treatment according to the guideline for TB [3] was given to all patients after the definite diagnosis, the prognosis was poor, and three of the six patients died of miliary TB (Table 2).
Discussion
The frequency of miliary TB among all TB cases was reported to be 1%-2% in previous reports from Japan, and it is among the most severe forms of TB [4] [5]. The number of miliary TB cases has been increasing recently, often appearing in female and elderly patients [6]. The percentage was recently higher (8.3%) in our tertiary hospital, which has no TB isolation room. Although malignant disease, human immunodeficiency virus (HIV) infection, diabetes mellitus, renal failure, collagen disease, and immunosuppressive treatment are known risk factors, most of our patients with miliary TB were elderly (average age: 73 years old), and all had severe underlying diseases, including three patients receiving immunosuppressive treatment, in our tertiary hospital, which many patients with various underlying diseases visit.
Among the clinical symptoms of patients with miliary TB, high fever (≥38˚C) was recognized in all patients and was the finding that led to detection, as in previous reports. Because miliary TB is included in the differential diagnosis of febrile illness [7], we have to check carefully for miliary TB.
Concerning the laboratory findings, lymphocytopenia in the peripheral blood, especially of CD4-positive T lymphocytes, and hyponutritional conditions such as hypoproteinemia or hypoalbuminemia were recognized in most patients with miliary TB in this study. These two findings have been reported to be related to the severity of TB or the prognosis [8] [9]. The mechanism of lymphocytopenia in the peripheral blood is thought to be migration of lymphocytes to the foci of inflammation [10], linked to hyponutritional conditions [9] and the influence of treatment for underlying diseases such as autoimmune disorders. Another important finding was that IGRAs (QFT-IT and/or T-SPOT), which are useful as supportive diagnostic markers for TB, showed negative or indeterminate results in three of the six patients. Because these IGRA responses appeared in patients with lymphocytopenia or hypoalbuminemia [11], IGRAs were not thought to be useful for the differential diagnosis of miliary TB.
Regarding the radiological findings, although most patients showed typical millet-sized nodules in the bilateral lung fields, only case 6, complicated by ARDS, showed a diffuse hilar-dominant infiltration shadow and ground-glass opacity on chest CT. However, because we usually pay careful attention to pulmonary TB in the differential diagnosis of pneumonia, all patients received the correct diagnosis of miliary TB within one week. It has been reported that few patients are smear-positive on routine sputum acid-fast bacilli examination [4] [12]. Therefore, we also performed acid-fast bacilli examinations on bronchoscopic specimens, peripheral blood, urine, and cerebrospinal fluid, as in previous reports.
Treatment according to the guideline for TB [3] was started immediately for all six patients after diagnosis, but three of the six (50%) died due to complications such as ARDS and/or DIC and worsening of miliary TB. Although the mortality rate of patients with miliary TB has been reported to be high (≥20%) and the prognosis poor [13] [14], the rate in our study was even higher. Most patients in our study had not had periodic radiological examinations for a long time during the follow-up of their underlying disease in other departments of our hospital or at other local hospitals as outpatients.
Therefore, the delay in detection may be linked to the prognosis despite the short duration from admission to diagnosis. The complications of miliary TB consisted of DIC in two patients and ARDS in one patient. Nagai et al. also reported that 5.4% of miliary TB cases were complicated by ARDS and 12.2% by DIC, and that these complications were poor prognostic factors [4].
Although there are a few reports suggesting that the administration of corticosteroids is effective for the treatment of ARDS complicating miliary TB [15] [16] [17], corticosteroids were not effective for case 6 in this study. Therefore, we have some doubts regarding the effectiveness of corticosteroid administration for miliary TB.
There are some limitations to this study. First, a small number of miliary TB patients were included, and they were geographically restricted to a small area of Japan with an intermediate TB prevalence. We hope that a large-scale nationwide study will be performed in Japan. Second, although HIV testing was not performed for all patients with TB, all patients with miliary TB tested negative for HIV, and HIV infection was not related to the occurrence of miliary TB.
Conclusion
Because the prognosis of patients with miliary TB was poor and the clinical course rapidly progressive in this study, it is thought to be important to perform bronchoscopic examinations proactively, in cooperation with other local hospitals or other departments in the same hospital, for patients suspected of miliary TB on radiological findings, in order to make an appropriate early diagnosis and start treatment.
The Influence of Social Capital on Youths’ Anti-Epidemic Action in the Field of Epidemic-Preventative Social Distancing in China
Social distancing restrictions for COVID-19 epidemic prevention have substantially changed the field of youths' social activities. Many studies have focused on the impact of epidemic-preventative social distancing on individual physical and mental health. However, within the field of epidemic-preventative social distancing, how do youths' interpersonal resources and interactions change their anti-epidemic actions and states? Responding to this question by studying the impact of the elements of social capital on youths' anti-epidemic actions and anti-epidemic states could help identify an effective mechanism for balancing social distancing for effective epidemic prevention and sustainable social-participation development among youth. Bourdieu's field theory holds that the elements of social capital change with a change in the field. Therefore, we introduced the specific elements of social capital as independent variables and used a multinomial logistic model to analyze and predict the levels of youth anti-epidemic action through an empirical investigation of 1043 young people in Guangdong Province, China. The results show that, first, the level of social distancing for epidemic prevention differs by occupation status and income level and correlates with social support. Second, social support and social norms play positive roles in promoting youth participation in anti-epidemic activities when social distance is held constant. Third, social capital has a significant positive effect on youth social satisfaction and core relationships; however, social trust has a significant negative effect on youth physical and mental health. This study emphasizes that social distancing for epidemic prevention is a special social situational state, a field in which social capital drives differential changes in the public-participation actions and habitus of youth.
Introduction
With COVID-19's characteristics of strong infectivity, potential asymptomatic infection, and high variability, staying at home and social distancing have become the main strategies to reduce the risk of human-to-human transmission during the epidemic [1]. However, social distancing blocks or affects interpersonal physical connections, which impacts people's personal lives, physical and mental health, and freedom of movement [2], such as through significant declines in individual health status and subjective feelings [3,4]. When serious, social distancing can also lead to severe social consequences [5,6]. Therefore, society needs a mechanism that can not only effectively prevent and control the spread of COVID-19 but also reduce the negative impact on daily life caused by "increased distance communication" under the normalized circumstances of the epidemic.
During the anti-epidemic period in China, the strict enforcement of maintaining "social distance" brought great challenges to people's everyday living conditions. People had to change their daily habits, especially concerning interpersonal communication, and adapt to new social norms. Youth in China cooperated with the government's anti-epidemic policy with its various unique ways of interpersonal interaction, which has garnered widespread interest. As a generation of active Internet users, the youth can find epidemic information online, including through various social media platforms, to make suggestions for COVID-19 prevention in their communities, enrich community and rural life using network videos and social platforms, and become propagandists and advisers for middle-aged and older adults. By changing their own "habitus," such as by giving up frequent outdoor activities, eliminating group gatherings, and adapting to a new form of Internet learning, young people have altered their time and space needs for epidemic prevention in China. In the epidemic context, this includes cooperating with and responding to the government's anti-epidemic policy, which can be regarded as youth social-participation actions.
However, what are the factors that allow young people to cope with the changes brought about by social distancing? Under the social distancing rules, what changes do their interpersonal resources and interaction patterns have on their anti-epidemic state? Responding to these questions by studying the factors that influence youth's anti-epidemic actions and anti-epidemic status could help identify an effective mechanism for balancing social distancing for effective epidemic prevention and sustainable social participation development among youth. Bourdieu (1986) and Coleman (1988) successively advanced the concept of social capital in the 1980s, while Granovetter (1973), Lin (2001), and Burt (1992), respectively, developed the concept from the perspectives of relationship strength, relational resources, and social network structure [7][8][9][10][11]. Social capital can be defined as the scarce resources actors obtain through social ties to achieve behavioral goals. It focuses on formal and informal relationships among and within families, community organizations, and governments [12].
Social Capital and Social Distance Field of Epidemic Prevention
However, the discussion of social capital cannot be separated from the specific social situation, such as the COVID-19 epidemic, in what is referred to as "field" by Pierre Bourdieu. "Field," "habitus," and "capital" are three important concepts closely related to each other in Bourdieu's work, "An Invitation to Reflexive Sociology." A field is conceptualized as the basic analytical unit of social research, which Bourdieu believes is a relatively independent social space with internal logic and rules. A field can be defined as a network or configuration of objective relationships between various positions. Further, Bourdieu defines habitus as the tendency of an actor to form an action strategy through long-term life experience in a specific field; it is the internalized action consciousness. When actors enter a new field, they are often restricted by the field's rules, and the habitus they form in the old field will prevent them from adapting to their new field. Therefore, when discussing social capital within the context of the epidemic, the individual characteristics, and state changes during the epidemic, we should also include the perspective of the "field." Social distancing to prevent the spread of COVID-19 comprises the field of observation in this study. In this specific field, we discuss the relationship between the changes in social capital elements and anti-epidemic action among youth and the impact on their anti-epidemic state. Recent studies have discussed the impact of social distancing on individual social relationships, social networks, and social support during the COVID-19 epidemic [13]. However, there are few in-depth discussions on the relationship between social distancing and social capital during the epidemic. Therefore, the first hypothesis of this study is as follows: Hypothesis 1 (H1). Social distancing is significantly associated with social capital among youth in the epidemic context.
Social Capital and Youth Anti-Epidemic Action
Bourdieu believes that capital is the key for actors to compete in a field, and the quantity and results of actors' capital have crucial effects on their position and role in that field. Through the concept of "epidemic prevention social capital," Bian and his colleagues (2020) discussed the impact of the changes in cohesive and external social capital on the epidemic prevention effect under social distancing conditions, emphasizing that under effective isolation, the higher a family's epidemic prevention social capital, the better their performance of epidemic prevention social behaviors, and the better the anti-epidemic effect [14]. Anti-epidemic action among youth as social public participation is affected by many factors, such as personal characteristics, educational background, sense of participation, and ability to participate. Social capital is also an important factor that affects social participation. Therefore, to further discuss the relationship between social capital and youth anti-epidemic action, based on the individual characteristics of social capital, the second hypothesis of this study is as follows: Hypothesis 2 (H2). Social capital has a significant positive impact on anti-epidemic action among youth.
Social Capital and the Youth's Anti-Epidemic State
Social capital is multidimensional, with different types, and can produce positive and negative externalities [8,15,16]. The impact of social capital on human quality-of-life indicators has been widely supported, such as in physical and mental health (including selfreported health), subjective well-being, and social attitudes. Woolcock (1998) emphasized that social capital is not unconditionally "good" but may have some adverse effects. For example, social capital contributes to the wider spread of infectious diseases through closer person-to-person contact [17]. Therefore, social capital should be optimized rather than maximized.
"Social norms," "social trust," "social support," and "social connection" are the core elements of the concept of social capital [15,18]. How to maintain social distance from others has become a key issue during the public health emergency of the COVID-19 epidemic. Recent research highlights that individuals experienced more loneliness and a decreased sense of friendship, and that increased social support (such as emotional and instrumental support) emerged during social distancing. Further, high levels of social support are associated with a lower likelihood of anxiety and depression [19]. Social distancing impacts individuals' lives, physical and mental health, and freedom of movement, and requires a certain degree of "personal sacrifice" [2][3][4]. Social norms, however, create expectations of citizenship and social cooperation and promote selflessness and personal sacrifice for the common good of the community [15]. Fukuyama (1997) defines social capital as "the existence of a specific set of informal values or norms shared among a group of members, allowing for cooperation among them" [20]. Social networks can actively motivate individuals to maintain social distancing to conform to social norms when confronted the COVID-19 health threat.
Although social organizations that incorporate some formal or informal relationships can gain access to key social resources (e.g., information and expertise), these social networks are not built spontaneously but constructed through investment strategies oriented toward the institutionalization of group relationships [21]. Therefore, continuous social connection is often needed to obtain social support, while social distancing during the COVID-19 epidemic can damage social connections [22]. However, interpersonal networks within the context of Chinese relational culture have four major characteristics: strong kinship, functional reusability, strong obligation to reward, and a super-stable relationship circle. Thus, the more prominent these characteristics are, and the closer the social ties are, the richer the social support individuals receive [23].
Many studies have focused on the impact of epidemic-preventative social distancing on social connections [22] or the effects on individual physical and mental health [24][25][26]. Advocating for public health policies during the epidemic, such as staying home and social distancing, needs to weigh personal and public interests, which may be affected by individual internal norms and external social influences [27]. Thus, it is necessary to further investigate the impact of social capital on youth's anti-epidemic state during the social distancing phase of the pandemic. However, most empirical studies regard social capital as a holistic and homogeneous concept and mainly focus on its positive effects. To fill this research gap and study the relationship between youth anti-epidemic action and social capital, we need to further explore how the elements of social capital (i.e., social norms, social connection, social trust, and social support) affect the interpersonal networks and life habitus of youth, thus affecting their anti-epidemic state in the field of social distancing. Therefore, we propose the following hypothesis: Hypothesis 3 (H3). Different elements of social capital have a significant positive impact on youth anti-epidemic state.
Procedure
In this study, the research group members invited young people aged 15-35 in Guangdong Province to fill out a 28-question questionnaire, distributed through group chats and the Moments function of WeChat from June 4 to June 11, 2020. We used a simple random sampling method to collect data online via WeChat. Respondents who voluntarily participated in the questionnaire survey gave consent for their data to be used in the research. In total, 1043 online questionnaires were collected, with an average response time of 5 min (308.49 s). Questionnaires with intentionally wrong or random answers were screened out, and 858 valid questionnaires were retained, a valid-response rate of 82.5%.
Participants
The National Bureau of Statistics of China regards "youth" as individuals between 15 and 35 years of age [28]. As a southern province of China, Guangdong Province made outstanding contributions to China's public health during the COVID-19 epidemic in 2020. Therefore, our study selected individuals aged 15−35 years from cities in Guangdong Province, China, as survey participants and used a simple random sampling method to collect data online via WeChat. This study was approved by the Academic Ethics Committee of Guangzhou Xinhua University (NO. 2021K003). Among all participants (female = 625, male = 233), 81.6% were aged 15-29 and 18.4% were aged 30-35; 92.4% self-reported an educational level of undergraduate or above.
Measurement of Social Distance
In this study, we measured social distancing according to the social distancing strategy of the "Six Sets of Guidelines on Disease Prevention: For General Use, Tourism, Households, Public Places, Public Transport and Home Observation" [29], released by the National Health Commission of the People's Republic of China on 25 January 2020. As shown in Table 1, "social distance" was primarily measured with two questions: "Maximum duration of staying home during the epidemic period (23 January to 30 March 2020)" (six items scored from 1-6) and "Frequency of going out to purchase daily necessities during the epidemic period" (five items scored from 1-5). The higher the score, the longer social distance was maintained. In the reliability test, the reliability coefficient of all social distance items was 0.732. The Table 1 response options were as follows. Maximum duration of staying home during the epidemic period: 1 = "<1 week", 2 = "1 week", 3 = "1-2 weeks", 4 = "2 weeks", 5 = "2-3 weeks", 6 = "3 weeks or more" (Cronbach's alpha = 0.732). Frequency of going out to purchase daily necessities during the epidemic period: 1 = "3 times or more per week", 2 = "2-3 times per week", 3 = "2 times per week", 4 = "1-2 times per week", 5 = "≤1 time per week".
Measurement of Youth Anti-Epidemic Action
For this study, we developed a scale of social participation that was divided into "general participation" and "special participation" [30]. Based on the experience summarized in "Fighting COVID-19: China in Action" [31], we designed a multiple-choice questionnaire around the question "What activities did you participate in during the epidemic period (23 January to 30 March 2020)?", which included seven items, with a total score ranging from 0 to 26 (Table 1). We defined each youth's participation degree ("without participation," "general participation," or "special participation") by computing the scores: a higher score represented a higher level of participation, a score of 0 indicated no participation, a score of up to 5 indicated a general level of participation, and a score of more than 5 indicated a special level of participation (Table 1).
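A minimal sketch of this classification rule in Python is shown below; the cut-offs are those stated above, and the rule assumes only the total score, since the individual item weights are not listed in the text.

def participation_level(total_score: int) -> str:
    # Map the 0-26 questionnaire total onto the three participation levels:
    # 0 = without participation, 1-5 = general, >5 = special.
    if total_score == 0:
        return "without participation"
    if total_score <= 5:
        return "general participation"
    return "special participation"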
Measurement of Youth Anti-Epidemic State
This scale, developed by Diener [32] and revised by Bian et al. [14], was used to evaluate the anti-epidemic state among youth across four dimensions: "studying and working state" (2 items), "physical and mental health state" (3 items), "core relationship state" (2 items), and "social satisfaction" (4 items) (Table 1). Studying and working state and physical and mental health state were used as indicators of young people's subjective states. Social satisfaction mainly measures satisfaction with the government, social organizations or groups, enterprises, and institutions regarding anti-epidemic measures, and examines social attitudes among youth. Core relationship state mainly refers to whether one's core relationships with family members and good friends grew closer during the epidemic. Each item is scored on a five-point scale, with higher scores indicating a better state; for example, the higher the score, the more harmonious the relationship. The reliability coefficient of all items was 0.773, and in the confirmatory factor analysis, the reliability coefficients of the four dimensions were between 0.8 and 0.9, indicating that the measurement scale has good reliability.
Measurement of Social Capital
The Social Capital Assessment Tool (SCAT) is the earliest systematic tool for measuring social capital; some scholars have improved upon it, and the revised instrument is called A-SCAT. This study refers to Putnam's tool for measuring macro social capital, which measures social capital along the dimensions of social networks, norms, and trust [15]. Based on empirical research on the localization of social capital in China [23,33], this study's questionnaire measured social capital along the dimensions of social support (2 items), social norms (3 items), social connection (3 items), and social trust (2 items). Combined with the characteristics of the social distancing policy in China, a special scale for measuring social capital was designed for this study (Table 1). To examine the support young people received during the epidemic period, and following the definition of social support as the size, density, and reciprocity of one's social network and the availability of certain types of aid, including practical and emotional support [34,35], this study mainly measured social support as "the use of informal networks (social support networks)" and "resources flowing in social networks" [33]. Since the outbreak of the epidemic at the beginning of 2020, "wearing a mask," "washing your hands," and "not spreading misinformation" have become the behavioral norms by which people in China cooperate with the government's epidemic prevention measures and social distancing; these factors were used to measure social norms. Social connection was measured by online interactions between young people and their families and friends, and social trust was measured as the level of trust young people had in different sources of epidemic information (Table 1). Each item is scored on a five-point scale, from 1 = very unlike me to 5 = very like me; higher scores indicate more of the social capital element. The reliability coefficients for all social capital items were greater than 0.6 (Cronbach's alpha = 0.806), indicating that the scale's internal reliability was relatively high.
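The internal-reliability coefficients quoted here follow the standard Cronbach's alpha formula; a small sketch is given below (the demo matrix is illustrative only, since the survey data are not published).

import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total scores),
    # for an (n_respondents, n_items) matrix of item scores.
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative 5-point responses for a 3-item subscale (placeholder data):
demo = np.array([[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2], [4, 4, 3]])
print(round(cronbach_alpha(demo), 3))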
Measurement of Occupation Status and Monthly Income
For occupation status, this study referred to the occupation indicator of individual socioeconomic characteristics in Bian's study [23], which was used to develop Lin Nan's social resources theory [10]. We made some modifications to Bian's scale according to the latest national occupational classification by the National Bureau of Statistics of China [28] (Table 1), yielding seven groups valued from 1 to 7; the higher the value, the higher the occupation status. In addition, based on the monthly income brackets from the National Bureau of Statistics of China in 2019 [28], income was divided into five groups; the higher the value, the higher the income level.
Statistical Analysis
First, we used confirmatory factor analysis (CFA) to test the theoretical construct dimensions of social capital in the field of epidemic-preventative social distancing. Second, this study explored the association between social distance and several elements of social capital among youth with various individual characteristics (e.g., gender, education, occupation status, and income) by using correlation analysis to test Hypothesis 1. Third, we used a multinomial logistic regression model and the stepwise method to test Hypothesis 2, exploring the impact of social capital on youths' anti-epidemic actions in the field of epidemic-preventative social distancing. Finally, the study tested Hypothesis 3 through multiple linear regression, investigating the impact of social capital on youths' anti-epidemic states. In this study, SPSS 21.0 (IBM, New York, NY, USA) was used for the descriptive statistical analysis, analysis of variance, correlation analysis, and regression analysis. The CFA was conducted using SPSS AMOS 22.0 (IBM, New York, NY, USA).
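For readers who want to reproduce the multinomial step outside SPSS, a minimal Python sketch with statsmodels is given below; the file name and column names are hypothetical, since the paper does not publish its variable names.

import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("survey.csv")  # hypothetical data file
X = sm.add_constant(df[["social_distance", "social_support", "social_norms",
                        "social_connection", "social_trust",
                        "education", "occupation"]])
# MNLogit uses the lowest code as the base outcome, so recode the paper's
# reference category "general participation" to 0:
# 0 = general, 1 = without participation, 2 = special participation.
y = df["participation"].map({2: 0, 1: 1, 3: 2})
result = sm.MNLogit(y, X).fit()
print(result.summary())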
Factor Analysis of Social Capital
To test the fit between the overall measurement model of social capital and the sample data, AMOS was used for confirmatory factor analysis (CFA). The parameters of the four-factor model showed that the model's goodness of fit was as follows: chi-squared = 208.591, p = 0.000 < 0.05; CFI = 0.995 > 0.9; TLI = 0.986 > 0.9; RMSEA = 0.030 < 0.05 [36,37].
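For reference, the RMSEA reported above is conventionally computed from the model chi-squared statistic, its degrees of freedom df, and the sample size N (this is the standard formula, not one stated in the paper):

\mathrm{RMSEA} = \sqrt{\frac{\max(\chi^{2} - df,\ 0)}{df\,(N - 1)}}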
As shown in Figure 1, the four factors of social capital were independent of each other, among which social norms (0.507) and social connection (0.920) had the highest degree of explanation, followed by social trust (0.380), and then social support (0.131) with the weakest degree of explanation. For social support, the factor loadings of the two variables were both higher than 0.8 (0.825, 0.815), which indicated that the information or support youth obtained during the epidemic period mainly came from core relationships, such as family members and friends. Similarly, the factor loading values of the explanatory variables of social norms (0.912, 0.872, 0.866), social connection (0.844, 0.824, 0.753), and social trust (0.909, 0.980, 0.767) were high, which indicated that these variables could strongly explain the relevant dimensions of social capital. The results of factor analysis indicated that the specific connotation of the concept of social capital among youth during the epidemic period was mainly embodied in social norms, social connection, and social trust. Thus, youth were better able to achieve behavioral goals during the epidemic period when they better abided by the social norms of "wearing a mask, washing hands frequently, not spreading rumors," interacted more with family members and good friends online, and trusted the information obtained from their core relationships.
Descriptive Statistics for Youth Anti-Epidemic Action, Social Distance, and Social Capital during the Epidemic
As shown in Table 2, anti-epidemic participation among youth was mainly at the level of general participation (N = 572, 67%), which was in response to the government's call to engage in staying at home and cooperating with anti-epidemic action. Social distance was mainly measured by the maximum duration of staying at home and frequency of going out, with an average value of 7.377, which was a high level, indicating that young people could maintain a relatively long period of social distancing during the epidemic. Regarding social capital, the mean values for social norms (13.89), social connection (16.14), and social trust (10.78) were high, and there were no significant differences among individuals, indicating that youth could observe the social norms of epidemic prevention, such as wearing a mask, washing their hands frequently, and not spreading rumors. Interactions with family at home and online communication with core relationships were more centralized and stable, and youth reported a higher level of trust as the epidemic information was shared between family members and friends, which was consistent with the strong relationship culture in China. However, the level of social support was relatively low (mean = 3.73), and the individual difference was large (SD = 1.892), which will be discussed in more detail below. Table 2 indicates that the overall physical and mental health state of the sample was good during the epidemic. Similarly, youth had higher social satisfaction with the arrangements for implementing epidemic prevention measures in communities, schools, or enterprises (mean = 16.12, SD = 3.111). The state of core relationships with family members and good friends was also good (mean = 7.93, SD = 1.731). Notably, the studying and working state of youth was at a low level (mean = 5.56), and individual differences were large (SD = 2.13). This indicated that young people's studying and working state was greatly affected during the epidemic period and that the degree of variation was high. The sample's income level (mean = 3.47, SD = 1.651) and occupation status (mean = 3.11, SD = 1.85) were at an average level, and because income level and occupation status were sequential variables and the standard deviation was greater than 1, individual differences in the sample were relatively large.
The Association between Social Capital and Social Distancing in the Epidemic Field
We analyzed the association between social capital and social distance among youth with various individual features by using correlation analysis ( Table 3). As shown in Table 3, the results of the correlation analysis partially supported Hypothesis 1: Social distance was positively correlated with social support (β = 0.068, p = 0.045) and was not significantly correlated with other elements of social capital. Social distance showed a significant correlation with gender, age, occupation status, and income level. Social support (β = 0.113, p = 0.001) and social trust (β = −0.078, p = 0.045) under social capital showed significant correlation with educational level. Note: * p < 0.05, ** p < 0.01, *** p < 0.001. Table 4 shows the regression analysis results for the effects of social capital on youth anti-epidemic actions and anti-epidemic state. The dependent variable in Model 1 was youth anti-epidemic action. In Model 1, youth anti-epidemic action was an ordinal variable (valued from 0 to 26), representing different degrees of youth participation in antiepidemic actions. To explore the influencing factors of differences in social participation action among the youth, we first converted the values of "without participation" from 0 points to 1 (1 = "without participation"), "general participation" from 2-5 points to 2 (2 = "general participation"), "special participation" from more than 5 points to 3 (3 = "special participation"). We then used a multinomial logistic regression model and the stepwise method, using the element variables of social distance and social capital as the mandatory input items and the individual characteristics variables as the stepwise items to construct the model equation. According to the descriptive statistics, participation action among youth during the epidemic was mainly at the level of general participation; thus, this was set as the reference category for the dependent variable. Based on the model fit, the p-value of the likelihood ratio test was less than 0.05, and the significance test of the regression equation was passed, which shows that the model was reasonable. Table 4 shows the results of this analysis. The dependent variable of Models 2 to 9 was the effects of youth anti-epidemic action, that is, youth anti-epidemic state, which was measured from four dimensions: studying and working state, physical and mental health state, social satisfaction, and core relationship state. A group of nested models was used for each dependent variable to test whether social capital had an independent effect on youth anti-epidemic state in the field of social distance.
Impact of Social Capital on Youth Anti-Epidemic Action in the Field of Epidemic Social Distance
The variables of age and monthly income were removed in the stepwise procedure, indicating that they were not suitable for the equation model. The results of the multinomial logistic regression analysis for Model 1 showed that the likelihood-ratio test indices for social distance (LRC = 6.165, p = 0.046), social support (LRC = 35.539, p = 0.000), educational level (LRC = 11.452, p = 0.003), and occupation (LRC = 7.724, p = 0.021) were significant, indicating that these four variables had an effect on youth anti-epidemic action when added to the model. The regression equations constructed using the stepwise method are as follows:

Log[P(without participation)/P(general participation)] = 1.933 − 0.094 × social distance − 0.244 × social support − 0.131 × social norms + 0.085 × social connection + 0.068 × social trust − 0.752 × education − 0.036 × occupation (1)

Log[P(special participation)/P(general participation)] = −6.090 + 0.067 × social distance + 0.198 × social support + 0.026 × social norms + 0.052 × social connection − 0.003 × social trust + 0.342 × education + 0.115 × occupation (2)

Regression Equation (1) shows that, compared with the general participation level, the lower the social support, social norms, educational level, and occupation status, the greater the chance that young people would not participate in anti-epidemic action. Social connection and social trust increased the log-odds of nonparticipation by 0.085 and 0.068 units, respectively; however, the differences were not statistically significant. When social distance and social capital were held constant, the higher the educational level and occupation status, the higher the probability that youth would participate at least at the general level (e.g., home isolation); of these, educational level was statistically significant.
From Equation (2), when social distance, educational level, and occupation status were held constant, obtaining social support, abiding by social norms, and maintaining social connections increased the log-odds of moving from general participation to special participation by 0.198, 0.026, and 0.052 units, respectively; of these, social support was statistically significant. Social trust reduced the corresponding log-odds of special participation by 0.003 units, which was not statistically significant. Similarly, the higher the educational level and occupational status, the higher the probability of special participation; of these, occupational status was statistically significant.
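As a worked illustration of how Equations (1) and (2) translate into outcome probabilities, the sketch below applies the standard multinomial-logit inverse link with general participation as the reference category; the two log-odds inputs would come from evaluating the fitted equations for a given respondent, and the example values are illustrative only.

import math

def category_probs(logodds_without: float, logodds_special: float) -> dict:
    # Multinomial logit with "general participation" as the reference:
    # P(ref) = 1/D and P(k) = exp(eta_k)/D, where D = 1 + sum of exp(eta_k).
    denom = 1.0 + math.exp(logodds_without) + math.exp(logodds_special)
    return {
        "without participation": math.exp(logodds_without) / denom,
        "general participation": 1.0 / denom,
        "special participation": math.exp(logodds_special) / denom,
    }

print(category_probs(-1.2, -0.5))  # illustrative log-odds values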
Model 1 partly supported Hypothesis 2, and the results showed that maintaining social distance during the epidemic increased the probability of general participation in anti-epidemic actions (e.g., staying at home) and special social participation (e.g., volunteer activities). When social distance was constant, the effect of social capital on youth anti-epidemic participation action varied according to different elements. Social support and social norms had significant positive effects on youth action from nonparticipation to general participation, especially the effect of social support on the probability of special participation being more significant. Compared to general participation, social connection increased the probability of youth not participating in anti-epidemic action and participating in special anti-epidemic action; however, this was not significant. In addition, regarding individual characteristics, educational level had a significant positive effect on promoting youth from nonparticipation to general participation, while occupational status, as an individual characteristic of social stratification with a stable positive correlation with social capital, had a significant positive effect on increasing the probability of special participation among youth. Thus, under the social distancing conditions of the epidemic situation, different elements of social capital had differing effects on youth anti-epidemic action.
Impact of Social Capital on Youth Anti-Epidemic State in the Field of Epidemic Social Distance
To identify a mechanism that will not only ensure the effects of epidemic prevention but also allow youth to maintain a healthy social life, we used Models 2 to 9 to explore the role that social capital played in the normalized epidemic situation by analyzing the influence of social distance, social capital, and anti-epidemic action on youth anti-epidemic state. As shown in Table 4, the impact of social distancing on the life habitus of young people during the epidemic was not significant. The R² values of Models 3, 5, 7, and 9 gradually increased and were significant after adding the elements of social capital, and the explanatory power of the models was gradually enhanced, indicating that social capital has a significant impact on youth anti-epidemic state in the social distancing field. Different social capital elements have differing effects on the various anti-epidemic states, which supported Hypothesis 3.
In addition, monthly income and educational level had an independent impact on the effect of youth anti-epidemic action. The higher the monthly family income, the better the working and learning state, as well as the physical and mental health state, among youth. Notably, the higher young people's educational level was, the lower their social satisfaction with "prevention and control measures in the community," or "arrangements for stopping classes," and "arrangements for resuming work and classes."
Discussion
Bourdieu believes that analysis of an actor and their behavior not only needs to start from the macrolevel social environment but also needs to understand the actor's field and its capital and habitus. Analyzing the habitus of the actor in the field can clearly show how various forms of capital contend with each other and can also explore the reasons behind the actors' behavior. Therefore, this study aimed to empirically explore the social distancing field of anti-epidemic action, and how different social capital factors would affect youth anti-epidemic behavior and habitus. According to the analysis of survey data, young people in the main cities of Guangdong could maintain social distancing for a long time during the epidemic period, and social distancing among youth showed significant social class differences in occupational status and income. Regarding social distancing for epidemic prevention, the main performance of young people's anti-epidemic action was the general participation in staying at home and cooperating with anti-epidemic policies.
Youth demonstrated an overall good anti-epidemic state; however, there were some individual differences. By introducing the specific elements of social capital, we found that social support and social norms had significant positive effects on young people's participation in the anti-epidemic campaign, while their habitus of life was also influenced by different elements of social capital during the epidemic period.
Analysis of Hypotheses 1 and 2
Bourdieu posits that the amount and structure of the capital that actors own can determine their position in the field, and the rank of that capital will vary with changes in the field [7]. We found that social distancing under epidemic prevention and control among youth showed significant differences according to occupation, education and income level and was significantly positively correlated with social support. However, the direct effect of social distance on youth action and anti-epidemic state was not significant. Many studies have shown a stable positive correlation between people's social capital and their education, occupation, and income [23]. Therefore, the social class differences in social distancing reflect not only the differences in youth social capital but also the prominent role of different elements of social capital in this field.
Moreover, our analysis found that in the case of epidemic social distancing, in addition to occupation status, social support and social norms play a positive role in promoting youth participation in anti-epidemic activities. When young people can obtain information about the epidemic situation from various channels of formal networks, such as the government and community, they can cooperate with home epidemic prevention, social distancing, and other anti-epidemic actions. Additionally, as the main channel of "human resources," strong ties in informal networks have personal inclusiveness. Specifically, they show understanding and tolerance of social distancing and less social interaction during the epidemic period but still provide relevant human resources (e.g., material support for epidemic prevention and spiritual support).
Hypothesis 3 Was Verified by the Regression Analysis of Youths' Anti-Epidemic State
The regression analysis of youths' anti-epidemic state in this study is in effect an analysis of the actors' habitus in the social distancing field; it can clearly show how the various actors' social capital contends through exploring the effects of social capital and anti-epidemic action on the anti-epidemic state, and it can comprehensively explain the mechanism of youth anti-epidemic action. Based on the analysis results, the discussion points are as follows.
First, social capital had a significant positive effect on social satisfaction and core relationship state among youth. The results showed that the social norms of "wearing a mask, washing hands frequently, and not spreading rumors," social support from family and friends, and social trust in family interactions, online communication, and shared epidemic information could enhance social satisfaction and the harmony of core relationships among young people. Owing to the popularization of information network technology in China and the timely disclosure of epidemic information by the government, information resources were disseminated through family interactions and social links in online exchanges within informal networks (core relationships). This improved young people's social trust in epidemic information and motivated them to adhere more closely to social norms, thereby enhancing the effect of epidemic prevention. However, the long duration of staying home for epidemic prevention led to young people spending more time with their families, although they could not work or study normally, and increased their online contact with relatives and friends. In this case, social distancing generally did not affect social contact, which not only promoted young people's cooperation with the government's epidemic prevention policies but also strengthened their core relationships. Therefore, the epidemic prevention effect improved. This is in line with the positive role of social capital in general: the intake and mobilization of social resources to enhance effective behavior and obtain better social support. This shows that, during the epidemic period, strong ties may have been more inclusive; this helps maintain a harmonious relationship without social activities such as meetings and gatherings, besides providing social support for young people to fight the epidemic. It can be seen that China's strong relationship culture played an important role during the COVID-19 epidemic.
Second, social trust in social capital had a significant negative impact on the physical and mental health of young people. In this study, social trust was mainly measured by the trust of youth in epidemic information conveyed between family and friends, and epidemic information may cause people to experience a certain degree of panic, anxiety, and other negative emotions, which is in line with previous studies [38,39]. Therefore, social trust can also have a certain negative impact on physical and mental health among youth.
Third, youth anti-epidemic action had a significant negative impact on physical and mental health. Young people's participation in anti-epidemic activities was generally manifested as staying home for epidemic prevention, cooperating with epidemic prevention and control measures, and other general participation activities. Although young people spent more time interacting with their families, they also reduced the time spent on normal social interaction (e.g., gatherings), which inevitably had a negative impact on their physical and mental health, such as anti-epidemic fatigue, declining anti-epidemic action, and depressive symptoms. The effects of anti-epidemic action on the studying and working state and social satisfaction among youth were not significant. Due to online teaching and remote work promoted through the government's advocacy of "suspended class, ongoing learning" and the "orderly resumption of work and production," young people could not only stay at home to prevent the spread of the epidemic but also achieve a better balance with work and study.
Conclusions
The emergence of social distance in epidemic prevention can be seen as a change in the original activity field of youth, which changes not only the geographical field but also social network links and relationships, action rules, and resource support. This study emphasized that the social class differences in social distancing reflect not only the differences in youth social capital but also the prominent role of different elements of social capital in this field.
The social distancing restrictions of home epidemic prevention have substantially changed the original field of youth social activities and posed a major challenge for youth social interaction. However, our research showed that youth cooperation with and participation in home epidemic prevention was at very high levels. The long-term social support model dominated by strong ties (e.g., material and spiritual support for epidemic prevention) played an important role in youths' anti-epidemic action and social satisfaction. Anti-epidemic actions and the evenly distributed access to epidemic information had different degrees of negative effects on youths' physical and mental health. However, strict and effective epidemic prevention guidelines and the reconstruction of social order by social norms and Internet-based social connections can compensate for the discomfort brought by social distancing and anti-epidemic actions.
Therefore, social distancing for epidemic prevention is a special social situational state, a field in which social capital drives differential changes in the public-participation actions and habitus of youth. This study helps further explain the behavioral choices of youth in combating the epidemic and even in participating in public policy.
Limitations and Future Direction
This study discussed how social capital in the specific field of "social distance" affected youths' anti-epidemic action and state against the background of China's anti-epidemic policy. Some limitations of this study should be taken into account when interpreting our findings. First, owing to the strict anti-epidemic policy requirements in China at the time, it was difficult to control the structure of the sample collected through online questionnaires via social media. Second, because of the differential implementation of epidemic prevention and control measures among cities in Guangdong Province, many other factors may have operated at the individual and regional levels. Future research could track what changes take place in these social capital elements under the new normal of epidemic prevention and control, and what impact they have on youths' value judgments, social participation, action strategies, and life habitus.
Assessment of displacement ventilation systems in airborne infection risk in hospital rooms
Efficient ventilation in hospital airborne isolation rooms is important for decreasing the risk of cross infection and reducing energy consumption. This paper analyses the suitability of using a displacement ventilation strategy in airborne infection isolation rooms, focusing on health care worker exposure to pathogens exhaled by infected patients. The analysis is mainly based on numerical simulation results obtained with the support of a 3-D transient numerical model validated using experimental data. A thermal breathing manikin lying on a bed represents the source patient, and another thermal breathing manikin represents the exposed individual standing beside the bed and facing the patient. A radiant wall represents an external wall exposed to solar radiation. The air change efficiency and contaminant removal effectiveness indices, together with the health care worker's inhalation of contaminants exhaled by the patient, are considered in a typical airborne infection isolation room set up with three air renewal rates (6 h⁻¹, 9 h⁻¹ and 12 h⁻¹), two exhaust opening positions, and two health care worker positions. Results show that the radiant wall significantly affects the air flow pattern and contaminant dispersion. The lockup phenomenon occurs at the inhalation height of the standing manikin. Displacement ventilation renews the air of the airborne isolation room and eliminates the exhaled pollutants efficiently, but is at a disadvantage compared with other ventilation strategies when the risk of exposure is taken into account.
Introduction
Hospital facilities are places with a high risk of cross infection between their occupants. Studies in European hospitals [1] indicate that nosocomial infections contribute significantly to morbidity and mortality rates and that many of these infections are transmitted by airborne pathogens [2]. Everyday pulmonary activities, like breathing [3], coughing [4,5], sneezing [6], talking [3], are sources of bio-aerosols [7] that may be laden with the pathogens responsible for infectious disease transmission. Once the bio-aerosols leave the infected person their fate depends on multiple and complex factors [7][8][9][10]. One of the most important factors is undoubtedly the airflow pattern, both in the room as a whole as well as in the microenvironment around the source patient and the vulnerable individual.
Airborne infection isolation room
If the appropriate measures are not taken, the bio-aerosols emitted by patients hospitalized with an airborne disease may be dispersed uncontrollably around the airborne infection isolation room (AIIR) or the rest of the hospital [11]. Different methods and technologies are available to provide adequate protection to people who pass through a hospital [12]. One of the recommended measures is to maintain a negative pressure with respect to the surrounding area so that air flows into the room and not in the opposite direction when doors are open. Unfortunately, the negative pressure briefly disappears during door operation and air leakage is virtually inevitable [13]. Many guidelines and regulations [14][15][16][17][18] related to airborne isolation rooms (AIIR) advise or require that access to the room should be through an anteroom in order to minimize the escape of contaminated air [19,20]. Yet neither the negative pressure nor the anteroom eliminates the risk to the person entering the AIIR. Only personal self-protection measures and a suitable ventilation strategy reduce the possibility of contagion [21].
With regard to ventilation, one common recommendation is to use high renewal rates to dilute and remove pathogens [22]. However, this does not prevent the appearance of stagnant zones and short-circuiting, resulting in "clean" and "polluted" areas of exhaled pathogens with the subsequent risk of high cross-infection rates. Several studies indicate that the design of a ventilation system and the resulting airflow patterns play a more important role than just air renewal rates alone [23,24]. Airflow patterns generated by ventilation systems can be controlled, and recent research has focused on providing good air distribution rather than on maintaining high rates of air renewal as a strategy to reduce the risk of airborne contagion [25][26][27][28][29][30].
Displacement ventilation
Various ventilation strategies such as mixing ventilation (MV) and displacement ventilation (DV) offer different possibilities to protect people from airborne cross infection [10,31]. MV is the most widely applied strategy in hospital patient rooms. However, in recent years DV has emerged as an alternative. Some studies have shown that DV is more energy efficient [32]. Standardisation associations have developed DV guidelines and recommendations for designers.
DV systems were initially developed to remove thermal loads in industrial warehouses due to their ability to concentrate heat and pollutants above the occupied zone. DV systems are characterized by thermal and mass stratification such that they cannot be modelled with the fully mixed room air approach [33]. In DV, cool air is supplied into the lower part of the room using low impulse diffusers. This slow moving fresh air fills the room from below, is heated and rises to the ceiling, where the exhaust is located. There must be heat sources for DV to work. As breathing is also a heat and pollutant source, contaminants might be transported directly to the upper part of the room. DV offers the possibility of working with two zones, a low zone with clean air, and an upper zone with pollutants. Some authors report that it is possible to design DV hospital patient rooms that have low human exposure to bio-aerosols containing pathogens [32,34], although in certain situations high exposure may also exist in rooms with DV [35][36][37].
Objective and methodology
The aim of this work is to evaluate the suitability of applying the DV strategy in AIIRs. The analysis is mainly based on numerical simulation results obtained with the support of computational fluid dynamics (CFD). The interaction between the different air flows (breathing flows [38], convective flows around human bodies [39], thermal plumes above heat sources, and rising boundary layer flow at the warm wall, together with large-scale air movements due to room air flow instabilities) is so complex that it is difficult to approach the problem directly. Initially, the dispersion of contaminants exhaled by a single person standing in an indoor environment was studied [40]. Later, a second person facing the first was added to analyse the interaction between the respiration flows of both people [41]. These two previous studies have enabled an adequate procedure to be established for analysing the role of ventilation in the risk of cross-infection between patient and susceptible health care worker caused by the airborne pathogens exhaled during breathing in an AIIR with DV. Using the validated model, twelve different numerical tests are carried out to analyse how air renewal rates, the position of the health care worker, and the position of air exhaust openings affect the risk of cross infection.
Test room and experimental setup
The experimental study of a patient (P) lying on a hospital bed and a health care worker (HCW) standing close to the bed in a typical AIIR (Fig 1) [14,15,42,43] was carried out in a test room at Cordoba University measuring 4.5 m (length), 3.3 m (width) and 2.8 m (height). The two thermal breathing manikins have the same geometry. The total sensible heat emitted by each manikin corresponds to a metabolic rate of 1 met for the HCW and 0.7 met for the patient, 80 W and 70 W, respectively. There is an external heat gain of 500 W in the 4.5 m wall opposite the HCW, which represents an external wall exposed to solar radiation. The remaining walls as well as the floor and ceiling are adiabatic, as the chamber is inside a lab kept at the same temperature.
A displacement flow diffuser (QLV-180-200-800, Trox, Germany) was used as the supply air unit for the hospital room, and two exhaust openings were located on the opposite wall, just below the ceiling. The ventilation system was set at three different air change rates of 12, 9 and 6 ACH, supplying air at 21.8 °C, 20.6 °C and 18.2 °C, respectively, to maintain the same mean room temperature. Part of the effective area of the displacement diffuser was covered during the 9 ACH and 6 ACH tests in order to maintain the same supply velocity. Information about the breathing manikins, measuring instruments and other details of these experiments can be found in [44].
Indices for quantifying the ventilation and infection risk
The ventilation system of a room can pursue different aims: thermal comfort, air renewal, elimination of gaseous or suspended contaminants, avoiding risk of infection, etc. Depending on the activities carried out in the room, one aim or another will prevail. Specific indices exist to quantify the extent to which each aim is achieved.
The most basic index is air changes per hour (ACH). To calculate ACH, only the air volume of the room and the air flow rate need to be known. This index is commonly used in guidelines and recommendations.
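As a worked illustration of this index (using the 4.5 m × 3.3 m × 2.8 m test room described above; a sketch, not part of the original study):

```python
# ACH = Q / V: supply flow rate per room volume, per hour.
V = 4.5 * 3.3 * 2.8              # room volume [m^3] = 41.58 m^3
for ach in (6, 9, 12):
    Q = ach * V / 3600.0         # required supply flow rate [m^3/s]
    print(f"{ach} ACH -> Q = {Q:.4f} m^3/s")
```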
The air change efficiency index ($\varepsilon_a$) is defined as the ratio between the minimum and the actual mean replacement times and can be calculated from the expression:

$$\varepsilon_a = \frac{\tau_n}{2\,\bar{\tau}}$$

where $\tau_n = V/Q$, $V$ is the room volume and $Q$ is the flow rate of fresh air, i.e., $\tau_n$ is the inverse of the number of air changes per second, and $\bar{\tau}$ is the average age of the air in the room. Air change efficiency depends only on the overall air flow pattern in the room, and takes values between 0 and 1.
If, in addition to knowing the ACH and the air flow pattern, the characteristics of the contaminant and the point of emission are also known, then the contaminant removal effectiveness index ($\varepsilon_c$) can be used. This index can take any positive value. Assuming that the air supplied to the room is contaminant free and that the flow is steady, $\varepsilon_c$ is calculated by dividing the concentration of pollutant in the exhaust air $c_e$ by the average concentration in the room $\bar{c}$:

$$\varepsilon_c = \frac{c_e}{\bar{c}}$$

Finally, if the area to be protected is known (the surface of a printed circuit during manufacture, the instrument table during a surgical operation, or the lungs of a person sharing a room with an infected person), the intake fraction (IF) index may be used. This index is the flow rate of contaminant that crosses the surface to be protected divided by the flow rate of contaminant that enters or is generated inside the room (Bennett et al., 2002). In order to assess the risk of cross-infection, the intake fraction is defined as the proportion of the cumulative mass of contaminant inhaled by the HCW to the mass of contaminant emitted in the patient's exhalation during the same period of time:

$$IF = \frac{\int Q_{HCW}\, Y_{HCW}\, \mathrm{d}t}{\int Q_{P}\, Y_{P}\, \mathrm{d}t}$$

where $Q_{HCW}$ and $Q_P$ are the instantaneous breathing flow rates of the HCW and the patient, respectively, $Y_{HCW}$ is the instantaneous mass fraction of N₂O in the HCW's inhaled air, and $Y_P$ is the instantaneous mass fraction of N₂O in the patient's exhaled air; the integrals run over the HCW's inhalations and the patient's exhalations, respectively.
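A discrete version of this integral is straightforward to sketch (assuming signed breathing flow rates sampled on a common time grid, positive during exhalation; illustrative post-processing, not the authors' code):

```python
import numpy as np

def intake_fraction(t, q_hcw, y_hcw, q_p, y_p):
    """IF = (N2O mass inhaled by the HCW) / (N2O mass exhaled by the
    patient) over the same period. Flow rates are signed (q > 0 during
    exhalation), so masks isolate the relevant half-cycles."""
    inhaled = np.trapz(np.where(q_hcw < 0.0, -q_hcw * y_hcw, 0.0), t)
    exhaled = np.trapz(np.where(q_p > 0.0, q_p * y_p, 0.0), t)
    return inhaled / exhaled
```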
Governing equations
Airborne cross-infection between occupants is an unsteady, non-isothermal, three-dimensional problem involving two species: air and contaminant. As the modelling tool, CFD has been applied to simulate the unsteady airflow using the URANS method together with the RNG k-ε turbulence model equations, the mean age of air equation and the N₂O mass fraction equation, and including the effect of thermal radiation, using the commercial software Ansys Fluent.
The local mean age of air $\tau$ in the whole fluid field is calculated by solving the following conservation equation:

$$\frac{\partial \tau}{\partial t} + \nabla \cdot \left(\vec{v}\,\tau\right) = \nabla \cdot \left(D_a \nabla \tau\right) + 1$$

where $\vec{v}$ is the air velocity and $D_a$ is the mass diffusion coefficient [45]. A subroutine solving Eq (4) numerically was written and built into the CFD program. Once the average age of the air in the whole room is calculated, it is possible to evaluate $\varepsilon_a$ directly according to Eq (1). $\bar{\tau}$ is the average of $\tau$ over the whole room, and $\tau_n$ can be calculated as the inverse of the number of air changes per second or as the average of $\tau$ at the air extractions.
The two values coincide. The CFD program models the mixing and transport of two chemical species, air and N₂O, by solving equations describing convection and diffusion for each component species without reactions. $\varepsilon_c$ and IF can be calculated directly from their definitions. Since the model is transient, the time evolution of $\varepsilon_c$ and IF can be calculated.
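As a sketch of how these indices can be evaluated once the fields are available (illustrative post-processing in Python, not the authors' Fluent subroutine; array names are assumptions):

```python
import numpy as np

def air_change_efficiency(tau, cell_volumes, Q):
    """eps_a = tau_n / (2 * tau_bar), Eq (1): tau_n = V/Q is the nominal
    time constant and tau_bar the volume-weighted room average of the
    local mean age of air obtained from Eq (4)."""
    V = cell_volumes.sum()                      # room volume [m^3]
    tau_n = V / Q                               # nominal time constant [s]
    tau_bar = np.average(tau, weights=cell_volumes)
    return tau_n / (2.0 * tau_bar)

def contaminant_removal_effectiveness(c_exhaust, c, cell_volumes):
    """eps_c = c_e / c_bar, Eq (2), with c_bar the volume-weighted mean."""
    return c_exhaust / np.average(c, weights=cell_volumes)
```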
Radiation is introduced into the CFD model using the surface-to-surface radiation model [46]. The importance of thermal radiation in airflow with DV was examined experimentally in [47]. The RNG k-ε model, which takes low-Reynolds-number effects into account, is used in conjunction with an enhanced wall treatment that combines a two-layer model with enhanced wall functions. Pressure-velocity coupling was resolved using the PISO scheme. A second-order implicit transient formulation is chosen, which is unconditionally stable with respect to time-step size. A second-order upwind discretization scheme is used for all equations [46].
In transient simulations (as in experiments), errors might occur during start-up, and simulations must run long enough to cover several characteristic large eddy turnover times [48]. The initial conditions for the non-steady computations are obtained from a steady simulation. The first 30 minutes of the transient simulation are discarded. The large eddy turnover time is a characteristic timescale for the domain, l₀/v₀, where l₀ is the largest scale of the room and v₀ is the characteristic velocity. An estimate of v₀ can be made by dividing the ventilation flow rate by half the cross-section of the test chamber, which gives a large eddy turnover time of five minutes for 6 ACH. The large eddy turnover time is inversely proportional to ACH if the remaining parameters remain unchanged. In order to obtain a suitable temporal average, total times of 20, 15 and 10 minutes are simulated for 6, 9 and 12 ACH, respectively (Table 1). To capture the effects of the smaller time scales related to the breathing process, a time step of 0.02 s is selected.
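The five-minute figure can be checked with simple arithmetic (a worked verification, assuming the halved cross-section is the 3.3 m × 2.8 m one):

```python
V = 4.5 * 3.3 * 2.8            # room volume [m^3]
Q = 6 * V / 3600.0             # ventilation flow rate at 6 ACH [m^3/s]
v0 = Q / (0.5 * 3.3 * 2.8)     # characteristic velocity [m/s], ~0.015
l0 = 4.5                       # largest room scale [m]
print(l0 / v0 / 60.0)          # large eddy turnover time, ~5 min
```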
Computational domain
The domain of the computational model mimics in detail the experimental geometry of the life-size hospital isolation room. Most of the domain is built with a hexahedral mesh. A tetrahedral mesh has been employed near the diffuser and near the manikins' surfaces due to their geometric complexity. Mesh refinement was performed around the manikins, the exhausts, the walls, and the displacement diffuser, since high velocity and temperature gradients are expected there. The high concentration, velocity and temperature gradients require a very fine local grid system at the manikins' faces and in the exhalation zones [49]. The shape of the manikins for the numerical simulations reproduces a thermal breathing manikin used by other authors [50,51]. Detailed information about the manikins' shape and mesh can be found in [41]. A sensitivity study was carried out with successive refinements of the exhalation zones and around the displacement diffuser. A final mesh of nearly one and a half million cells is used.
Boundary conditions
The CFD model of a patient and an HCW in the AIIR faithfully reproduces the experimental conditions [44]. The lying manikin (also known as the patient or source manikin, SM) exhales through the mouth and inhales through the nose. The standing manikin (also known as the HCW or target manikin, TM) exhales and inhales through the nose. Breathing functions are a very important point in these simulations [52]. The two manikins breathe following a sinusoidal function. For the patient manikin, the tidal volume is 0.57 litres and the breathing frequency is 20 breaths/minute. For the HCW manikin, the tidal volume is 0.66 litres and the breathing frequency is 15 breaths/minute. The patient manikin thus performs four full breaths in a 12-second period and the HCW manikin three full breaths during the same period. The velocity is uniform and normal to the surface at the HCW's nostrils and the patient's mouth. The temperature of the expired air in both manikins is 34 °C. The mass fraction of N₂O in the exhaled air of the patient manikin is Y_P = 0.027. The boundary condition for the displacement diffuser is a uniform velocity of 0.926 m/s, normal to the vertical diffuser surface. For the 12 ACH simulations, the entire front area of the displacement diffuser is used as an inlet. The upper quarter and upper half of the displacement diffuser front area are treated as walls for the 9 and 6 ACH simulations, respectively, in order to maintain the same inlet velocity [53]. The effective area of the displacement diffuser is taken into account by adding the corresponding momentum/volume source in a sub-domain in front of the diffuser [54].
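A minimal sketch of such a sinusoidal breathing boundary condition is given below; the peak flow rate Q_max = V_T·π·f follows from requiring that the flow integrated over one half-cycle equals the tidal volume, and the opening area is a placeholder, not the manikins' actual mouth or nostril geometry:

```python
import numpy as np

def breathing_velocity(t, tidal_volume_l, breaths_per_min, area_m2):
    """Uniform normal velocity for a sinusoidal breathing cycle
    (positive = exhalation). Integrating Q(t) = Q_max * sin(2*pi*f*t)
    over one half-cycle and equating it to the tidal volume V_T
    gives Q_max = V_T * pi * f."""
    V_T = tidal_volume_l * 1.0e-3     # tidal volume [m^3]
    f = breaths_per_min / 60.0        # breathing frequency [Hz]
    q = V_T * np.pi * f * np.sin(2.0 * np.pi * f * t)
    return q / area_m2                # uniform velocity over the opening

t = np.linspace(0.0, 12.0, 601)                   # one common 12 s cycle
v_p = breathing_velocity(t, 0.57, 20, 1.0e-4)     # patient (mouth)
v_hcw = breathing_velocity(t, 0.66, 15, 1.0e-4)   # HCW (nostrils)
```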
The air leaves the room through two exhaust openings located in the same wall as the displacement diffuser (west) or in the opposite wall (east). A pressure-outlet boundary condition is imposed in the exhaust openings.
In order to maintain the same mean indoor temperature for all tests, the air is supplied at a temperature of 21.8 °C, 20.6 °C, and 18.2 °C for 12, 9 and 6 ACH, respectively. The ceiling, floor and all the walls except one 4.5 m × 2.8 m wall (north or south wall) are considered adiabatic. On this non-adiabatic wall a heat flux of 39.7 W/m² is imposed as the boundary condition. This represents a glazed wall with a transmission coefficient of 4 W m⁻² K⁻¹ and a 10 °C temperature difference between indoor and outdoor air. The lying and standing manikins have a thermal load of 70 W and 80 W, respectively.
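The imposed flux is consistent with both statements above, as a quick check shows (a worked verification, not original content):

```python
wall_area = 4.5 * 2.8       # non-adiabatic wall area [m^2] = 12.6 m^2
print(500.0 / wall_area)    # 500 W gain over the wall -> ~39.7 W/m^2
print(4.0 * 10.0)           # U * dT = 4 W/(m^2 K) * 10 K = 40 W/m^2
```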
Validation of numerical results with experimental data
In rooms with displacement ventilation, thermal stratification is generated. The temperature of the exhaled air is higher than the temperature of the ambient air. The effects of buoyancy will play an important role in both the evolution of exhaled air and the dispersion of the associated pathogens. It is therefore crucial to correctly predict the room's temperature field. This numerical model was previously used to study the dispersion of exhaled contaminants by the mouth of a person standing in a room with DV. The numerical results were compared to data from experiments performed in a full-scale laboratory at the University of Aalborg [55]. It was found that the numerical model was able to accurately reproduce both the thermal stratification in the room and the deflection of the exhaled air jet. Details on the validation of the temperature field are shown in [40].
Exhaled pathogens do not disperse in the same way when expired through the nose or the mouth. How the vulnerable individual breathes also affects the microenvironment near the face and, therefore, the risk of inhaling pathogens. This CFD model was used in a previous study to analyse how the way of breathing influences the risk of cross-infection between two people standing facing each other at different distances in a room with DV. The results of the CFD simulations were compared to experimental data obtained in the Aalborg laboratory with two manikins at different distances, exhaling through the mouth and inhaling through the nose. Details of this new validation are shown in [41].
The numerical model is now validated again with the experimental results obtained in a real-scale laboratory at the University of Córdoba [44]. Comparing the results of two global ventilation efficiency indices, the air change efficiency ε_a and the contaminant removal effectiveness ε_c, clearly shows that the numerical model is able to capture the experimental tendencies and to reproduce the values to a reasonable degree of agreement (Fig 2). The experimental values of contaminant removal effectiveness are slightly higher than the numerical values. The possibility that these differences are due to difficulties that emerge when measuring these indices using photoacoustic spectroscopy cannot be ruled out.
Temporal evolution of contaminant inhaled
The exhalation, dispersion and inhalation of contaminants are transient phenomena. The numerical data provide a detailed temporal evolution of the amount of contaminant inhaled by the HCW. A breathing cycle (inhalation and exhalation) lasts three seconds for the patient and four seconds for the HCW. Every 12 seconds both cycles start at the same time, although the two breaths drift out of phase throughout the cycle. This progressive phase displacement might cause the amount of contaminant inhaled by the HCW in each of the three cycle inhalations to differ. The phase-averaged method was used in Fig 3 to show the concentration of N₂O inhaled by the HCW in 12-second cycles. It is worth noticing that the average amount of contaminant inhaled in the three inhalations is the same. This is a clear indication that the interaction between the breaths of the two manikins is very weak due to the great distance between their breathing zones [41]. Between the mouth of the patient and the nose of the HCW there is a distance of 0.94 m in a straight line. The nose of the HCW is 0.53 m away from the patient's exhalation axis.
It should also be mentioned that the amount of pollutant inhaled in each 12-second cycle changes significantly. In other words, cyclical dispersion is very noticeable, especially for low ACH. This result shows that the dispersion of exhaled contaminants and their subsequent inhalation are transient phenomena due to the transient nature of the airflow pattern in the area between P and HCW. The overall airflow in the room also exhibits a transient behaviour with room airflow frequencies lower than the manikins' breathing frequencies. These results are in line with previous works [41,56].
Airflow pattern. Air change efficiency
In a perfect mixing ventilation (PMV) flow, the air composition is equal throughout the whole room and no contaminant concentration gradients are present. PMV implies infinitely rapid diffusion, and perfect displacement ventilation (PDV) implies the complete absence of diffusion [45]. PMV and PDV are idealized theoretical flow patterns that never occur in practice but which are, nevertheless, useful concepts for comparative purposes. In a perfect mixing ventilation flow, ε_a = 0.5. In a perfect displacement ventilation flow, ε_a = 1.
As expected, in all the cases analysed using the DV strategy, 0.5 < ε_a < 1.0. There is a clear correlation between the ventilation efficiency and the position of the air exhaust openings. The air change efficiency index is also correlated with the position of the radiant wall. In all cases, when the radiant wall is behind the HCW (empty symbols), air change efficiency is greater than when it is opposite the HCW (full symbols), probably because the thermal loads are more balanced with respect to the room's plane of symmetry. No clear correlation can be established between air change efficiency and ACH.
Dispersion of the contaminant. Contaminant removal effectiveness
DV provides vertical thermal stratification that determines contaminant dispersion in the room. In order to maintain the same mean temperature in the room with less ACH, the difference between the air inlet temperature and the average temperature of the room is increased. Up to a height of 1.1 m, the vertical temperature gradient is greater for low ACH (Fig 5A), whereas from 1.1 m to the ceiling the gradients are almost equal for all ACH. This behaviour can also be seen for the other simulated cases. These numerical results are consistent with experimental observations [57,58].
As the contaminant source is also a source of heat, the contaminant will be transported directly to the top of the room [59]. The pollutant is expected to be distributed according to a two-zone model [35]: a clean zone in the lower part of the room (y < y_st) and an unclean zone in the upper part (y > y_st). The mean concentration of N₂O in horizontal planes is shown in Fig 5B. Below the patient's exhalation height, z = 0.78 m, the N₂O concentration is practically negligible. The general airflow pattern in the room, the momentum of the patient's exhalation jet and the convective effects of the exhalation jet and the thermal plume that forms above the patient cause the pollutant to rise. The average N₂O concentration in horizontal planes increases rapidly, reaching its maximum at a height between 1.6 m and 1.7 m (Fig 5B). The exhaled air is concentrated at the height of the HCW's head. This behaviour is the so-called lockup phenomenon and has recently been studied numerically [59] and experimentally [60]. The lockup phenomenon was more intense when the ventilation rate decreased, since a low ventilation rate causes a large temperature gradient [59]. The stratification height increased as the ventilation rate increased.
In a PMV flow, the concentration of contaminant in the room is homogeneous, such that the concentration in the exhaust air is the same as the mean concentration in the room, and the contaminant removal effectiveness is 1. In all the cases analysed in the present study, the contaminant removal effectiveness ε_c is between 0.73 and 2.0 (Fig 6). Only when the radiant wall is behind the HCW and ACH is 12 is ε_c below 1.0. No clear correlation could be established between ε_c and the position of the radiant wall, the ACH, or the position of the exhaust openings. Moreover, unlike what was observed with ε_a, the exhaust positions barely influence ε_c.
In a room with PMV and steady-state conditions, the contaminant concentration, Y_PMV, would be homogeneous and inversely proportional to ACH:

$$Y_{PMV} = \frac{Q_P\, Y_P}{Q_s + Q_P + Q_{HCW}}$$

where Q_P is the patient's tidal volume multiplied by their breathing frequency, Q_HCW is the HCW's tidal volume multiplied by their breathing frequency, and Q_s is the supply air flow. The Y_PMV values for 6, 9 and 12 ACH are 0.0028 Y_P, 0.0018 Y_P and 0.0014 Y_P, respectively. Between approximately 1.3 m and 1.9 m in height (the HCW inhales at a height of 1.52 m), the mean N₂O concentration in horizontal planes is higher than the corresponding Y_PMV (Fig 5B). However, this does not mean that the HCW inhales air with that mean concentration, as can be seen by comparing Fig 3 and Fig 5B. Firstly, the pollutant distribution in horizontal planes is not even and, secondly, the convective boundary layer around the HCW entrains air from the lower part of the room, carrying it to the HCW's breathing zone. When the radiant wall is opposite the HCW, the contaminant cloud is deflected towards the radiant wall, away from the HCW. This may be due to the convective effects generated by the radiant wall and the horizontal momentum of the HCW's exhalation. However, when the radiant wall is behind the HCW, the contaminant is distributed more symmetrically with respect to the room's vertical plane of symmetry. The lack of symmetry in the N₂O distribution at z = 1.52 m affects the N₂O concentration in each of the two extractions. This tendency is less pronounced for the 12 ACH cases.
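The quoted values follow from the flow rates already given; in the sketch below the 6 ACH case comes out at 0.0027, within rounding of the quoted 0.0028 (the exact form of the denominator is an assumption):

```python
V = 4.5 * 3.3 * 2.8                        # room volume [m^3]
Q_P = 0.57e-3 * 20.0 / 60.0                # patient mean exhalation [m^3/s]
Q_HCW = 0.66e-3 * 15.0 / 60.0              # HCW mean exhalation [m^3/s]
for ach in (6, 9, 12):
    Q_s = ach * V / 3600.0                 # supply air flow [m^3/s]
    print(ach, Q_P / (Q_s + Q_P + Q_HCW))  # ~0.0027, 0.0018, 0.0014 (x Y_P)
```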
Infection risk. Intake fraction
The intake fraction index was obtained as the ratio between the mass of N₂O inhaled by the HCW during the simulated time and the total mass of N₂O exhaled by the patient over the same time. In a room with PMV and steady-state conditions, the intake fraction index, IF_PMV, would be homogeneous and inversely proportional to ACH:

$$IF_{PMV} = \frac{Q_{HCW}}{Q_P}\, \frac{Y_{PMV}}{Y_P}$$

If the breathing rate multiplied by the tidal volume were equal for both manikins, then IF_PMV = Y_PMV/Y_P. The IF_PMV values for 6, 9 and 12 ACH are 0.0024, 0.0016 and 0.0012, respectively. Fig 8 shows the IF values for all the DV cases analysed together with the IF values that would correspond to PMV. As ACH increases, IF values are expected to decrease. However, when the radiant wall is opposite the HCW (Fig 8A), the lowest IF values are for 9 ACH. This unexpected behaviour was also observed in the validation experiments [44].
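Scaling Y_PMV by the ratio of the HCW's to the patient's mean breathing flow rate reproduces these values, confirming the 0.0016 figure for 9 ACH (a verification sketch):

```python
ratio = (0.66 * 15.0) / (0.57 * 20.0)        # Q_HCW / Q_P ~ 0.868
for ach, y_pmv in ((6, 0.0028), (9, 0.0018), (12, 0.0014)):
    print(ach, round(ratio * y_pmv, 4))      # 0.0024, 0.0016, 0.0012
```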
As with the results shown in Fig 7, the position of the exhaust openings does not have a significant influence on IF. In 8 of the 12 cases analysed, IF is clearly higher than IF_PMV: the amount of inhaled contaminant, and therefore the risk of infection, is clearly higher than what would correspond to PMV. These results discourage the use of displacement ventilation in AIIRs.
Comparison between indexes
Both ε_a and ε_c are global indices and provide valuable information for quantifying the quality of ventilation in the whole room, regardless of the ACH. However, the spatial distribution and temporal evolution of both the contaminant concentration and the contaminant age are neither homogeneous nor static. In order to estimate the risk of infection, a local index such as the IF is more appropriate. The drawback, however, is that IF depends on ACH. If the aim is to compare cases with different ACH, it is necessary to normalise. One possible normalisation is by the IF that would correspond to PMV, IF_PMV. The three indices shown in Table 2 are independent of ACH. Table 2 shows that air change efficiency is correlated neither with contaminant removal effectiveness nor with intake fraction. There is also no clear correlation between contaminant removal effectiveness and intake fraction. The position of the HCW and the number of ACH have virtually no influence on air change efficiency. The position of the exhaust openings has a greater impact on ε_c than on ε_a, whereas changing the position of the HCW has a greater effect on ε_a than on ε_c. Only in two of the twelve cases analysed is IF/IF_PMV clearly below 1. Therefore, although DV is a suitable strategy for renewing the air in the room and eliminating exhaled contaminants (high values of ε_a and ε_c), it is not as effective at decreasing the cross-infection risk.
Conclusions
Detailed transient CFD tests have been carried out to determine the suitability of a DV strategy in a representative case study of a one-bed hospital room. Two exhaust configurations, two external wall positions and three air renewal rates have been tested. In view of the results, the following conclusions may be drawn: In most of the DV cases analysed in this paper, the values of air change efficiency and contaminant removal effectiveness are very promising. Nevertheless, analysing AIIR ventilation system performance based exclusively on global indices, such as air change efficiency or contaminant removal effectiveness, entails major limitations that can lead to erroneous decisions. Intake fraction provides more useful information in this type of room. This conclusion is supported by the lack of correlation between IF and the rest of the indices.
A priori, it could be assumed that an increase in the number of ACH leads to a decrease in the risk of infection. However, it has been found that an increase in the ventilation rate may not decrease exposure and in certain circumstances may indeed increase it. This conclusion concurs with other works [23,61,62]. The reason for this behaviour is unknown, and further analyses are required to understand the phenomenon. The local airflow pattern plays a crucial role in the transport and dispersion of exhaled pathogens and, consequently, in the risk of cross-infection. Numerous works have studied the impact of the position of the air inlets and exhaust openings, or of ACH, on ventilation efficiency. In most of these works, the room envelope was considered to be well insulated and therefore adiabatic. This work has shown that the presence of a radiant wall significantly affects the air flow pattern and contaminant dispersion. For the cases analysed, the relative position between the HCW and the radiant wall has a greater impact on the risk of infection than the extraction position or the number of ACH. When the radiant wall is opposite the HCW, the combined convective effect of the wall and the exhaled air drives the pollutant cloud away from the HCW.
In all the DV cases analysed, contaminants exhaled by the patient accumulate at the HCW inhalation height. This is the well-known lockup phenomenon. However, the air inhaled by the HCW comes from lower layers due to the effect of the convective boundary layer around the HCW. This argument has been used previously to recommend the use of DV in this type of room. However, despite this effect, the results obtained in this work do not advocate the use of DV. IF is no better than what may be expected for a PMV air flow pattern. It would be interesting to analyse the suitability of other ventilation strategies, such as downward-directed ventilation, for airborne infection isolation rooms.
Limitations of the work
This work does not consider human movement or the movement involved in opening and closing doors. An analysis of how human movement affects the air flow pattern of displacement ventilation would prove extremely valuable. A further limitation is that the external wall is simulated with only one heat gain value. In a real situation, heat gain changes during the day and with the seasons. In addition, the study only analyses the exhalation of small respiratory bio-aerosols during breathing. Taking into account other pulmonary activities, such as coughing or sneezing, which expel larger droplets with higher initial momentum, would provide further insights.
Microbial survival in the environment is another issue not dealt with as it also lies beyond the scope of this article. The results obtained and discussed in this work are obtained under specific conditions which, whilst representative of the most common situations, do not represent every possible scenario. For instance, if the distance between the people or the height at which the contaminants are exhaled were changed then exposure to the contaminants might change. Despite these limitations, however, the results might prove helpful in the design of AIIRs in order to implement ventilation strategies where the exposure to exhaled contaminants may be reduced in most situations.
|
v3-fos-license
|
2022-06-03T15:08:08.473Z
|
2022-06-01T00:00:00.000
|
249297804
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00170-022-09400-z.pdf",
"pdf_hash": "825bc715126e8c66410ab785aa63c6cd929e8e4d",
"pdf_src": "SpringerNature",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45929",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"sha1": "e1cccc9f390855267d4ba45a8e1530097dcc1306",
"year": 2022
}
|
pes2o/s2orc
|
Geometry modifications of single-lip drills to improve cutting fluid flow
For single-lip drills with small diameters, the cutting fluid is supplied through a kidney-shaped cooling channel inside the tool. In addition to reducing friction, the cutting fluid is also important for the dissipation of heat at the cutting edge and for the chip removal. However, in previous investigations of single-lip drills, it was observed that the fluid remains on the back side of the cutting edge, and accordingly, the cutting edge is insufficiently cooled. In this paper, a simulation-based investigation of an introduced additional drainage flute and flank surface modifications is carried out using smoothed particle hydrodynamics as well as computational fluid dynamics. It is determined that the additionally introduced drainages lead to a slightly changed flow situation, but a significant flow behind the cutting edge and into the drainage flute cannot be achieved, for reasons explained in this paper. Accordingly, not even a much larger drainage flute, with the unwanted side-effect of decreased tool strength, is able to achieve a significant improvement of the flow around the cutting edge. Therefore, major changes to the cooling channel, such as the use of two separate channels, the modification of their positions, or modified flank surfaces, are necessary in order to achieve an improvement in the lubrication of the cutting edge and heat dissipation.
Introduction
Deep hole drilling is used for manufacturing in numerous areas. Which type of deep hole drilling method is used depends on the required borehole properties, but in particular also on the borehole diameter, as well as the desired length-to-diameter (L/D) ratios. For the single-lip deep hole drilling (SLD), the length-to-diameter ratio is up to L/D = 900 and the diameter is D = 0.5…80 mm. SLD is used for the production e.g. of diesel injection nozzles in the automotive industry and is characterized by a high surface quality and residual compressive stresses in the hole bottom [1]. When drilling without cutting fluid, very high temperatures occur in the cutting zone. These are also dependent on the material to be machined. For example, they are between T = 600 °C and 800 °C for 42CrMo4 [2] and even higher for materials such as Inconel 718. To ensure workpiece quality, to reduce tool wear, and to improve process productivity, it is therefore necessary to use a cutting fluid [3]. In SLD, the cutting fluid is usually fed to the cutting zone through cooling channels located inside the tool. The shape and number of cooling channels depend on the bore diameter and the tool diameter. If the diameter is D < 10 mm, the tool usually has only one kidney-shaped cooling channel [4]. However, the efficiency of the cutting fluid in the cutting zone cannot be determined experimentally due to the inaccessibility during deep drilling. For the analysis, it is therefore important to use simulations, to deepen the understanding of the process and to initiate optimization measures based on the results.
Smoothed particle hydrodynamics (SPH) is a method for the description of fluids by moving interpolation points, the so-called particles. Therefore, the Navier-Stokes equations describing the fluid movement are represented by weighted sums over a set of neighbor particles [5]. Its Lagrangian and meshfree nature allows SPH to describe arbitrary and moving surfaces and interfaces. Its areas of application are especially when non-regular, moving free surfaces or dynamic fluid/structure interaction are present.
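To illustrate the interpolation idea (a generic sketch, not the solver used in this work; a Gaussian kernel is chosen for brevity, whereas production codes typically use compactly supported spline kernels):

```python
import numpy as np

def sph_density(positions, masses, h):
    """Basic SPH interpolation: rho_i = sum_j m_j * W(|r_i - r_j|, h),
    with a 3-D Gaussian kernel W = exp(-r^2 / h^2) / (pi^1.5 * h^3)."""
    norm = 1.0 / (np.pi ** 1.5 * h ** 3)
    rho = np.empty(len(positions))
    for i, r_i in enumerate(positions):
        q2 = np.sum((positions - r_i) ** 2, axis=1) / h ** 2
        rho[i] = np.sum(masses * norm * np.exp(-q2))
    return rho

# Particles on a 1 mm lattice carrying the mass of an 850 kg/m^3 fluid:
pts = 1e-3 * np.array([[x, y, z] for x in range(3)
                       for y in range(3) for z in range(3)], dtype=float)
print(sph_density(pts, np.full(len(pts), 850.0e-9), h=2.0e-3))
```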
The use of computational fluid dynamics (CFD) for such investigations is still not very common in the field of machining. However, to make progress in this field, it is very important to develop solutions for demanding applications efficiently [6]. The application of CFD is complicated by the definition of the physical conditions (fluid mechanics), the modelling of the internal flow field (fluid modelling), the definition of boundary conditions, the meshing strategy and meshing of the inflation layer, the correct choice of the turbulence model to be used, and the handling of the actual simulation software in complicated situations.
The understanding of the complex fluid dynamics in combination with mechanical effects is very important for improving machining processes. The design of machining tools can be optimized with detailed knowledge of the cutting fluid and the complex interactions. In [7], a combined approach shows for SLD that the mass exchange of the cutting fluid close to the cutting edge is far too low to guarantee the required cooling effect. The simulation of an SLD [8] shows great potential for optimizing the tool geometry.
Earlier investigations of the fluid flow of single-lip drills show that the fluid gets trapped on the backside of the cutting edge [9]. As shown in Fig. 1, these investigations of different designs show that a large drainage flute diverts a significant amount of the cooling fluid flow in direction of the backside of the cutting edge. Accordingly, it can be assumed that the transport of the heat which is generated by the cutting process has great potential for improvement.
However, these modifications are made purely from a fluid-dynamics point of view without considering the stresses and the limits of manufacturability of these additional drainage flutes. Similar modifications have been made to the flank face design of twist drills [10], which show an improved cutting-fluid flow around the cutting edges and reduce the tool wear. Furthermore, a combination of both ideas, an additional introduced drainage flute together with a modification of the flank face design, might be beneficial for the fluid flow around the cutting edge.
The novel contribution of this paper is the simulation-based investigation of whether an additional drainage flute and flank face modifications, designed with manufacturability in mind, can divert the fluid flow around the cutting edge. Therefore, SPH and CFD analyses of three modifications are carried out. These modifications show one way in which an additional drainage flute could be realized, as well as its combination with modifications of the flank face design, where the size of the modifications is chosen based on technological aspects.
Geometric modifications of the single-lip drill
The SLD considered in this work has a diameter of as little as D = 2 mm and a single kidney-shaped cooling channel. The small diameter was chosen because the machining of miniaturized industrial components requires high process reliability. The extremely high strength, the strong tendency to work hardening, and the low thermal conductivity make the drilling of nickel-based alloys such as Inconel 718 very demanding. With regard to deep drilling, the pronounced ductility poses further challenges in terms of chip formation as well as process-safe chip evacuation, which are primarily influenced by the cutting fluid flow. When deep drilling with the smallest diameters, the limited dimensions are an additional factor. Micro deep drilling in particular is one of the critical key processes here and is used widely, e.g., in the medical technology, textile, and automotive industries. The introduced drainage flute has a circular shape with a radius of r = 0.2 mm and a depth of d = 0.07 mm, to be machined by a grinding wheel. This would allow simple machining of existing drills for experimental validation in the case of a successful predictive simulation result. The drainage is placed as close as possible to the cutting edge while ensuring that the tool structure is not weakened. Additionally, the flute is placed on the opposite end, where the cutting fluid flows into the chip flute, to redirect the flow around the back of the cutting edge into an area where almost no flow occurs, as later shown for the reference design (V0). Thereby, fresh cutting fluid could better absorb the dissipated heat behind the cutting edge and at the borehole bottom directly after the material removal. Furthermore, a larger radius or depth while maintaining the minimum distance extends the drainage further away from the cutting edge, which does not improve the desired lubrication of the cutting edge. The second modification is a direct channel in the flank face from the outlet to the drainage flute. The geometries of all investigated variants are shown in Fig. 2. The geometries were investigated by SPH and by CFD in combination. The respective simulation models have different assumptions, as SPH starts with an empty borehole in a transient simulation while CFD meshes the complete fluid domain, thereby assuming a completely filled borehole that is simulated statically. Consequently, both methods supplement each other in modelling the highly turbulent and dynamic cutting-fluid flow. The SPH simulation uses the weakly compressible formulation, for which no pressure boundary condition needs to be applied. The influx velocity is selected as v = 69 m/s. This choice leads to the same outflow velocity in the chip flute as a pressure boundary condition of p = 100 bar applied in the CFD simulation. The properties of the fluid and further boundary conditions are listed in Table 1.
Fig. 1 Fluid analysis of a large drainage flute which diverts a significant amount of flow along the cutting edge [9]
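For orientation, the weakly compressible SPH formulation mentioned above typically closes the system with a stiff equation of state rather than a pressure boundary condition; a common choice is the Tait form sketched below (the density value is a placeholder since Table 1 is not reproduced here, and the speed of sound follows the usual rule of thumb of roughly ten times the maximum flow velocity):

```python
def tait_pressure(rho, rho0=850.0, c0=690.0, gamma=7.0):
    """Tait equation of state for weakly compressible SPH:
    p = B * ((rho / rho0)**gamma - 1), with B = rho0 * c0**2 / gamma.
    With c0 ~ 10x the 69 m/s influx velocity, density fluctuations
    stay around 1%, approximating incompressible behaviour."""
    B = rho0 * c0 ** 2 / gamma
    return B * ((rho / rho0) ** gamma - 1.0)
```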
The simulation models used in both numerical methods, CFD and SPH, were validated for the reference design in earlier studies [11] by tracking, via high-speed camera analysis, micro-particles that were added to the cutting fluid. In contrast to earlier studies [11], a drilling oil, whose properties are listed in Table 1, is used instead of water.
Simulation results
As the results of the reference simulation (V0) show, the fluid gets trapped behind the cutting edge as shown by the absence of streamlines in this area and by the low fluid velocity. In this area, as well as on the cutting edge itself, higher temperatures can be expected due to the chip formation. Thus, major improvements in the cutting fluid flow and thereby in the heat transportation are required. Figure 3 shows the streamline plots of the reference solution in comparison to the investigated drill modifications. For the SPH simulations (V0, V1, V3), an empty borehole at the beginning of the simulation is used as starting point and the fluid flow is investigated after a time of t = 0.2 ms when the flow around the cutting edge is nearly static. The CFD simulation (V0, V1, V2) starts with an initially completely filled borehole and assumes a static flow. The comparison of CFD and SPH analyses for the reference design (V0) and the drainage flute (V1) show a good agreement in their results as in previous works [11].
The results of both the CFD and SPH simulations show that the cutting fluid flow is not improved significantly by the geometric changes compared to the reference design V0. The CFD simulation yields an outflow velocity of approximately v = 80 m/s for the geometric modifications V1 and V2 in the centre of the kidney-shaped cooling channel. However, in all designs, the flow slows down to approximately v = 55 m/s in the chip flute, similar to the reference design. In contrast to V1, the additionally modified flank face channel in V2 and its modification in V3 achieve a considerably better diversion, and the fluid behaviour can be influenced more effectively. Nevertheless, the impact on the velocity, and thereby on the volume flow around the cutting edge, is minor. The fluid flow inside the drainage flutes is shown for modification V3 in Fig. 4. It can be seen that the flow inside the drainage flute is laminar, which also applies to the other modifications. The flow is accelerated due to the throttling effect at the entrance to the drainage flute and flows within it at a speed of between 20 and 30 m/s for modification V3. However, the volume flow rate in the drainage flute is negligible, as it is about two orders of magnitude smaller than in the chip flute.
Despite the low flow rate in the drainage flute, it might create additional vortices behind the cutting edge, which benefit the heat conduction by distributing fresh cooling liquid. For the analysis of the distribution of the fluid and the vortices, the Q-criterion identification method [12] is used. This describes the local balance between rotation and shear in all spatial directions. As soon as the vorticity outweighs the shear, vortices are generated. The Q-criterion is determined from the rotation-rate tensor Ω and the strain-rate tensor S as Q = ½(‖Ω‖² − ‖S‖²). Figure 5 shows the calculated Q-criterion with an iso-surface. Due to the meshless characteristics of SPH, a Delaunay triangulation is performed before calculating the Q-criterion for the SPH results. The visualization is carried out based on the same threshold value for all SPH results. Larger visible areas correspond to larger regions where more vortices occur, which benefits the intermixture of the cutting fluid and thereby improves heat transport. The comparison of the results for the designs V1 and V2/V3 shows that the additional flank face channel in V2 and in V3 results in more vortices around the cutting edge, as shown by more highlighted areas. This results in a better mixture of the fluid layers around the cutting edge and, consequently, in better cooling and lubrication around the cutting edge.
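A pointwise form of the criterion is compact enough to state directly (a generic sketch for a single 3×3 velocity gradient tensor; field-wide evaluation proceeds cell by cell, or vertex by vertex after the Delaunay triangulation described above):

```python
import numpy as np

def q_criterion(grad_u):
    """Q = 0.5 * (||Omega||_F^2 - ||S||_F^2) for a velocity gradient
    tensor grad_u[i, j] = du_i/dx_j, where S (strain rate) and Omega
    (rotation rate) are its symmetric and antisymmetric parts.
    Q > 0 marks regions where rotation outweighs shear: vortex cores."""
    S = 0.5 * (grad_u + grad_u.T)
    Omega = 0.5 * (grad_u - grad_u.T)
    return 0.5 * (np.sum(Omega ** 2) - np.sum(S ** 2))

# Pure rotation gives Q > 0; pure (extensional) strain gives Q < 0:
print(q_criterion(np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])))
print(q_criterion(np.array([[1., 0., 0.], [0., -1., 0.], [0., 0., 0.]])))
```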
While all modifications show some fluid flow through the additional drainage flute, the total fluid volume flowing through the second flute is small. Therefore, the modifications have little influence on the chip evacuation, which depends strongly on the cutting-fluid flow for single-lip drills. In particular, the additional flank face channel leads to a slightly improved flow into the drainage flute. Furthermore, the orientation parallel to the cutting edge leads to a higher amount of fluid flowing around this area as well as an improved mixture of the fluid layers behind the cutting edge, which should improve the cooling there. However, the changes in the fluid distribution introduced by the modifications are quite small and are not yet sufficient for a better flow with higher velocities around the cutting edge. Furthermore, the small improvement of the flow behind the cutting edge does not improve the tool life by a factor that compensates or outweighs the weakening of the cutting edge introduced by the drainage flute. The observed fluid behaviour can be explained by the stream filament theory [13]. As the cross-section is not significantly changed by the additional modifications and the fluid flow is mostly driven by its momentum and follows the streamline, a significant flow behind the cutting edge and into the drainage flute cannot be achieved. Therefore, significant changes to the design of the cooling channel and its outlet would be necessary to divert the cooling flow into the area behind the cutting edge. Furthermore, this effect is increased by the adhesion of the fluid molecules to the wall in the boundary layer and the creation of many small vortices at the inlet to the drainage flute.
As the presented modifications do not lead to a major improvement at the cutting edge, a significantly larger design change of the drainage flute was tested. For this purpose, a finite element-based parameter study was carried out, which showed that the torsional strength of the drill is not significantly reduced except for very large modifications. However, the deformation of the right-hand side of the cutting edge by the cutting forces increases rapidly with increasing size of the drainage flute. In the following, a larger modification, whose cutting edge deformation is already critical, was used to get an idea of the upper limit achievable with the given design of the drainage flute. Figure 6 shows the streamline plot and the iso-surface of the Q-criterion of the enlarged drainage flute with a radius of r = 0.336 mm and a depth of d = 0.158 mm.
The streamline plot shows that more fluid is diverted into the drainage flute compared to the smaller design of modification V1 and that higher flow velocities around the front into the drainage, of about v = 20 m/s, are reached. Furthermore, the Q-criterion shows higher vorticity in the region below the cutting edge, which benefits the heat dissipation. This makes clear that the flow can be influenced. However, as stated before, the increased size of the drainage flute still moves the flow away from the cutting edge. Therefore, a strongly improved flow around the cutting edge is not achieved, and the increased vorticity is not sufficiently attractive compared to the major weakening of the tool strength. Consequently, it shows that even a larger modification of the presented design does not significantly benefit the cutting edge lubrication and heat dissipation.
Conclusions
In this paper, the cutting fluid distribution in the SLD was analyzed using SPH and CFD simulation. In order to bring more cutting fluid closer to the main cutting edge of the tool, an SLD with a diameter of D = 2 mm was geometrically modified. The first modification V1 was a drainage flute and in the second modification V2 an additional flank face channel from the outlet to the drainage flute was introduced into the tool. Based on the reference model, both the SPH and CFD investigations showed that no significant improvements could be achieved with the very shallow additional drainage flute along the drill shaft (V1). The radius of r = 0.2 mm and the depth of d = 0.07 mm are geometrically limited by the required strength of the drill. With an additional flank face channel (V2), a better diversion of the fluid in the direction of the cutting edge could already be achieved compared to V1. However, the modification does not yet represent a satisfactory result, as it further weakens the cutting edge zone. In general, it is possible to influence the flow along the backside of the cutting edge; however, the three investigated modifications are not able to divert enough flow to provide a significant improvement. A largely increased drainage flute design, which already weakens the cutting edge, leads to an increased flow velocity into the drainage, but diverts the flow away from the cutting edge. Therefore, further geometric changes are planned for the future, such as a change in the kidney-shaped cooling channel, including the comparison with the use of two separate cooling channels and the modification of their position, to divert the cooling flow into the area behind the cutting edge.
Funding Open Access funding enabled and organized by Projekt DEAL. This research was supported by the Deutsche Forschungsgemeinschaft (DFG) under grant numbers 405605200 and 439917965.
Availability of data and material
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Declarations
Ethics approval Not applicable.
Competing interests
The authors declare no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
|
v3-fos-license
|
2021-10-14T05:19:29.301Z
|
2021-09-28T00:00:00.000
|
238742365
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fphar.2021.693673/pdf",
"pdf_hash": "c0c320522030afa1b4c4fbae5c38fee79671538b",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45930",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "c0c320522030afa1b4c4fbae5c38fee79671538b",
"year": 2021
}
|
pes2o/s2orc
|
Understanding Fatal and Non-Fatal Drug Overdose Risk Factors: Overdose Risk Questionnaire Pilot Study—Validation
Background: Drug overdoses (fatal and non-fatal) are among the leading causes of death in populations with substance use disorders. The aim of the current study was to identify risk factors for fatal and non-fatal drug overdose in a predominantly opioid-dependent, treatment-seeking population. Methods: Data were collected from 640 adult patients using a self-reported 25-item Overdose Risk (OdRi) questionnaire pertaining to drug use and identified related domains. Exploratory factor analysis (EFA) was primarily used to improve the interpretability of this questionnaire. Two sets of EFA were conducted; in the first set of analyses, all items were included, while in the second set, items related to the experience of overdose were removed. Logistic regression was used to assess the latent factors' association with both fatal and non-fatal overdoses. Results: EFA suggested a three-factor solution accounting for 75 and 97% of the variance for items treated in the first and second sets of analyses, respectively. Factor 1 was common to both sets of EFA analyses, containing six items (Cronbach's α = 0.70) centred on "illicit drug use and lack of treatment." In the first set of analyses, Factors 2 (Cronbach's α = 0.60) and 3 (Cronbach's α = 0.34) centred on the "mental health and emotional trauma" and "chronic drug use and frequent overdose" domains, respectively. An increase in Factor 2 was found to be a risk factor for fatal drug overdose (adjusted coefficient = 1.94, p = 0.038). In the second set of analyses, Factors 2 (Cronbach's α = 0.65) and 3 (Cronbach's α = 0.59), as well as Factor 1, were found to be risk factors for a non-fatal drug overdose ever occurring. Only Factors 1 and 3 were positively associated with non-fatal overdose (one in the past year). Conclusion: The OdRi tool developed here could be helpful for overdose risk assessment in clinical studies. However, integrating validated tools for mental health can probably help refine the accuracy of the latent variables and the questionnaire's consistency. Mental health and life stress appear to be important predictors of both fatal and non-fatal overdoses.
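For reference, the internal-consistency statistic reported above can be computed as follows (a generic sketch for an items-by-respondents score matrix, not the authors' analysis code):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix:
    alpha = k / (k - 1) * (1 - sum(item variances) / var(total score))."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)
```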
INTRODUCTION
The rates of drug-related deaths (DRDs) and non-fatal drug-related overdoses (ODs) among opioid users are increasing (Iversen et al., 2016). Illicit and licit drug overdose is a leading cause of premature death and morbidity among this population (Darke et al., 2003; Iversen et al., 2016). Worldwide, overdose-related mortality accounts for 0.65 (0.55-0.75) deaths per 100 person-years, followed by trauma- and suicide-related deaths, with values of 0.25 and 0.12, respectively (Degenhardt et al., 2011). In Scotland, 49% of the drug-treatment-seeking population had experienced a drug overdose at some time in the past and 11% had overdosed in the past 3 months (Bohnert et al., 2011).
A review of the risks of fatal drug overdose in opioid users identified the following three key components (Frisher et al., 2012): 1) individual: factors relating to the drug (licit or illicit) users themselves; 2) situational: the circumstances surrounding an overdose; and 3) organizational: the response to an overdose incident.
Taken together, these components lead to a complex set of risk factors which will influence the likelihood of a drug overdose occurrence being fatal (European Monitoring Centre for Drugs and Drug Addiction, 2015). Given the premise that multiple variables will influence the risk of drug overdose, it is important to develop preventative measures which can take account of multiple components and provide a more tailored approach to opiate overdose. To date, research has focused on identifying individual person-centered characteristics and circumstances as risk factors. The severity of dependence, recent prison release, recent detoxification, polysubstance use, social deprivation, history of suicide attempt, recent hospital discharge, length of drug using career, number of network members who inject drugs, lifetime history of negative life events, male gender, and homelessness have all been reported as risk factors for fatal opioid-related overdoses (Wolff, 2002;Neale and Robertson, 2005;Coffin et al., 2007;Rome et al., 2008;Backmund et al., 2009;Bohnert et al., 2010;Merrall et al., 2010;Jenkins et al., 2011;Frisher et al., 2012;Mathers et al., 2013).
However, the relative impact of these factors on overdose risk, and how they may combine to predict the risk of experiencing a fatal drug overdose, remains poorly determined. Despite the considerable scope of the problem, the independent predictive factors for opioid-related drug overdoses have not been the subject of robust methodological evaluation (Laupacis et al., 1997; McGinn et al., 2000; Reilly and Evans, 2006). This problem is likely to worsen given the aging population of opioid drug users in the United Kingdom (Public Health England, 2016). A recent survey of 123 drug users aged over 35 years found that 75% had overdosed at some point in their lives and 33% in the last 12 months. Extrapolation to the drug-using population in Scotland estimated that 4,500 drug users aged over 35 years will experience an overdose event annually (Matheson et al., 2019). As this group has multiple health challenges and problems of social isolation, the number of fatal overdoses can be expected to increase.
Perception of risk is conceptualized in terms of 1) personal vulnerability to the health effects of one's risky behavior through knowledge acquisition (Kotchick et al., 2001), 2) "optimistic bias" (the inaccurate estimation of one's personal risk as lower than that of comparable others), and 3) "precaution effectiveness" (the belief that engaging in precautionary behavior will benefit one's health) (Peretti-Watel, 2003). As a result, this cognitive process could increase vulnerability to drug overdose.
For overdose prevention and response research, a broad assessment capable of capturing behavioral risks in populations with varying substance choices and use patterns is critically important, particularly as we seek to understand the precipitants of changes in overdose risk behaviors among at-risk populations. To better understand the factors that cause opioid-related overdose, a first step is to comprehensively assess overdose risk behaviors and test their associations with overdose events.
One difficulty in preventing fatal and non-fatal drug overdoses is that the risk factors for such episodes are not well understood; therefore, at-risk individuals cannot be reliably identified, and interventions cannot be targeted at those most at risk. To date, research has focused on identifying isolated characteristics and circumstances as risk factors, such as age, gender, previous overdoses, homelessness, recent prison release, and adverse life events (Rome et al., 2008). However, because there is no understanding of the relative impact of these factors on drug overdose risk, or of how these factors may combine to affect the risk of suffering an overdose, the ability to predict overdoses and fatality remains poor (see Fischer et al., 2015, for an overview).
To date, longitudinal work with substance abusers has focused on understanding the risk factors for moving from substance use to dependence (Wittchen et al., 2008; Swendsen et al., 2009). Such work has highlighted the importance of sociodemographic and gender factors when estimating risk in this population. However, despite the considerable scope of the problem, the risk factors relating to drug overdoses have never been examined in a comprehensive, principled, and methodologically rigorous manner.
The present study addresses this issue by piloting a data collection form, the overdose risk assessment (OdRi) questionnaire, designed to link drug overdose risk factors in a cohort of treatment-seeking opioid-dependent individuals in Scotland to the actual incidences of fatal and non-fatal drug overdoses these individuals subsequently experienced (Supplementary Material). As such, this study helps begin to identify the quantitative weighting of risk factors for fatal and non-fatal drug overdoses, both in isolation and in combination. Such understanding is fundamental to targeting specific interventions more effectively at those most at risk of suffering overdoses, with the potential to prevent such outcomes and ultimately save lives. It will also help establish algorithms to support ecologically valid user applications that can predict outcomes of risky behaviors in this population.
METHODS

Information and Ethical Governance Approvals
The OdRi study received Caldicott Guardian approval from NHS Fife in November 2010. Following consultation with the
Participants and Sample Size
The participants for this study were patients of the National Health Service (NHS) Fife Addiction Services, which treats approximately 1,900 substance users at any one time.
In Fife, on average, there have been 30 fatal drug overdoses (drug deaths) each year over the past 6 years. Of these, around 50% were known to NHS Fife Addiction Services (Baldacchino et al., 2009;Baldacchino et al., 2010;Frisher et al., 2012;Bartoli et al., 2014). Therefore, during a data collection period of 12 months, it was anticipated that approximately 10 individuals (of the 600) would suffer a fatal drug overdose.
The anticipated number of non-fatal overdoses is somewhat more difficult to estimate. The Scottish Ambulance Service attends around 15 non-fatal overdoses (illicit and licit) each week in Fife, with a rough estimate that only about 30% of these involve individuals known to NHS Fife Addiction Services. Therefore, over a 12-month period, it was estimated that around 84 non-fatal drug overdose events were likely to occur in individuals known to NHS Fife Addiction Services (note that these are overdose incidents, not numbers of individuals; i.e., a single individual is likely to suffer repeated overdoses). One longitudinal study of a cohort of Scottish drug users receiving treatment for substance use disorder found that 49% of the sample had overdosed at least once in the past, and 11% had done so in the past 3 months (McKeganey, 2008).
For the purposes of this pilot study, 640 individuals who were referred to NHS Fife Addiction Services for opioid dependence completed the OdRi questionnaire during their initial assessment between 2010 and 2012. These OdRi data were then followed up over the subsequent 5-year period for incidents of fatal and non-fatal drug overdoses and for additional proxy measures of morbidity and mortality, as indicated through linkage of the cohort's clinical datasets.
Overdose Risk Assessment Questionnaire
Overdose risk factors initially identified through a systematic review as "individual," "situational," and/or "organizational" risk factors were subcategorized into the domains covered by the questionnaire items.
The OdRi questionnaire is a 25-item self-reported measure assessing the risk of fatal and non-fatal overdoses. Each item is scored 0 (No) or 1 (Yes), and a higher score indicates a higher risk of overdose (Supplementary Material: OdRi questionnaire).
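As a minimal illustration of this scoring scheme (a sketch, not the study's code; the data frame and item names below are hypothetical), the total OdRi score is simply the sum of the 25 binary items:

```r
# Minimal sketch of the OdRi scoring scheme (illustrative only; the data
# frame and item names are hypothetical, not taken from the study dataset).
set.seed(1)
odri <- as.data.frame(matrix(rbinom(640 * 25, 1, 0.3), nrow = 640,
                             dimnames = list(NULL, paste0("item", 1:25))))

# Each item is scored 0 (No) or 1 (Yes); the total is the simple sum,
# with higher totals indicating a higher presumed overdose risk.
odri$total_score <- rowSums(odri[, paste0("item", 1:25)])
summary(odri$total_score)
```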
Data Linkage
All treatment-seeking opioid-dependent users attending NHS Fife Addiction Services completed the overdose risk assessment (OdRi) questionnaire with a clinical staff member. These data were entered into an NHS electronic system and then deposited, in an anonymized and coded electronic format, into the Health Informatics Centre (HIC) Safe Haven (University of Dundee, Health Informatic Centre (HIC), 2015) to be subsequently interrogated by the researchers of this pilot study within a time-limited period. HIC Services is a University of Dundee research support unit within the Farr Institute-Dundee, in collaboration with NHS Tayside and NHS Fife.
This database was expanded through linkage processes to include overdose events that these individuals experienced over the following 5-year period. Information about overdoses was obtained from A&E and hospital discharge records (for non-fatal overdoses) and from the procurator fiscal (for fatal overdoses). Other datasets used within the HIC Safe Haven include 1) the Scottish Morbidity Register (SMR) SMR01 and SMR04 datasets, which register all hospital medical and psychiatric admissions, respectively, and 2) SMR25a/b, which records new treatment episodes for substance misuse. Demographic data were also collected, including the Scottish Index of Multiple Deprivation (SIMD) (1 = most deprived and 10 = most affluent). The CHI (Community Health Index) number, a unique patient identifier, was used to link healthcare records to the abovementioned datasets held within the HIC.
All relevant data were anonymized for the researcher when conducting the analysis.
Statistical Analysis
Stata 14 (Stata Corporation, College Station, TX, United States, 2015) was used for data management and statistics. The analysis was based on factor analysis followed by logistic regression, in order to gain initial insights into the relative strength of the individual risk factors in predicting fatal and non-fatal drug overdoses.
Before running the exploratory factor analysis, the Kaiser-Meyer-Olkin (KMO) test and Bartlett's test of sphericity were used to evaluate factorability. We opted for exploratory factor analysis with oblique (promax) rotation of the tetrachoric correlation matrix in order to collapse the questionnaire items into interpretable underlying factors. This approach was retained because of the binary format of the OdRi questionnaire items (Muthén, 1978; Muthén and Hofacker, 1988; University of Dundee, Health Informatic Centre (HIC), 2015).
Only items with a communality above 0.4 (Osborne et al., 2008) and a factor loading >0.4 were retained in the Results section. The three factors retained were as follows:
1. Illicit drug (usually heroin and benzodiazepine) and alcohol use and lack of treatment
2. Mental health and emotional trauma
3. Chronic drug use and frequent overdose
Factor retention was based on interpretability along with scree plot examination (Cattell, 1966) and the Kaiser criterion of eigenvalue >1 (Kaiser, 1960). The reliability of items was examined by computing the Cronbach's alpha coefficient (Santos, 1999).
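A sketch of this factor-analytic pipeline in R is shown below (the study itself used Stata; the psych-package functions are one standard way to reproduce the approach, and the simulated items are hypothetical):

```r
library(psych)  # KMO(), cortest.bartlett(), tetrachoric(), fa(), alpha()

# Hypothetical stand-in for the 25 binary OdRi items
set.seed(1)
items <- as.data.frame(matrix(rbinom(640 * 25, 1, 0.3), nrow = 640,
                              dimnames = list(NULL, paste0("item", 1:25))))

# Factorability checks
KMO(items)                                     # Kaiser-Meyer-Olkin measure
cortest.bartlett(cor(items), n = nrow(items))  # Bartlett's test of sphericity

# Tetrachoric correlations are appropriate for binary items
rho <- tetrachoric(items)$rho

# Three-factor EFA with oblique (promax) rotation
efa <- fa(r = rho, nfactors = 3, rotate = "promax", fm = "minres",
          n.obs = nrow(items))
print(efa$loadings, cutoff = 0.4)  # retain loadings > 0.4
round(efa$communality, 2)          # retain items with communality > 0.4

# Cronbach's alpha for the items loading on a given factor (illustrative subset)
alpha(items[, 1:6])$total$raw_alpha
```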
For fatal drug overdose, all 25 items were included in the exploratory factor analysis, while for non-fatal drug overdose events, the same exploratory factor analysis was repeated with the exclusion of items 9 to 11. Logistic regression was used to assess factors predicting fatal and non-fatal overdoses. In adjusted analysis models, age and sex were introduced as covariates. Risk was expressed as odds ratios (ORs) with 95% confidence intervals [95% CIs]. Alpha risk was set at 5%.
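The regression step can be sketched as follows (again with hypothetical data and variable names; the study used Stata): factor scores enter a logistic model with age and sex as covariates, and risk is expressed as an odds ratio with a 95% CI.

```r
# Hypothetical analysis data: factor scores, outcome, and covariates
set.seed(2)
d <- data.frame(fatal_od = rbinom(640, 1, 0.09),
                f1 = rnorm(640), f2 = rnorm(640), f3 = rnorm(640),
                age = rnorm(640, 42, 8),
                sex = factor(sample(c("F", "M"), 640, replace = TRUE)))

# Adjusted logistic model: latent factors predicting fatal overdose
m <- glm(fatal_od ~ f1 + f2 + f3 + age + sex, data = d, family = binomial)

# Odds ratios with Wald-type 95% confidence intervals (alpha risk = 5%)
exp(cbind(OR = coef(m), confint.default(m)))
```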
RESULTS

Demographics
Completed data from 640 participants were used for the current analysis. The average age of participants was 42.2 ± 0.3 years, and 30.2% of them were women. The mean Scottish Index of Multiple Deprivation (SIMD) score was 2.9. Of the participants, 8.6% (n = 55) died due to a fatal drug overdose (drug death), 38.2% experienced at least one non-fatal drug overdose across their life span, and 6.9% experienced a non-fatal drug overdose during the last year, while 2.2% experienced two or more non-fatal drug overdoses during the last year. All steps undertaken to develop and validate the questionnaire are reported in the Supplementary Material (Supplementary Figure S1).
Fatal Drug Overdose
EFA suggested a three-factor solution accounting together for 75% of the total variance.
Internal reliability: Overall, the questionnaire showed a questionable reliability level of 0.645. Subgroup analysis of Factors 1-3 (Table 1) showed a satisfactory level for the items belonging to the first factor (illicit drug use), while reliability was too low for the second (mental health and emotional trauma) and third (chronic drug use and overdose) factors.
Predictability of fatal drug overdose: Results displayed in Table 2 show that a one-unit increase in the Factor 2 (mental health and emotional trauma) score nearly doubles the risk of fatal drug overdose.
Non-Fatal Drug Overdose
EFA suggested a three-factor solution accounting together for 97% of the total variance.
Predictability of non-fatal drug overdose: According to Table 2, the regression analysis showed that all three factors are significantly associated with non-fatal drug overdose (ever), while only the first and third factors are significantly associated with experiencing a drug overdose during the past year. A one-unit increase in the Factor 1 (illicit drug use) score increases the risk of more than one overdose during the past year three-fold.
DISCUSSION

Summary and Questionnaire Validity
In this study, data from 640 patients were collected from the National Health Service (NHS) Fife Addiction Services using the OdRi questionnaire. This pilot study aimed to start identifying the quantitative weighting of risk factors for fatal and non-fatal drug overdoses. The exploratory factor analysis (on the tetrachoric correlation matrix) for fatal overdose identified three factors, namely, Factor 1, "illicit drug use and lack of treatment"; Factor 2, "mental health and emotional trauma"; and Factor 3, "chronic drug use and frequent overdose." A similar number of factors was identified for non-fatal overdose, but the mental health item loaded on a third factor along with drug use-related items. The overall questionnaire's (all items) internal consistency was questionable; however, after running the factor analysis, we found that the items of Factor 1 (in both the fatal and non-fatal overdose analyses) reached an acceptable value. Items of Factors 2 and 3 fell below the requirement for internal consistency, which could be attributed to the low number of items or to poor interrelatedness between items (Tavakol and Dennick, 2011). The internal consistency obtained for Factors 2 and 3 could also be attributed to construct heterogeneity. Indeed, differences in participants' characteristics may produce large interindividual variability and thus affect the homogeneity of measurement items (Tavakol and Dennick, 2011). However, in our study, we had very few measurements of individual characteristics; for example, the subjects' education level was not measured. Of note, the questionnaire's multidimensionality might contribute to the poor internal consistency of certain items (Tavakol and Dennick, 2011). Beyond that, internal consistency is proportional to the number of items, and a low item number might weaken the questionnaire's performance. Bernardes Santos et al. (Santos et al., 2009) indicated that combining scales assessing independent constructs might introduce bias into the interpretation of internal consistency.
Interpretation
This study showed that mental health factors were positive predictors of both fatal and non-fatal overdoses. In the available literature, individuals suffering from mental health problems have been reported to be more likely to experience drug abuse and, in turn, to have an increased risk of opioid overdose (Cicero and Ellis, 2017). Specifically, depression has been associated with fatal (Foley and Schwab-Reese, 2019) and non-fatal (Tobin and Latkin, 2003) overdoses. Noticeably, our results agree with a growing body of literature showing that early life stress is associated with both forms of overdose (Braitstein et al., 2003; Cutajar et al., 2010; Khoury et al., 2010; Lake et al., 2015). For example, a study of participants from two Canadian cohorts (n = 1,679) found that physical, sexual, and emotional abuse during childhood increased (1.5-fold) the risk of non-fatal overdose (Lake et al., 2015). These findings highlight the need to systematically screen for mental health problems and emotional trauma in order to predict fatal and non-fatal overdoses. While limited importance had been given to the mental health component in drug overdose questionnaires developed at the time of the study, Fendrich et al. (2019) suggested integrating validated questionnaires for mental health rather than introducing a few self-reported items, as in the study by Butler et al. (2008). Indeed, Fendrich et al. (2019) combined four validated scales, for depression (PHQ-9 questionnaire), severe anxiety (Beck Anxiety Inventory), post-traumatic stress disorder (Mini-International Neuropsychiatric Interview), and psychosis (Behavior and Symptom Identification Scale-24). They found that individuals with severe depression, post-traumatic stress disorder, or psychosis had an increased risk (2.5-fold) of experiencing a drug overdose during the previous 3 months.
In comparison to Factor 2, Factor 1, that is, "illicit drug use and lack of treatment," was found to be a predictor of both recent and lifetime non-fatal drug overdose. Individuals who are not stabilized, or have only just been stabilized, in a treatment program continue to experience drug overdose. Additionally, individuals who are integrated within a drug treatment program are also at risk of further non-fatal overdose because reduced individual tolerance increases susceptibility to overdose (Pollini et al., 2006). Moreover, multi-substance use may complicate the treatment and management of addiction.
Finally, it is worth mentioning that there was no significant association of age or gender with fatal or non-fatal overdoses.
Strengths and Limitations
The study relies on the OdRi questionnaire, which derives from an exhaustive literature review of risk factors for overdose. Indeed, the questionnaire gathers several factors related to overdose, including "individual," "situational," and/or "organizational" ones. Second, the large number of patients enrolled in this study increases the generalizability of its results. Finally, stringent criteria were used for the exploratory factor analysis and factor identification.
Our study has some limitations. The patients were not randomly selected, so no inference can be made to the general population of illicit drug and substance users in Scotland. Second, the self-reported data may introduce recall bias. Third, no validated scales were used for the assessment of specific aspects of mental health (i.e., depression and anxiety). Fourth, emotional trauma (in all its forms) might be underreported. Fifth, our study includes few potential confounders (i.e., age and sex); extension to others, such as socioeconomic level and family context, is therefore warranted. The analyses were conducted among patients from low-income areas, as mirrored by the mean Scottish Index of Multiple Deprivation; the strength of association between the constructs and overdose occurrence (both fatal and non-fatal) might therefore differ in high-income areas. Finally, the responses collected about health problems were subjective, as no clinical diagnosis was made. These data could have been enhanced by using tertiary data such as clinical notes and electronic portal systems.
Clinical and Public Health Relevance
The ultimate importance of this work lies in its potential to greatly enhance our current knowledge of the risk factors underlying drug overdoses. This is of utmost importance given that in Scotland, 1,339 drug-related death cases were identified in 2020 (National Records of Scotland, 2021), currently estimated to be the highest rate in Europe. Such information would help identify the individuals most at risk, facilitating more targeted and timely interventions and thereby saving lives. The understanding of the relative importance of the risk factors for suffering fatal and non-fatal drug overdoses gained in the present study is also fundamental to the development of an overdose risk assessment tool. This is one of the future directions of this line of research, should the study be successful in securing future funding. The data collection process would be continued in Fife in order to expand the sample size and obtain more reliable results. If successful, this process could be set up in other services and regions, expanding the sample size and the potential knowledge gain even further. Knowledge transfer and exchange to policy-makers, professionals, substance misuse treatment service users, the general public, families, and carers is an essential outcome of the proposed study, and the study team is very well placed to disseminate the study findings in their respective roles.
It will also be a unique opportunity to establish highly predictive algorithms which can be used to build user applications that are therapeutic in nature and empowering for the service user. It will help build on the work initiated by the EU-funded ORION project (http://orion-euproject.com/), which established a PC-based eHealth tool. This can be further developed using a mobile digital application platform.
CONCLUSION
Our study represents the first application of the OdRi questionnaire for the assessment of overdose risk factors. Further studies are needed to assess the questionnaire's reproducibility (test-retest approach) and internal consistency. Nevertheless, our study showed that mental health and life stress conditions increase the risk of fatal and non-fatal overdoses in an adult, treatment-seeking, drug-using cohort. Systematic screening for mental health problems and life stresses (including early life stress) should be encouraged in order to provide the necessary assistance for patients and to organize services that are trauma-informed. Further studies should be conducted to assess the different forms of mental health problems and their association with overdose. Along with mental health management, any intervention should promote other microlevel factors such as a healthy lifestyle (i.e., healthy diet and regular physical activity). Because of the health and economic burden of drug misuse, acting at the macrolevel is also necessary; indeed, more attention should be given to substance use through effective community-based prevention.
DATA AVAILABILITY STATEMENT
The data analyzed in this study are subject to the following licenses/restrictions: these data are the property of the University of Dundee (Health Informatics Centre); requests to access these datasets should be directed to https://www.dundee.ac.uk/hic/hicsafehaven/.
ETHICS STATEMENT
Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.
AUTHOR CONTRIBUTIONS
Conceptualization, AB; data handling and management, KA; formal statistical analysis, RD; visualization, all authors; writing-original draft preparation, RD and AB; writing-review and editing, all authors; supervision, AB; project administration, AB.
Prosocial Influence and Opportunistic Conformity in Adolescents and Young Adults
Adolescence is associated with heightened social influence, especially from peers. This can lead to detrimental decision-making in domains such as risky behavior but may also raise opportunities for prosocial behavior. We used an incentivized charitable-donations task to investigate how people revise decisions after learning about the donations of others and how this is affected by age (N = 220; age range = 11–35 years). Our results showed that the probability of social influence decreased with age within this age range. In addition, whereas previous research has suggested that adults are more likely to conform to the behavior of selfish others than to the behavior of prosocial others, here we observed no evidence of such an asymmetry in midadolescents. We discuss possible interpretations of these findings in relation to the social context of the task, the perceived value of money, and social decision-making across development.
In the present study, we investigated whether prosocial influence, that is, the influence exerted by the donation behavior of other people, is modulated by age and whether this further depends on the source of influence (i.e., whether one is influenced by peers, nonpeers, or a computer) and on the direction of influence (i.e., whether others are more or less prosocial than oneself).
Prosocial influence is observed in early childhood in humans (Schmidt & Tomasello, 2012) as well as in nonhuman primates (Berthier & Semple, 2018). Adolescents are no exception (Choukas-Bradley, Giletta, Cohen, & Prinstein, 2015;van Goethem, van Hoof, van Aken, Orobio de Castro, & Raaijmakers, 2014;Van Hoorn et al., 2016). For example, the mere presence of peers increased monetary contributions by adolescents in a public-goods game (Van Hoorn et al., 2016). Similarly, volunteering by adolescents has been found to be influenced by whether their peers also volunteer (Choukas-Bradley et al., 2015;van Goethem et al., 2014). It follows that prosocial influence is prevalent across ages, but less is known about whether, as in the domain of risk, prosocial influence is heightened during adolescence. One recent study showed that prosocial influence decreased linearly with age between early adolescence and adulthood (Foulkes et al., 2018). However, this study focused on hypothetical prosocial decisions, which can be unrelated to incentivized decisions (e.g., Böckler et al., 2016). It thus remains unclear whether age might similarly affect prosocial influence in decisionmaking that involves real monetary incentives.
In fact, these monetary incentives plausibly underlie a known opportunistic asymmetry in the way adults adapt their prosocial behavior to the prosocial behavior of others (Charness et al., 2019), leading them to conform to the behavior of others more when this aligns with their own material self-interest and, thus, to preferentially decrease rather than increase their prosocial behavior. For example, adults adjusted their contributions to a public good (i.e., a public radio station) more in line with other individuals' contributions when informed that others had contributed less than them, compared with when others had given more (Croson & Shang, 2008). Similarly, adults have been shown to conform more to antisocial relative to prosocial behavior (Dimant, 2019) and to align their trust-related decisions with others more when this allows them to earn more (Charness et al., 2019).
Finally, none of the studies above controlled for nonsocial-influence effects, which are known to have an impact on decision-making in adults. For example, adults adapted their decisions to those of a computer when making incentivized decisions (Moutoussis, Dolan, & Dayan, 2016), even when they were informed that the agent they were observing was simulated. This introduces the possibility that some of the previously reported social-influence effects might have been conflated with nonsocial-influence effects, such as more automatic or narrow forms of imitation (Nook et al., 2016), priming effects (Moutoussis et al., 2016), and anchoring effects (e.g., Wilson, Houston, Etling, & Brekke, 1996).
We aimed to fill these gaps by addressing the following hypotheses. First, we aimed to extend the age-dependent-influence hypothesis, previously observed in the domains of risk (Knoll, Leung, Foulkes, & Blakemore, 2017; Knoll et al., 2015) and hypothetical prosocial behavior (Foulkes et al., 2018), to situations in which prosocial behavior has real monetary costs. Second, we tested the peer-influence hypothesis by introducing a teenagers-versus-adults distinction in the source of influence, as in previous studies (Foulkes et al., 2018; Knoll et al., 2017; Knoll et al., 2015; Reiter et al., 2019; van Goethem et al., 2014). Third, we tested a social-influence hypothesis by comparing social influence from other people (teenagers or adults) with nonsocial influence from a computer (Moutoussis et al., 2016). Finally, we investigated a direction-of-influence hypothesis by assessing how participants responded when they learned that other people donated more or less than them (Knoll et al., 2017; Reiter et al., 2019).
Participants
Previous research suggests that developmental effects on decision-making during adolescence range between small and medium (Defoe, Dubas, Figner, & van Aken, 2015;Knoll et al., 2015;Reiter et al., 2019). In particular, previous studies employing a similar paradigm to the one used here (Knoll et al., 2015;Reiter et al., 2019) suggest that recruiting between 100 and 250 participants between early adolescence and early adulthood should suffice to detect these effects. We recruited 220 participants (106 female) between the ages of 11 and 35 years (see Table 1 for participant demographics; see also Section 1 in the Supplemental Material available online). Participants were divided into three age groups for comparability with previous research (Foulkes et al., 2018;Knoll et al., 2017;Knoll et al., 2015;Reiter et al., 2019): young adolescents (11-14 years), midadolescents (15-18 years), and adults (23-35 years). All analyses were additionally conducted using age as a continuous variable, to avoid any grouping criteria. Participants younger than 18 years were recruited through schoolwide announcements from teachers within participating schools. Group sessions took place in school computer rooms, and group size varied between 1 and 14 pupils per classroom. Each participant completed the task on an individual computer, and desks were sufficiently spaced apart so that participants would be unable to read the screens of the other participants. Participants older than 18 years were recruited through University College London's subject-pool recruitment system. Sessions took place in groups at the university's computer cubicles, and group size varied between 1 and 4. Adult participants and parents of participants younger than 18 years provided informed consent. All procedures were approved by University College London's ethics committee (Approval Code 3453/001). The study was not preregistered. Deidentified data, stimuli, and scripts are available on OSF at https://osf.io/3e9s6/.
Prosocial-influence task

To measure prosocial behavior, we adapted a charitable-donation task (e.g., Böckler et al., 2016) in which participants were allotted 50 tokens and asked to decide how many, if any, they wished to donate to a number of charities. Participants were informed that tokens had real monetary value, and consequently, prosocial behavior was costly. Specifically, we informed participants that one random charity would be selected at the end of the session and that any tokens not donated to that charity would be converted to money and paid to them. This occurred as stated. As in previous studies, participants were informed that tokens were worth money, but they were not informed about the exchange rate (e.g., Geier, Terwilliger, Teslovich, Velanova, & Luna, 2010). We did this to avoid selecting an exchange rate that may have been relevant to only a subsample of participants and to reduce the possibility that participants would mentally convert tokens into money, thus freeing working memory for the task. In addition, at the end of the experiment, we asked participants to provide a rating indicating how much money they thought a single token was worth. We observed no age differences in the guesses of such exchange rates (for details, see Section 2 in the Supplemental Material).
To investigate prosocial influence, we divided each of 36 donation trials into two phases (Fig. 1). In Phase 1, participants decided how many tokens to donate to each of 36 different charities, without knowing anything about how much other donors had given (we henceforth refer to these as first donations). In Phase 2, participants first observed how much other donors had given to the same charities and were then requested to donate again. There was no time limit to any decisions. Our main variable of interest was whether participants changed their donations from Phase 1 to Phase 2, adjusting them to the donations they observed. In particular, we investigated how the likelihood of prosocial influence was modulated by participants' age, by the source of influence, and by the direction of influence.
The observed donations could come from one of three supposed sources: the average donation of a group of adults, the average donation of a group of teenagers who had previously taken part in the study, or a randomly generated donation by a computer. These three source-related levels were included in blocks and presented in a counterbalanced order between participants.
As for the direction of influence, the specific donations that participants observed were generated according to an adaptive algorithm that was designed to balance the number of prosocial-influence trials, in which other donors had given more generously than the participant, and selfish-influence trials, in which other donors had given more selfishly than the participant. For each charity, they were given 50 tokens to allocate to that charity as they wished, knowing that their donation to one of the charities would be randomly selected at the end of the study, converted into money, and paid. Prosocial behavior was thus costly. In Phase 2, participants observed the average donation made by other donors (teenagers, adults, or a computer) to the same charities and were simultaneously reminded how much they had previously donated themselves. They were then requested to donate to the charity a second time.
These conditions were included to potentially induce more prosocial or more selfish behavior, respectively (Nook et al., 2016; Wei et al., 2016), and thus to assess possible age effects on opportunistic conformity (Charness et al., 2019; Croson & Shang, 2008; Dimant, 2019). Specifically, the observed donation was a random number in the relevant interval: observed donation ∈ [donation1 + 1, 45] for prosocial-influence trials and observed donation ∈ [5, donation1 − 1] for selfish-influence trials, where donation1 is the Phase 1 donation. These intervals were capped at 45 and 5, respectively, to avoid implausible observed donations. These capping rules were, however, relaxed when participants displayed skewed donations in Phase 1 (e.g., most donations at ceiling or floor in Phase 1; see Section 3 in the Supplemental Material).
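A minimal sketch of this adaptive generation rule is shown below (illustrative, not the authors' task code; the relaxation used for skewed Phase 1 donors is omitted):

```r
# Generate the 'observed' donation for one trial, given the Phase 1 donation
# (0-50 tokens) and the trial's direction of influence.
observed_donation <- function(donation1, direction = c("prosocial", "selfish"),
                              floor = 5, ceiling = 45) {
  direction <- match.arg(direction)
  if (direction == "prosocial") {
    low <- donation1 + 1; high <- ceiling  # others gave more than the participant
  } else {
    low <- floor; high <- donation1 - 1    # others gave less than the participant
  }
  # Clamp degenerate intervals (e.g., donation1 at floor/ceiling); the paper
  # instead relaxed the caps for participants with skewed Phase 1 donations.
  if (low > high) low <- high <- min(max(donation1, floor), ceiling)
  low + sample.int(high - low + 1, 1) - 1  # uniform draw from [low, high]
}

observed_donation(20, "prosocial")  # a value in 21:45
observed_donation(20, "selfish")    # a value in 5:19
```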
Stimuli. In both phases of the prosocial-influence task, participants indicated their donation by moving a slider on a bar with "0" and "50" written at the two extremes. A cursor on top of the bar indicated the current number of tokens, allowing participants to be precise if they wished to. The initial position of the cursor was always set to 25 to provide an unbiased anchor. First and second donations were probed in different phases to obtain an entirely unbiased estimate of participants' baseline donation behavior (i.e., to avoid any possible cross-item influence). In Phase 2, in addition to a bar indicating the observed donation of other donors, a second bar reminded participants of their own previous donation. This was done to avoid any confounding effects related to forgetting one's initial donation (Fig. 1). For each participant, the 36 charities were randomly selected from a set of 120 possible charities dedicated to social, health, or environmental missions (see Section 4 in the Supplemental Material). This variety of charities was adopted to decorrelate prosocial-influence effects from any particular charity contents. Each charity was represented by an image and a brief sentence (M = 47 characters, SD = 11) related to the charity's mission. The images were drawn from a number of online image platforms (e.g., Google images) and were all labeled "free for reuse." All stimuli are available on OSF at https://osf.io/3e9s6/. Stimuli never referred to charity names or logos, to reduce any political connotations or legal implications. The task was implemented on Gorilla (https://gorilla.sc/; Anwyl-Irvine, Massonnié, Flitton, Kirkham, & Evershed, 2020). It can be sampled at https://gorilla.sc/openmaterials/133819 and can be freely cloned.
Suspicion. After the prosocial-influence task, participants were probed for any suspicion of deception with a single open-ended question: "Did you feel that any aspect of the task was strange in any way? If so, can you briefly describe what seemed strange? If not, simply respond No." Nine participants (one young adolescent, two midadolescents, six adults) expressed potential doubts about the veracity of donating to charity or the fact that the observed donations really came from the stated sources. Results were qualitatively unaffected by the exclusion of these participants.
Abstract-reasoning task
To investigate effects of prosocial influence exclusive of potential interindividual or age differences in nonverbal reasoning abilities, we had participants take part in the Matrix Reasoning Item Bank (Chierchia et al., 2019). The task consists of a 3 × 3 matrix containing eight abstract shapes and a missing shape. Participants are asked to complete the pattern by clicking on the correct shape among four available alternatives. The proportion of correct choices was taken as a measure of nonverbal ability and thus used as an additional control. The task takes 8 min to complete.
Overall, the entire experimental session thus lasted around 35 min on average.
Statistical analysis
The analysis includes four dependent variables. We first analyzed donations in Phase 1 because they are relevant to a distinct literature on age, gender, and economic behavior (reviewed by Sutter, Zoller, & Glätzle-Rützler, 2019) and because they could potentially affect social influence. In fact, social influence is generally proportional to the distance between one's own baseline behavior (i.e., in this case, donations) and the decisions of others (Foulkes et al., 2018;Knoll et al., 2015;Moutoussis et al., 2016). In a follow-up analysis to participants' first donations, we assessed whether there were age effects in the difference (delta) between participants' first donations and the donations of other donors (Foulkes et al., 2018). We took the absolute value of this difference to obtain a more direct comparison of cases in which other donors gave more or less than participants (i.e., cases of prosocial influence vs. selfish influence).
Our central social-influence dependent variable was influence probability. To measure this, we first created a trial-level vector of 1s and 0s, where 0 indicated that no change in donation occurred between Phases 1 and 2 or that a change occurred but in the opposite direction of the observed donations, and 1 indicated that a change occurred in the direction of the observed donations. For a secondary dependent variable, to assess whether prosocial influence is associated with more deliberative or impulsive decision styles (Reiter et al., 2019), we investigated how response times (RTs) in Phase 2 varied as a function of whether or not participants were influenced and how this relation may change as a function of age and direction of influence. Finally, we analyzed influence magnitude, that is, the degree to which participants changed their donations in the direction of the observed donations between Phases 1 and 2. Specifically, following previous work (Foulkes et al., 2018;Knoll et al., 2017;Knoll et al., 2015;Reiter et al., 2019), we defined change as the amount donated in Phase 2 minus the amount donated in Phase 1. Then, all donation changes in the direction of the observed donations (i.e., conforming change) were transformed to positive (i.e., by taking the absolute value of change magnitude), whereas all changes in the opposite direction of the observed donations (i.e., anticonforming change) were taken as negative (i.e., by taking the absolute value of change and multiplying it by −1). Trials in which participants did not change their donations had a change value of 0. As main independent variables, the factors of the 2 × 3 × 3 design described above were used.
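The coding of the delta, influence-probability, and influence-magnitude variables can be sketched as follows (a sketch with illustrative names, not the authors' analysis code, which is available on OSF):

```r
# Trial-level coding of the social-influence variables from Phase 1 donations,
# Phase 2 donations, and the observed donations (equal-length vectors).
code_influence <- function(donation1, donation2, observed) {
  change  <- donation2 - donation1
  toward  <- sign(observed - donation1)  # direction of the observed donation
  conform <- change != 0 & sign(change) == toward

  list(
    delta                 = abs(observed - donation1),  # distance from the norm
    influence_probability = as.integer(conform),        # 1 = conforming change, else 0
    # conforming changes positive, anticonforming negative, no change = 0
    influence_magnitude   = ifelse(conform, abs(change),
                                   ifelse(change != 0, -abs(change), 0))
  )
}

code_influence(donation1 = c(10, 30, 25),
               donation2 = c(15, 30, 20),
               observed  = c(20, 40, 35))
```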
Raw trial-level data were modeled using generalized linear mixed models (GLMMs; Barr, Levy, Scheepers, & Tily, 2013) in the R programming environment (Version 3.4.1; R Core Team, 2017). Influence probability was modeled using the binomial distribution with a logit link function. RTs lower than 250 ms (23 out of 7,906) were excluded from the analysis (Reiter et al., 2019). Remaining RTs were modeled on the log scale because this better approximated a normal distribution and additionally resulted in a lower Akaike information criterion (AIC) during model estimation. The three-way interaction among the main factors described above and all lower-level interactions were included as fixed effects in all models. In the RT model only, we additionally included an influence term, a factor indicating whether or not the participant was influenced on the given trial (and all possible three-way interactions among this and the other factors of the model). In the influence-magnitude model only, following Foulkes and colleagues (2018), we additionally included the delta term (and all possible three-way interactions among this and the other factors of the model). Fixed effects for donations in Phase 1 included only age because the other factors did not apply. To obtain more parsimonious models, we progressively excluded nonsignificant higher-level interactions via nested model comparison. All models clustered data by subject (i.e., as a random intercept) and additionally included maximal random slopes for the within-subjects factors (Barr et al., 2013) as random effects.
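In lme4 syntax, the influence-probability model might look as follows (a sketch consistent with the description above; the simulated data and exact formula are illustrative, since the authors' actual scripts are on OSF):

```r
library(lme4)
library(car)      # Anova() for omnibus Type III Wald chi-square tests
library(emmeans)  # Bonferroni-corrected contrasts

# Simulated trial-level data standing in for the real dataset
set.seed(3)
n_sub <- 60; n_trial <- 36; n <- n_sub * n_trial
trials <- data.frame(
  subject    = factor(rep(seq_len(n_sub), each = n_trial)),
  age_group  = factor(rep(sample(c("young", "mid", "adult"), n_sub, TRUE),
                          each = n_trial)),
  source     = factor(sample(c("teenagers", "adults", "computer"), n, TRUE)),
  direction  = factor(sample(c("prosocial", "selfish"), n, TRUE)),
  influenced = rbinom(n, 1, 0.33)
)

# Binomial GLMM with logit link: by-subject random intercept plus random
# slopes for the within-subjects factors (source and direction)
m_prob <- glmer(influenced ~ age_group * source * direction +
                  (1 + source + direction | subject),
                data = trials, family = binomial(link = "logit"))

Anova(m_prob, type = "III")                                   # omnibus tests
emmeans(m_prob, pairwise ~ age_group, adjust = "bonferroni")  # post hoc contrasts
```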
We modeled age as both categorical and continuous. When treating age as continuous, we first compared different curve-fitting regressions-linear, quadratic, and cubic (and combinations thereof) as well as inverse of age (1/age), logarithmic, and exponential (Luna, Garver, Urban, Lazar, & Sweeney, 2004)-in simpler models predicting the dependent variables of interest with the single independent variable (i.e., age alone). We then selected the trend or trends yielding the lowest AIC (to account for potential differences in the number of parameters) and forwarded this to the same models described above. For influence probability, the inverse of age had the lowest AIC. For the log of RTs, first donations, and influence magnitude, the lowest AIC was obtained by including linear, quadratic, and cubic components of age. For RTs and influence magnitude, but not first donations, the cubic component did not significantly contribute to the model fit and was thus discarded during model reduction. Polynomials were orthogonalized to eliminate multicollinearity. Main effects and interactions of the best-fitting models were inspected using omnibus Type III Wald χ2 tests. Planned and post hoc comparisons were performed using the emmeans package (Version 1.3.0; Lenth, Singmann, Love, Buerkner, & Herve, 2018) and Bonferroni-corrected for multiple comparisons.
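The continuous-age model comparison can be sketched like this (simulated data; the candidate trends follow those listed above, and names are illustrative):

```r
# Compare candidate age trends for influence probability via AIC
set.seed(4)
d <- data.frame(age = runif(2000, 11, 35))
d$influenced <- rbinom(2000, 1, plogis(-1.5 + 20 / d$age))  # built-in inverse-age trend

m_lin  <- glm(influenced ~ age,          data = d, family = binomial)
m_poly <- glm(influenced ~ poly(age, 3), data = d, family = binomial)  # orthogonalized polynomials
m_inv  <- glm(influenced ~ I(1 / age),   data = d, family = binomial)
m_log  <- glm(influenced ~ log(age),     data = d, family = binomial)

AIC(m_lin, m_poly, m_inv, m_log)  # the lowest-AIC trend is carried into the full model
```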
We call the models described above reference models because they focused exclusively on the main variables of our experimental design. For each reference model, a number of additional control models probed the robustness of the findings to other potentially relevant factors. For example, given that adolescents and adults differ in a wide range of behaviors (Blakemore, 2018;Defoe et al., 2015;Reniers et al., 2017), these can introduce baseline differences in studies on social influence (given that social influence is frequently measured as a change in behavior relative to some baseline; Knoll et al., 2015;Moutoussis et al., 2016;Reiter et al., 2019). Therefore, to account for potential differences in baseline donations (either age-dependent differences or skewed patterns of baseline donations in general, such as participants who never donated anything) and the imbalance in delta that might have resulted from this, we added to one model a regressor for donations during Phase 1 and another for the delta term. A different control model was used to control for nonsocial-influence effects and thus to isolate influence effects that are not entirely explained by nonsocial processes. This control model focused on noncomputer trials only and included an additional regressor related to the degree of influence displayed by participants on computer trials. For influence probability, this regressor was the proportion of trials in which participants had been influenced in the computer condition. For influence magnitude, it was the mean influence magnitude displayed in the computer conditions. Finally, to control for response variability, we coded responses as 1 if participants conformed, as 0 if they did not change, and as −1 when they anticonformed. We then took the participant-level variance of this vector as a measure of response variability. We used this variance measure as a covariate to assess whether age-related decreases in conformity were still observed after controlling for age-related decreases in response variance.
For reasons of space, we provide the results of each of these models only for influence probability in the manuscript (Table 2), whereas the same control models for this and the other dependent variables can be found in Tables S10 through S17 in the Supplemental Material. Those supplemental tables also show the results of control models controlling for a number of other factors. For example, to account for potential gender differences in pubertal onset, one such control model controlled for gender and its interaction with age, and other models accounted for abstract-reasoning performance, group size, block order, and the guess of the token-pound exchange rate, among others. For RTs only, a control model additionally controlled for RTs of donations to the same charity during Phase 1 (for details on each control model, see Section 5 in the Supplemental Material).
In addition to these control models, we further ran a number of additional reduced models, which probed the robustness of results when various exclusion criteria were adopted (see Section 5). Among the latter, we assessed whether the omnibus effects of interest remained significant when we excluded participants who displayed skewed decision patterns (e.g., floor or ceiling effects) in the donations of Phase 1. At the request of a reviewer, we also ran an exploratory reduced model that included data from the adolescents only (adults were excluded; see Section 6 in the Supplemental Material).
All of the significant omnibus results reported below were robust to all such control models and reduced models unless otherwise noted (see Section 5). Data and scripts are available on OSF at https://osf.io/3e9s6/.
Thirty-nine participants displayed a skewed pattern of first donations. For these participants, it was not possible to generate an equal number of prosocial and selfish influence trials: One participant always donated the maximum amount of 50 tokens, five participants always donated 0, nine participants were skewed toward the maximum (i.e., they donated the maximum amount in more than half of the trials), and 24 participants were skewed toward the minimum (i.e., they donated 0 in more than half of the trials). Our reduced models showed that all significant results reported in the study were robust to the exclusion of these participants.
Prosocial-influence manipulation checks
After observing the amount of tokens given by other donors and being reminded of their own previous donation to a given charity, participants changed their donations in 43% of trials. In such cases, 76% of adjustments (2,624 of 3,433 trials) moved in the direction of the observed donations (i.e., consistent with a social-influence effect), whereas the complementary percentage (24%) moved in the opposite direction (i.e., anticonforming choices). Supplemental analyses further showed that age trends in anticonforming probability were entirely explained by interindividual differences in response variance, whereas age trends in conforming probability were not (see Section 8 in the Supplemental Material).
Exact binomial tests confirmed that these frequencies significantly differ from a uniform distribution (i.e., rejecting the null hypothesis that conforming and anticonforming adjustments occurred with equal probability; frequency of conforming adjustments = 0.76, 95% confidence interval, or CI = [0.75, 0.78], p < .001). This was also true when we inspected each of the 2 × 3 × 3 cells of our experimental design (all p_Bonf values < .001) as well as for the computer conditions.
Similarly, inspecting the size of donation changes (averaged at the participant level), we found that t tests against 0 showed that the average change in donation was positive and statistically different from 0 (influence magnitude = 1.78, 95% CI = [1.48, 2.09], p < .001). This indicates that social-influence magnitude was on average larger when participants adjusted their donations toward as opposed to away from the donations they observed. With four exceptions, this too held (all p_Bonf values < .035) for each cell of our experimental design. Three exceptions were in the computer condition: Influence magnitude in midadolescents was not significantly different from 0 under both prosocial and selfish influence, whereas the same held for young adolescents under selfish influence only. The fourth exception was in the adult group under selfish influence from teenagers. Taken together, these results suggest that participants' second donations were reliably influenced by the donations they observed, in terms of both influence probability and influence magnitude.

Finally, we inspected the relation between social influence and one's distance from the observed norms (i.e., Δ). For example, suppose participant i donated five tokens to a given charity at baseline and subsequently observed one of two norms: In one case, i observed other donors giving seven tokens to the same charity (thus, Δ = 2), whereas in another case, i observed that other donors gave 15 tokens to the same charity (thus, Δ = 10). It seems plausible that the second case may lead i to adjust his or her donation more than the first. In fact, previous studies have consistently reported that such a positive linear relation exists (e.g., Foulkes et al., 2018; Knoll et al., 2017; Knoll et al., 2015; Moutoussis et al., 2016). However, other studies have shown that there are boundary conditions to this linear social-influence effect (e.g., Shang & Croson, 2009). In particular, if deltas are very small or very large, participants may deem them irrelevant, and this may result in diminished social influence. If so, this may result in a quadratic relationship rather than a linear one. Our study adaptively capped the observed donations to avoid such extreme and irrelevant deltas. In addition, to assess whether this sufficed to isolate a linear relation between social influence and deltas, we fitted the social-influence variables (both influence probability and magnitude) to polynomial functions of delta (up to the fourth degree included) using mixed models and then compared these models using AICs. The model with the best fit was linear. This was true at the full-sample level, as well as for each of the six possible subsample combinations of age groups and direction of influence, for both influence probability and influence magnitude. More specifically, in each of these cases, the linear term was always significant and positive (influence magnitude: all slopes > 0.4, all ps < .001; influence probability: all slopes > 0.14, all ps < .050).
Influence probability. A GLMM revealed a significant main effect of source on influence probability (the probability of changing one's donation between Phase 1 and Phase 2 in the direction of the observed donation), χ2(2) = 39.48, p < .001 (Fig. 3). Planned contrasts showed that participants were more likely to be influenced by other people than by the computer (teenagers − computer: contrast = 0.48, SE = 0.08, p_Bonf < .001; adults − computer: contrast = 0.43, SE = 0.08, p_Bonf < .001; for all contrasts, see Table S2 in the Supplemental Material). There was no significant interaction between source and any of the other factors (ps > .10). The model also revealed a significant impact of age on influence probability, χ2(2) = 16.02, p < .001. Contrasts showed that young adolescents were more likely to be influenced than adults and midadolescents (young adolescents − midadolescents: contrast = 0.63, SE = 0.18, p_Bonf < .001; young adolescents − adults: contrast = 0.67, SE = 0.19, p_Bonf = .002; for all contrasts, see Table S3 in the Supplemental Material), whereas midadolescents and adults did not significantly differ in this respect (Fig. 4, top panel). The effect of age was also reliably observed when models used age as a continuous variable, χ2(1) = 20.31, p < .001: There was a linear relation between the inverse of age and influence probability (slope = 23.21, SE = 5.15, p < .001; Fig. 4, bottom panel).

[Fig. 3 caption: Within each source type, the black squares represent the fixed-effects estimates of influence probability from the trial-level generalized (logistic) linear mixed model, and error bars show the corresponding 95% confidence intervals. Asterisks indicate significant differences between sources of influence (***p < .001, Bonferroni corrected). For statistics of all contrasts, see Table S2 in the Supplemental Material available online.]
The effect of age was marginally modulated by the direction of influence, χ2(2) = 5.04, p = .080 (Fig. 5, top panel): Under prosocial influence, influence probability was higher for young adolescents than for adults (young adolescents − adults: contrast = 1.05, SE = 0.27, p_Bonf < .001; for all contrasts, see Table S4a in the Supplemental Material) and marginally higher than for midadolescents (young adolescents − midadolescents: contrast = 0.65, SE = 0.25, p_Bonf = .061), whereas this was not the case under selfish influence, where influence probability did not differ between age groups (all p_Bonf values > .110). The interaction between age and direction was significant when the inverse of age was taken as a continuous predictor, χ2(1) = 3.95, p = .047 (Fig. 5, bottom panel). Contrasts suggested that the inverse of age decreased influence probability to a greater extent for prosocial influence, relative to selfish influence (prosocial − selfish: estimate = 13.87, SE = 6.98, p = .047). In line with this, post hoc analyses indicated that the linear trend of the inverse of age was present under prosocial influence (slope = 23.21, SE = 5.15, p_Bonf < .001) but not selfish influence (slope = 9.34, SE = 6.01, p_Bonf = .120).
To assess an effect of age on opportunistic conformity, we ran a separate set of within-age-group contrasts comparing influence probability under prosocial and selfish influence. These showed that adults were more likely to be influenced by other donors when others had given less than them, rather than more (prosocialselfish: estimate = −0.75, SE = 0.28, p Bonf = .025; for all contrasts, see Table S4b). This was not the case for the two adolescent age groups, whose donations were equally likely to conform to those of other donors, regardless of the direction of influence (p Bonf s = 1). The significant omnibus effects reported above were robust to all control models. In particular, they remained significant when models controlled for participants' first donations. Thus, even though first donations differed between age groups (Table 2), they did not cancel out the age differences in influence probability. Similarly, although the amount of influence exerted on participants (i.e., the Δ) also differed between age groups and robustly predicted influence probability, controlling for this did not cancel out the effects of age on influence probability (Table 2). It should also be noted that because deltas are positively associated with social influence (Foulkes et al., 2018;Knoll et al., 2015;Moutoussis et al., 2016), the particular pattern of age differences in deltas in our data would predict that adults Influence Probability * * * * * * * * Fig. 4. Effect of age on influence probability. In the top graph, age is treated as a categorical variable. Dots are the frequencies of trials (%) in which participants changed their donations and conformed them with those of other donors. The violin plots represent kernel probability density of the data at different values (randomly jittered across the x-axis). Within each age group, the black squares represent the fixed-effects estimates of influence probability from the trial-level generalized (logistic) linear mixed model, and error bars show the corresponding 95% confidence intervals. Asterisks indicate significant differences between groups (**p < .01, ***p < .001, Bonferroni corrected). For statistics of all contrasts, see would be more influenced than adolescents toward prosocial behavior and less influenced toward selfish behavior-the opposite pattern from that observed.
Thus, it is highly unlikely that the age effects on social influence reported above were due to age differences in baseline donations or deltas.

[Fig. 5. Interaction between age group and direction of influence (prosocial vs. selfish) on influence probability. Top: influence probability by age group (categorical) for each direction of influence, with violin plots of trial frequencies, fixed-effects estimates from the trial-level generalized (logistic) linear mixed model, and 95% confidence intervals († p < .10, *p < .05, ***p < .001, Bonferroni corrected; for statistics of all contrasts, see Tables S4a and S4b in the Supplemental Material). Bottom: influence probability as a function of age (continuous) and direction of influence; circles are grand means with area proportional to the number of participants, colored lines show the model-estimated trends for the inverse of age, and shaded areas are 95% confidence intervals (***p < .001).]
Importantly, another control model showed that the significant age effects remained significant when models controlled for nonsocial influence (i.e., the proportion of trials in which participants had been influenced in the computer condition), suggesting that the age effects on prosocial influence are not entirely explained by nonsocial anchoring effects. They were also robust when models controlled for response variability. In particular, even though response variance significantly contributed to the probability of conforming, the age effects on conformity were not entirely explained by this (Table 2).
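For readers who want to reproduce this style of analysis, a minimal sketch of a trial-level logistic mixed model follows. This is not the authors' analysis code: the column names and file name are hypothetical, the original models presumably used different software and a richer random-effects structure, and the variational-Bayes fit shown here is just one possible estimator.

    import pandas as pd
    from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

    # Assumed long-format trial data with hypothetical columns:
    # influenced (0/1), age_group, direction (prosocial/selfish),
    # source (teenagers/adults/computer), subject.
    df = pd.read_csv("trial_data.csv")  # hypothetical file name

    # Fixed effects: age group x direction of influence, plus source;
    # random subject intercepts enter as a variance component.
    model = BinomialBayesMixedGLM.from_formula(
        "influenced ~ C(age_group) * C(direction) + C(source)",
        vc_formulas={"subject": "0 + C(subject)"},
        data=df,
    )
    result = model.fit_vb()  # variational Bayes fit
    print(result.summary())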
RTs.
A GLMM on the log of RTs during participants' donations in Phase 2 revealed a main effect of the influence term (i.e., whether or not participants changed donation in the direction of the observed donation), χ 2 (1) = 8.08, p = .004: Contrasts showed that participants took longer to reach a decision when they adjusted their donations to the observed donation, relative to when they did not (influenced - not influenced: contrast = 0.15, SE = 0.02, p < .001). This effect was further modulated by age, as demonstrated by a significant three-way interaction among the influence term, participant age group, and the direction of influence, χ 2 (2) = 16.31, p < .001 (Fig. 6, top left panel). Contrasts suggested that adults took less time when resisting prosocial influence, relative to both young adolescents and midadolescents (young adolescents - adults: contrast = 0.17, SE = 0.05, p Bonf = .013; midadolescents - adults: contrast = 0.22, SE = 0.05, p Bonf < .001; for all contrasts, see Table S5a in the Supplemental Material). Furthermore, consistent with an opportunistic conformity effect, a separate set of within-age-group contrasts showed that adults were the only age group that took less time to resist prosocial influence than selfish influence (prosocial - selfish: contrast = −0.18, SE = 0.04, p Bonf < .001; for all contrasts, see Table S5b in the Supplemental Material).
When we investigated age as a continuous variable, we found two three-way interactions showing that both linear and quadratic components of age interacted with the direction of influence and the influence term (linear: χ 2 (1) = 16.32, p < .001; quadratic: χ 2 (1) = 13.78, p < .001) (Fig. 6, bottom panels). Contrasts demonstrated that under prosocial influence, RTs linearly decreased with age to a greater extent when participants were not influenced than when they were influenced (influenced - not influenced: contrast = 6.80, SE = 2.01, p Bonf = .003), whereas under selfish influence, RTs were quadratically associated with age to a greater extent when participants were influenced as opposed to not influenced (influenced - not influenced: contrast = −5.21, SE = 1.96, p Bonf = .031; for all contrasts, see Table S6 in the Supplemental Material). The dashed lines of Fig. 6 (bottom panel) highlight the components that interacted with the influence term: When participants were not influenced by more generous others, age was linearly associated with decreased RTs (slope = −7.33, SE = 1.81, p Bonf < .001); when participants were influenced by more selfish others, age was quadratically associated with RTs, peaking between mid- and late adolescence (slope = −4.90, SE = 2.22, p Bonf = .027).
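A corresponding sketch for the RT model, under the same caveats (hypothetical column and file names; subject-level random intercepts only):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("trial_data.csv")  # hypothetical file name
    df["log_rt"] = np.log(df["rt"])     # model the log of response times

    # Influence term x age group x direction of influence, with random
    # intercepts grouped by subject.
    lmm = smf.mixedlm(
        "log_rt ~ C(influenced) * C(age_group) * C(direction)",
        data=df,
        groups=df["subject"],
    ).fit()
    print(lmm.summary())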
Influence magnitude. Influence magnitude was measured as the degree to which participants changed their donation in the direction of the observed donations. A linear mixed model showed a main effect of source, χ 2 (2) = 6.40, p = .041, suggesting that, as for influence probability, influence magnitude was adapted more to other people than to computers (teenagers - computer: contrast = 0.86, SE = 0.19, p Bonf < .001; adults - computer: contrast = 0.64, SE = 0.15, p Bonf < .001; for all contrasts, see Table S7 in the Supplemental Material). On the other hand, the model showed no main effect of age on influence magnitude, χ 2 (2) = 1.30, p = .521. Instead, it showed that age (and direction of influence) modulated the extent to which the delta affected subsequent adjustments (in a three-way interaction among delta, age group, and direction), χ 2 (2) = 16.56, p < .001 (Fig. 7). In other words, age and direction of influence affected slope differences in the positive relation between delta and influence magnitude. Specifically, contrasts showed that such slopes were greater when adults decreased their donations to comply with observed (selfish) norms, rather than increasing them (prosocial - selfish: slope = −0.12, SE = 0.03, p Bonf < .001), whereas this distinction was only marginal or absent in both the young adolescent (prosocial - selfish: slope = −0.05, SE = 0.02, p Bonf = .055) and midadolescent (prosocial - selfish: slope = 0.01, SE = 0.02, p Bonf = 1) age groups. As for influence probability, this finding is consistent with the notion that adults but not adolescents display opportunistic conformity: changing their donations to a greater extent when other donors had given less than them, compared with when they had given more. Moreover, contrasts comparing the slopes between age groups showed that, under selfish influence, slopes were smallest for midadolescents, relative to either of the other two age groups (young adolescents - midadolescents: slope = 0.07, SE = 0.03, p Bonf = .042; midadolescents - adults: slope = −0.10, SE = 0.03, p Bonf = .008; for other contrasts, see Table S8 in the Supplemental Material), whereas slopes did not differ between age groups under prosocial influence (p Bonf s = 1). The model also revealed a marginal three-way interaction among age group, direction, and source, χ 2 (4) = 9.26, p = .055, because of adults' higher susceptibility to being influenced by a computer under selfish influence, relative to other age groups. However, this effect broke down, χ 2 (4) = 4.27, p = .370, when we removed extreme values (< 1% of the data) from the model and thus will not be discussed further.

Similar effects were observed when age was used as a continuous variable. A linear mixed model showed a reliable main effect of source, χ 2 (2) = 26.74, p < .001, and a significant trend for the delta term of the model, χ 2 (1) = 36.39, p < .001: Participants were more influenced by other people (i.e., either teenagers or adults) than by the random computer (teenagers - computer: contrast = 0.90, SE = 0.19, p Bonf < .001; adults - computer: contrast = 0.64, SE = 0.15, p Bonf < .001; for all contrasts, see the Supplemental Material), and greater distance from the norms (i.e., the Δ) predicted increased magnitude of donation adjustment (estimate = 0.85, SE = 0.14, p < .001). As for the categorical analysis of age, there was no linear effect of age on influence magnitude, χ 2 (1) = 0, p = .947, but there was a three-way interaction among delta, direction of influence, and age (both linear and quadratic components of age), linear: χ 2 (1) = 8.74, p = .003; quadratic: χ 2 (1) = 5.49, p = .019. Post hoc analyses suggested that the linear but not quadratic component of age marginally interacted with deltas under prosocial influence, χ 2 (1) = 3.80, p = .051: The linear component of age marginally decreased the effect of deltas on change in donations under prosocial influence (slope = −20.62, SE = 10.58, p = .053), whereas it increased it under selfish influence (slope = 44.22, SE = 18.70, p = .019), hence the interaction. On the other hand, the quadratic component of age was marginally associated with influence magnitude under selfish influence (slope = 32.96, SE = 18.21, p = .072) but not under prosocial influence (slope = −1.11, SE = 10.60, p = .917).

[Fig. 6. Interaction of age group, direction of influence (prosocial vs. selfish), and influence (influenced vs. not influenced) on response times (RTs). Top: age as a categorical variable; colored squares are fixed-effects estimates of RTs from the trial-level linear mixed model, with 95% confidence intervals († p < .10, *p < .05, ***p < .001, Bonferroni corrected; for statistics of all contrasts, see Tables S5a and S5b in the Supplemental Material). Bottom: age as a continuous variable; circles are grand medians of RTs with area proportional to the number of participants, lines show polynomial trends of age estimated by a trial-level generalized linear mixed model on the log of RTs (back-transformed to the response scale), shaded areas are 95% confidence intervals, and dashed lines mark significant trend components (*p < .05, ***p < .001; for statistics of all trend contrasts, see Table S6 in the Supplemental Material).]
Discussion
The current study showed that the probability of social influence decreased between early adolescence and adulthood, independently of the prosocial or selfish direction of influence. This age-dependent social-influence effect might have been due to adolescents being more uncertain than adults about their decisions and thus relying more on other donors to inform their choices (e.g., Moutoussis et al., 2016) or being motivated by a greater need to fit in with others. Our finding that heightened social influence was associated with increased RTs (also see Reiter et al., 2019) argues against the notion that heightened social influence is due to impulsive or reactive decision-making.
In addition, we found that participants of all ages were equally influenced by peers and nonpeers. Notably, the peer-matching procedure used in this study (showing adolescents how much other adolescents had donated, as opposed to how much adults had donated) was the same as the one used in three previous studies on risk perception or risky decisions, as well as in the study by Foulkes and colleagues (2018), which focused on hypothetical prosocial behavior. Despite employing the same peer manipulation, the studies involving risk showed that adolescents are more influenced by other teenagers than by adults (Knoll et al., 2017; Knoll et al., 2015; Reiter et al., 2019), whereas here, as well as in the other study on prosocial influence (Foulkes et al., 2018), adolescents were equally influenced by teenagers and by adults. These results suggest that peer influence during adolescence (namely, heightened susceptibility to social influence for peers relative to nonpeers) is domain dependent and that peer influence might play a greater role in the domain of risk than prosocial behavior.
Our results showed that, even if decisions were costly, and even if participants were explicitly reminded (at the single-trial level) that some of the numbers they were observing were generated by a random computer, participants still aligned their decisions with those numbers. This corroborates the notion that anchoring effects are highly resistant, even to explicit reminders of their irrelevance (Wilson et al., 1996). Importantly, however, our control condition also allowed us to detect variance in social influence that was not explained by nonsocial influence, in that participants were more influenced by other people than by the computer. In fact, when focusing on social-influence trials (i.e., not computer trials), we still observed the reported effects of age on social influence, also controlling for the nonsocial influence. This suggests that social-influence effects are not entirely explained by nonsocial processes.

[Fig. 7. Interaction of age group, distance from norm, and direction of influence (prosocial vs. selfish) on influence magnitude. Slopes quantify the association between the distance from the norm and the subsequent magnitude of conforming donation change; bars are fixed-effects estimates of slopes from the trial-level linear mixed model, with 95% confidence intervals; symbols indicate significant and marginally significant differences between groups and within groups († p < .10, *p < .05, **p < .01, ***p < .001, Bonferroni corrected; for statistics of all contrasts, see Table S8 in the Supplemental Material).]
Finally, opportunistic conformity (the tendency to conform with selfish norms more than prosocial norms) has been observed in a number of previous studies in adults (Charness et al., 2019; Croson & Shang, 2008; Dimant, 2019) as well as in the adult sample analyzed here (see Section 9 in the Supplemental Material). However, we found that midadolescents displayed no signs of a directional asymmetry in social influence. This age effect of opportunistic conformity was unanticipated and would need to be replicated. That said, the finding fits with those of previous studies in this area. For example, recent reviews have suggested that prosocial preferences plateau or peak during adolescence (Sutter et al., 2019; Van der Graaff et al., 2018), and prosocial preferences have been suggested to modulate opportunistic conformity (Wei et al., 2016). In line with this, our results showed that baseline donations, which are related to prosocial preferences (Böckler et al., 2016), were greater in adolescents relative to adults and that adolescents simultaneously displayed reduced opportunistic conformity. In addition, negative affect is reported to be heightened in midadolescents relative to young adolescents (Larson, Moneta, Richards, & Wilson, 2002), and self-conscious emotions peak during mid to late adolescence (Somerville et al., 2013). This might amplify negative feelings associated with selfish-norm compliance. For example, it could heighten guilt aversion, which is a frequent motive for prosocial behavior in adults (Battigalli & Dufwenberg, 2007), children (Hoffman, 1998), and adolescents (Roos, Hodges, & Salmivalli, 2014). We thus speculate that potential age differences in prosocial preferences, coupled with a heightened sensitivity toward guilt, may contribute to a more unbiased weighting of prosocial and selfish influence in midadolescents.
Limitations
Because of restricted logistical control in school recruitment and testing, adolescent participants were tested in different experimental settings from adults: Their testing environment was more familiar (given that they were tested in their own schools), and they were tested in larger groups than adults. We cannot exclude that such different experimental settings may partly explain the differences observed between adolescents and adults. For example, the larger groups in which adolescents took part may have increased the social salience of the stimuli used in our task. Importantly, however, our control analyses showed that the probability of social influence (and the magnitude of selfish influence) also differed between young adolescents and midadolescents, even in the absence of such differences in experimental settings. This suggests that even if experimental settings partly modulated the results, they are not the only mechanism at play. It would be important in future studies to match testing group size and experimental settings.
Second, our study did not control for age differences in the utility of money. Previous studies have suggested that the value of money could decline with age (e.g., Fehr, Glätzle-Rützler, & Sutter, 2013), and if this extended to our participants, it is unlikely to explain our results: If the value of money declines with age and this was the sole factor driving the results (i.e., no age differences in willingness to conform), younger individuals would be less susceptible to (costly) prosocial influence, whereas our results showed the opposite. However, our findings are consistent with the opposite pattern, namely, that adolescents are less incentivized by money than adults. This would fit with the general framework of our proposal that adolescence is a period of social reorientation, during which social concerns (e.g., to fit in or learn from other people) might crowd out other factors such as monetary concerns (Gneezy, Meier, & Rey-Biel, 2011). In future studies, researchers should aim to control for this by assessing a task-independent measure of the perceived value of money at different ages.
Conclusion
Our study suggests that heightened social influence during adolescence is not only a source of vulnerability but also one of opportunity (Van Hoorn et al., 2016), one in which heightened social concerns could be harnessed to modulate prosocial behavior. For example, one such intervention reported that public endorsement of anticonflict (e.g., antibullying) values by referent students reduced reports of school conflict by around 25% in 1 year, relative to control schools (Paluck, Shepherd, & Aronow, 2016). Finally, we found that for both adolescents and young adults, social anchors are more effective at modulating prosocial behavior than nonsocial anchors. This provides novel insight into the notion that social-norm-based interventions are a particularly effective device in promoting cooperation in the field (Kraft-Todd et al., 2015).
Transparency
Action Editor: Erika E. Forbes
Editor: D. Stephen Lindsay
Author Contributions
S.-J. Blakemore wrote the initial grant application. S.-J. Blakemore and G. Chierchia designed the study. G. Chierchia and B. Piera Pi-Sunyer conducted testing and data collection. G. Chierchia and B. Piera Pi-Sunyer analyzed and interpreted the data under the supervision of S.-J. Blakemore. All the authors contributed to the writing of the manuscript and approved the final manuscript for submission.
Declaration of Conflicting Interests
The author(s) declared that there were no conflicts of interest with respect to the authorship or the publication of this article.
Funding
This study was funded by a Jacobs Foundation Research Prize to S.-J. Blakemore, Wellcome Trust Grant 104908/Z/14/Z, a Royal Society Fellowship, and the Nuffield Foundation.
Open Practices
Deidentified data, stimuli, and scripts have been made publicly available via OSF and can be accessed at https://osf.io/3e9s6. The design and analysis plans for the study were not preregistered. This article has received the badges for Open Data and Open Materials. More information about the Open Practices badges can be found at http://www.psychologicalscience.org/publications/badges.
Utilizing Iron as Reinforcement to Enhance Ambient Mechanical Response and Impression Creep Response of Magnesium
To realize light-weight materials with high strength and ductility, an effective route is to incorporate strong and stiff metallic elements in light-weight matrices. Based on this approach, in this work, magnesium–iron (Mg-Fe) composites were designed and characterized for their microstructure and mechanical properties. The Mg-Fe binary system has extremely low solubility of Fe in the Mg-rich region. Pure magnesium was incorporated with 5, 10, and 15 wt.% Fe particles to form Mg-Fe metal–metal composites by the disintegrated melt deposition technique, followed by hot extrusion. Results showed that the iron content influences (i) the distribution of Fe particles in the Mg matrix, (ii) grain refinement, and (iii) change in crystallographic orientation. Mechanical testing showed that amongst the composites, Mg-5Fe had the highest hardness, strength, and ductility due to (a) the uniform distribution of Fe particles in the Mg matrix, (b) grain refinement, (c) texture randomization, (d) Fe particles acting as effective reinforcement, and (e) absence of deleterious interfacial reactions. Under impression creep, the Mg-5Fe composite had a creep rate similar to those of commercial creep-resistant AE42 alloys and Mg ceramic composites at 473 K. Factors influencing the performance of Mg-5Fe and other Mg metal–metal composites having molybdenum, niobium, and titanium (elements with low solubility in Mg) are presented and discussed.
Introduction
Magnesium (Mg) alloys and Mg composites are prospective materials for the automotive and aerospace industries [1,2], as they can significantly reduce the weight of structures due to their low density and high strength-to-weight ratio. Al, Zn, Zr, Mn, and rare earth metals are the elements commonly used for alloying Mg. To make Mg composites, usually ceramic particles (micron to nano-scale sizes) such as alumina, silicon carbide, boron carbide, boron nitride, and carbonaceous materials (e.g., carbon nanotubes and graphene) are incorporated as reinforcement in Mg matrices [3,4]. To achieve both high strength and ductility, metals that are strong, are stiff, and have a high melting point (e.g., Ti, Mo, Fe, Cr, and Nb) are potential reinforcements for making Mg-based composites, due in part to their limited or absent solid solubility in magnesium [5][6][7][8][9][10]. For example, incorporation of titanium in pure Mg to form Mg-5.6Ti improves tensile strength and ductility by 60% and 50%, respectively [5,6]. The addition of niobium and molybdenum to pure Mg has shown similar improvements in properties [6][7][8][9]. Sintered Mg-Ti composites have shown retention of their mechanical properties even at high temperatures [10]. Iron (Fe) is a low cost, high strength, and high melting point element. Iron has low solubility in magnesium, and the formation of magnesium-iron intermetallic compounds is absent [11,12]. There exists a eutectic reaction on the Mg-rich side at a temperature very close to the melting point of Mg; however, the composition of the eutectic point lies at <0.025 wt.% of Fe [12,13]. In the past, the presence of Fe in Mg (caused by melting Mg in iron and steel crucibles) was considered as an impurity, as it drastically reduced the corrosion resistance of Mg [14]. However, a recent investigation [15] has found that Fe content even up to 25% in Mg-Fe does not lead to severe corrosion. Recent research has revealed that the addition of Fe to Mg is beneficial for applications related to biomedical implants [16,17], hydrogen sensing and storage [18,19], and electromagnetic shielding [20].
For high temperature performance such as creep resistance, the high temperature stability of materials is essential [4,[21][22][23]. High temperature stability is important for powertrain components in automobile applications, e.g., engine components. At high operating temperatures, the coefficient of thermal expansion (CTE) is of concern as it affects the dimensional stability of components during service [24][25][26]. Mg-Al commercial alloys have better high temperature stability only for an Al content ≤3% [21]. An increase in Al content increases the formation of the Mg17Al12 intermetallic, and this phase degrades at temperatures >170 °C (443 K), thus deteriorating mechanical performance [21][22][23]. A better elevated temperature performance of Mg materials has been achieved via making Al-free Mg alloys, by using rare earth (RE) metals as alloying elements, and by making Mg composites [23,27,28].
The Mg-Fe system does not form intermetallics, and Fe is inherently a high strength metal with a high melting point (1811 K); hence, it is considered for investigation in the present work to assess its efficacy for applications that require high strength, high ductility, and high temperature stability. In this work, Mg-Fe metal-metal composites (Fe = 5, 10, and 15 wt.%) were synthesized by the disintegrated melt deposition technique and investigated for their microstructural characteristics and mechanical properties. A comparison of the performance of the best-performing composite with that of Mg metal-metal composites containing other metallic elements such as titanium, niobium, and molybdenum [5][6][7][8][9] is also presented.
Materials
Magnesium ingots of 99.8% purity (supplied by Tokyo Magnesium Company Limited, Yokohama, Japan) and iron particles of size 75 microns (98% purity, supplied by Alfa Aesar, Singapore) were used as the matrix and reinforcing materials, respectively.
Processing
Mg-Fe materials used in this study were synthesized through the liquid metallurgy route based disintegrated melt deposition (DMD) technique [1]. Mg turnings and Fe particles were heated in a graphite crucible to 750 °C in an electrical resistance furnace under an inert argon gas protective atmosphere. The superheated molten slurry was stirred at 465 rpm using a twin blade (pitch 45°) mild steel impeller (coated with Zirtex 25) for 5 min so as to facilitate uniform distribution of the Fe particles in molten Mg. The molten slurry was then bottom poured into a steel mould after disintegration by two jets of argon gas oriented normal to the melt stream. Following deposition, an ingot of 40 mm in diameter was obtained, which was then machined to a 36 mm diameter and soaked at 400 °C for 60 min. Hot extrusion was conducted using a 150 T hydraulic press at 350 °C with an extrusion ratio of 20.25:1, to obtain rods of 8 mm in diameter.
Synthesis of pure Mg was carried out using the steps mentioned above except that no reinforcement particles were added. Density measurements were conducted on polished samples taken from the extruded rods using Archimedes' principle. Distilled water was used as the immersion fluid. Weight measurements were made using an electronic balance (A&D HM-202, A&D Company, Limited, Tokyo, Japan) with an accuracy of ±0.1 mg. Theoretical densities of composites were calculated using the rule of mixtures. To determine their porosity, experimentally measured densities were compared with their theoretically estimated values. Porosity was calculated using the formula Porosity (%) = [(ρ_theoretical − ρ_experimental)/(ρ_theoretical − ρ_air)] × 100, where ρ_air at 20 °C and 1 atm is 0.0012 g/cm³.
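As a worked illustration of this bookkeeping, the sketch below (not from the paper) applies the rule of mixtures, here in its inverse form for weight fractions, and the porosity formula just given. The elemental densities are handbook values, and the experimental density in the demo call is an illustrative placeholder, not a measured value.

    RHO_MG = 1.738    # g/cm^3, pure Mg (handbook value)
    RHO_FE = 7.874    # g/cm^3, pure Fe (handbook value)
    RHO_AIR = 0.0012  # g/cm^3 at 20 degC and 1 atm, as stated above

    def theoretical_density(wt_frac_fe: float) -> float:
        """Inverse rule of mixtures for a two-phase Mg-Fe composite."""
        return 1.0 / ((1.0 - wt_frac_fe) / RHO_MG + wt_frac_fe / RHO_FE)

    def porosity_percent(rho_th: float, rho_exp: float) -> float:
        """Porosity (%) = [(rho_th - rho_exp) / (rho_th - rho_air)] x 100."""
        return (rho_th - rho_exp) / (rho_th - RHO_AIR) * 100.0

    rho_th = theoretical_density(0.05)  # Mg-5Fe
    print(f"Mg-5Fe theoretical density: {rho_th:.3f} g/cm^3")
    print(f"porosity at an illustrative rho_exp of 1.80 g/cm^3: "
          f"{porosity_percent(rho_th, 1.80):.2f} %")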
X-ray Diffraction
X-ray diffraction (XRD) analyses on the polished samples from extruded rods were performed using an automated Shimadzu LAB-XRD-6000 X-ray diffractometer (Shimadzu, Columbia, SC, USA) (Cu Kα = 1.54056 Å), with a scanning speed of 2°/min. Phase identification was carried out by matching the Bragg angle and the intensity of the peaks with the standard peaks of Mg, Fe, and other related phases. X-ray measurements were conducted on extruded samples, both in the transverse (i.e., sample cross-sectioned direction) and longitudinal directions to obtain information on the bulk texture (i.e., crystallographic orientation) of the Mg matrix.
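For reference, Bragg's law, λ = 2d sin θ, links the quoted Cu Kα wavelength to the inter-planar spacings of the Mg reflections reported in the results (2θ ≈ 32°, 34°, and 36°). A small sketch, not part of the paper's analysis:

    import math

    WAVELENGTH = 1.54056  # angstrom, Cu K-alpha (as stated above)

    def d_spacing(two_theta_deg: float) -> float:
        """Inter-planar spacing from Bragg's law, lambda = 2 d sin(theta)."""
        theta = math.radians(two_theta_deg / 2.0)
        return WAVELENGTH / (2.0 * math.sin(theta))

    for two_theta, plane in [(32.0, "(10-10) prism"),
                             (34.0, "(0002) basal"),
                             (36.0, "(10-11) pyramidal")]:
        print(f"2theta = {two_theta:4.1f} deg -> "
              f"d = {d_spacing(two_theta):.3f} angstrom, {plane}")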
Microstructural Characterization
Microstructural characterization was conducted on the extruded pure Mg and Mg-Fe materials to examine the morphology of the grains and Fe particle distribution. Optical microscope (Olympus, Singapore, Singapore), Jeol JSM-5800 LV scanning electron microscope (Jeol SEM, JEOL Ltd., Tokyo, Japan), and field emission scanning electron microscope (Hitachi FESEM-S4300, Hitachi High-Technologies Corporation, Tokyo, Japan) coupled with energy dispersion analysis (EDS) were used for the characterization. Specimens for optical microscopy were sectioned, polished, and etched with citral (4.2 g of citric acid monohydrate in 100 mL of water) for 5 s at room temperature. Scion image analysis software was used to identify the average grain size and aspect ratio.
Microhardness
Microhardness measurements were made on polished extruded samples using a Matsuzawa MXT 50 automatic digital micro-hardness tester (Matsuzawa Co. Ltd., Tokyo, Japan). Tests were conducted in accordance with ASTM standard E384-99 using a Vickers indenter (test load: 25 gf; dwell time: 15 s). Three samples were tested for each composition. Ten to fifteen readings were taken, and the average values are reported.
Tensile and Compressive Properties
Tensile and compression properties of the as-extruded pure Mg and Mg-Fe samples were evaluated using a fully automated servo-hydraulic mechanical testing machine (Model MTS 810) (MTS Systems, Eden Prairie, MN, USA). For tensile tests, smooth bar tensile specimens with diameter 5 mm and gauge length 25 mm (in accordance with ASTM test method E8M-96) were used. The crosshead speed was set at 0.254 mm/min (1.69 × 10⁻⁴ s⁻¹). For compression tests, samples with diameter of 8 mm and length to diameter ratio of l/d ~ 1 obtained from the extruded rods were used as test specimens. The crosshead speed was set at 0.040 mm/min (8.3 × 10⁻⁵ s⁻¹). For each composition, 5 tests were conducted, and the average values are reported.
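A quick consistency check on the quoted nominal strain rates, assuming strain rate = crosshead speed / specimen (gauge) length; note that the compressive speed reproduces the quoted 8.3 × 10⁻⁵ s⁻¹ only when read as mm/min, which is why it is stated that way above:

    # 0.254 mm/min over the 25 mm tensile gauge length
    tensile_rate = 0.254 / 60.0 / 25.0     # s^-1, ~1.69e-4
    # 0.040 mm/min over the 8 mm compression specimen
    compressive_rate = 0.040 / 60.0 / 8.0  # s^-1, ~8.3e-5
    print(f"tensile strain rate:     {tensile_rate:.2e} s^-1")
    print(f"compressive strain rate: {compressive_rate:.2e} s^-1")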
Fractography
Fracture surfaces of the tensile and compression tested samples were investigated using FESEM to identify the failure modes of the samples.
Coefficient of Thermal Expansion
The thermal expansion coefficient (CTE) of the extruded pure Mg and Mg-5Fe samples was measured using a LINSEIS TMA PT 1000LT thermo-mechanical analyser (Linseis, Selb, Germany) at a heating rate of 5 °C/min. The argon gas flow rate was 100 ccm/min. Displacement of the test samples (each 5 mm long) as a function of temperature in the range 323 to 673 K was recorded using an alumina probe and was used to determine the CTE.
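A mean CTE follows from the slope of thermal strain against temperature over the recorded window, CTE = (ΔL/L0)/ΔT. A small sketch with illustrative displacement values (placeholders chosen to land near the CTE of pure Mg, not measured data):

    import numpy as np

    L0_UM = 5_000.0                      # 5 mm sample length, in micrometres
    T = np.array([323.0, 473.0, 673.0])  # temperature, K
    dL = np.array([0.0, 21.0, 50.0])     # probe displacement, um (illustrative)

    strain = dL / L0_UM
    cte = np.polyfit(T, strain, 1)[0]    # slope of strain vs. temperature
    print(f"mean CTE over 323-673 K: {cte:.2e} /K")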
Impression Creep Tests
For impression creep tests, cylindrical samples with 6 mm diameter and 6 mm height were prepared from the extruded rods. Impression creep tests were carried out using an impression creep testing machine (Spranktronics, Bangalore, India), which contains constant-load equipment, a temperature controller, and a computer-controlled data acquisition system. In this method, by the application of a constant load (stress) at specific temperatures, the indenter is impressed on the sample. A tungsten carbide cylindrical punch (diameter: 1.5 mm) was used as the indenter. Indentation creep tests were performed under a load of 160 MPa and temperature of 473 K for dwell times up to 10,800 s (up to 3 h). During the load application, the impression depth was acquired as a function of time. Impression creep tests were performed on pure Mg and Mg-5Fe. For comparative study, impression creep tests were also conducted on Mg-5.6Ti and Mg-5.6Nb materials that were prepared by DMD method followed by hot extrusion [6,7].
Density and Porosity
The measured values and theoretical estimates of the density of pure Mg and Mg-Fe composites are given in Table 1. The porosities of Mg-Fe materials are higher than that of pure Mg. The porosity of the composites increased relative to the increase in Fe addition. The overall porosity for all the composites, however, remained <2%, indicating the near net shape forming capability of the processing methodology used in the present study. The absence of magnesium oxide peaks shows the capability of the fluxless and SF6-free DMD technique to mitigate the oxide formation solely using argon gas. The absence of intermetallic phase formation is due to the lack of solubility of Fe in Mg, as can be inferred from the Mg-Fe phase diagram [11][12][13].

Reinforcements and second phases can change the orientation (i.e., texture) of Mg matrices [29][30][31]. To understand the effect of Fe addition on the crystallographic orientation of the Mg matrix, X-ray analysis along the transverse and longitudinal sections was considered. Mg peaks observed at 2θ = 32°, 34°, and 36° in the diffraction patterns correspond to the (10-10) prism, (0002) basal, and (10-11) pyramidal planes of the HCP Mg crystal (Figure 1). Along the transverse direction, the prismatic plane intensity (2θ = 32°) is maximum for pure Mg and Mg-15Fe (Figure 2a), which is indicative of fibre texture (basal texture). This implies that the preferred alignment of prismatic planes in pure Mg and Mg-15Fe is in the direction perpendicular to their extrusion direction. Mg-5Fe and Mg-10Fe showed a maximum Mg peak intensity at 36°, which corresponds to the pyramidal planes (Figure 2a). For Mg-10Fe, the intensity of basal planes (2θ = 34°) was found to be equally prominent as those of pyramidal planes. Along the longitudinal section, the intensity of basal planes (2θ = 34°) is maximum for pure Mg, Mg-10Fe, and Mg-15Fe, whereas the intensity of pyramidal planes (2θ = 36°) is maximum for Mg-5Fe.
This observation shows a change in the bulk crystallographic orientation of Mg grains, caused by Fe addition, such that (i) the bulk texture change was not directly proportional to the amount of Fe addition and (ii) pure Mg and Mg-15Fe have a fibre texture, whereas Mg-5Fe and Mg-10Fe have a randomized texture.

The measured grain size values of the test materials are given in Table 1. By the addition of Fe to Mg, the grain size decreased. As an example, the average grain size of Mg-5Fe decreased by >50% when compared to pure Mg (Figure 3). Grain refinement is due to Fe particles that act as obstacles and hinder grain growth via restricting the migration of grain boundaries during hot extrusion. The average grain size increases with increasing Fe content but still remains smaller than that of pure Mg.

The distribution of Fe particles in the Mg matrix is shown in Figure 4. Mg-Fe composites showed no interfacial reaction products and no interfacial debonding (e.g., Mg-5Fe, Figure 4a). In the Mg-5Fe composite, a uniform distribution of Fe particles was observed (Figure 4b). However, in Mg-10Fe and Mg-15Fe composites, Fe particles were found to be clustered (Figure 4c,d). The clustering of reinforcement particles in metal-metal composites has been observed earlier [6][7][8][9].
In the present work, the clustering of Fe particles in the Mg matrix (i) indicates that beyond a certain limit of Fe addition (in the present case 5%), agglomeration occurs due to the low solubility of Fe in Mg; (ii) leads to an increase in the porosity level (cluster-associated porosity due to inefficient packing of Fe particles, Table 1); and (iii) does not effectively enable grain nucleation and, thereby, explains the relatively larger grain sizes seen in Mg-10Fe and Mg-15Fe when compared to Mg-5Fe (Table 1).
Microhardness
Microhardness values of pure Mg and Mg-Fe materials are given in Table 2. All Mg-Fe materials have a higher hardness than pure Mg, which is due to Fe particles. Fe has a hardness value of ≈150 Hv [32], which is almost four times that of Mg (42.4 Hv, Table 2). Fe can be considered as metallic reinforcement to Mg, as it does not form any intermetallic phase with Mg and increases the hardness. The presence of hard Fe particles in the Mg matrix causes a higher constraint to localized matrix deformation during indentation, resulting in a higher hardness [33,34]. However, it is seen that the percentage increase in hardness value of Mg-Fe composites decreases with the increase in Fe content, such that (i) a 5% addition of Fe increased the hardness value by >50%, (ii) a 10% addition of Fe increased the hardness value by >30%, and (iii) a 15% addition of Fe increased the hardness value by >10%, when compared to pure Mg. This decreasing trend in hardness with an increase in Fe content is attributed to (a) an increase in porosity level in the composites with an increase in Fe content (Table 1) and (b) non-uniform matrix hardening due to clustering of Fe particles in Mg-10Fe and Mg-15Fe composites (Figure 4). Amongst Mg-Fe composites, the Mg-5Fe composite displayed the highest microhardness.
Tensile Properties
The tensile properties of pure Mg and Mg-Fe composites are given in Table 3. Engineering stress-strain curves of pure Mg and Mg-Fe composites under tensile loading are shown in Figure 5. All Mg-Fe composites have a higher tensile yield strength (TYS), higher ultimate tensile strength (UTS), and higher % elongation (except for Mg-15Fe) than pure Mg, which is due to the presence of Fe particles. Fe particles act as metallic reinforcement and provide strengthening by efficient load transfer from the matrix to reinforcement [35,36]. The tensile properties and ductility of Mg-Fe composites in comparison to those of pure Mg were such that (i) for Mg-5Fe, yield strength and ultimate strength increased by 28% and 34%, respectively, and ductility by 57%; (ii) for Mg-10Fe, yield strength and ultimate strength increased by 12% and 28%, respectively, and ductility by 25%; and (iii) for Mg-15Fe, yield strength increased by 7.5%, and ultimate tensile strength was retained, whereas the ductility decreased by 60%. Mg-5Fe displayed the highest tensile properties.

Despite the lack of solid solubility between Mg and Fe, the ductility (i.e., strain to failure, % elongation) improvement in the composites can be attributed to (i) good mechanical bonding between Fe particles and the Mg matrix and (ii) the dissipation of stress concentration present at the crack front by Fe particles [37,38]. However, a drastic reduction in ductility of Mg-15Fe is mainly due to the severe clustering of Fe particles in the composite (Figure 4d). Such clustering of particles aids void nucleation and growth [39], as clusters have a high-volume fraction of particles when compared to the surrounding zones. A high localised volume fraction of particles leads to increased stress concentration in the clustered zone, and thereby voids initiate under tensile loading conditions and propagate as cracks. For uniform deformation to occur, the high stress concentration regions activate additional slip systems to accommodate the same amount of deformation vis à vis the surrounding matrix, which causes fracture at particle clusters, thereby leading to the premature failure of material [40,41]. Furthermore, the high porosity level in Mg-15Fe (Table 1) accelerates failure. As seen from Figure 5, for all the materials, the engineering stress-strain curves after their yield points show low work hardening until fracture. This behaviour of higher yield stress and low work hardening is primarily due to basal slip [5,42], and activation of non-basal slip systems, as twinning is not favoured under tension [42,43]. The deformation behaviour under tension is hence slip dominant.
Compressive Properties
The compressive strength properties of pure Mg and Mg-Fe composites are given in Table 4. Engineering stress-strain curves of pure Mg and Mg-Fe composites under compressive loading are shown in Figure 6. All Mg-Fe composites have a higher compressive yield strength (CYS) and a higher ultimate compressive strength (CTS) than pure Mg, which is due to the presence of Fe particles. The compressive strength properties of Mg-Fe composites in comparison to those of pure Mg were such that (i) for Mg-5Fe, the yield strength and ultimate strength increased by 53% and 32%, respectively; (ii) for Mg-10Fe, the yield strength and ultimate strength increased by 23% and 5.5%, respectively; and (iii) for Mg-15Fe, the yield strength increased by 12%, and the ultimate compressive strength was retained. The compressive ductility of Mg-5Fe increased slightly by 4%, whereas for Mg-10Fe and Mg-15Fe, the compressive ductility decreased by 11% and 22%, respectively. Amongst the Mg-Fe composites, the Mg-5Fe composite displayed the highest compressive properties. As seen from Figure 6, for all the materials, the engineering stress-strain curves have a sigmoidal shape with an initial upward concave profile (after their yield points) and a subsequent convex profile. As opposed to under tension, significant work hardening under compression in extruded Mg materials is due to deformation by twinning [42,43]. Furthermore, the activation of non-basal slip systems also contributes to work hardening [42], and as a result, the overall compressive strength (with large strains) increases. Fe composites in comparison to those of pure Mg were such that (i) for Mg-5Fe, the yield strength and ultimate strength increased by 53% and 32%, respectively; (ii) for Mg-10Fe, the yield strength and ultimate strength increased by 23% and 5.5%, respectively; and (iii) for Mg-15Fe, the yield strength increased by 12%, and the ultimate compressive strength was retained. The compressive ductility of Mg-5Fe increased slightly by 4%, whereas for Mg-10Fe and Mg-15Fe, the compressive ductility decreased by 11% and 22%, respectively. Amongst the Mg-Fe composites, the Mg-5Fe composite displayed the highest compressive properties. As seen from Figure 6, for all the materials, the engineering stress-strain curves have a sigmoidal shape with an initial upward concave profile (after their yield points) and a subsequent convex profile. As opposed to under tension, significant work hardening under compression in extruded Mg materials is due to deformation by twinning [42,43]. Furthermore, the activation of non-basal slip systems also contributes to work hardening [42], and as a result, the overall compressive strength (with large strains) increases.
Fractography
Fractography of tensile tested samples (Figure 7) indicates the failure mode of samples. Dominant ductile fracture features were observed in Mg-5Fe (Figure 7a), which showed high ductility (Figure 5). Mixed mode fracture with ductile and brittle features was observed for Mg-10Fe (Figure 7b). Fe particles remained intact in the Mg matrix, and debonding from the Mg matrix was not observed for composites (e.g., Mg-10Fe, Figure 7c). In Mg-15Fe, particle cracking was observed, with dominant brittle fracture features (Figure 7d).

For all the compression tested samples (Figure 8), fracture occurred at a ~45° angle with respect to the compression test axis. The presence of shear bands indicates that all composites experienced shear failure. Mg-10Fe and Mg-15Fe (Figure 8b,c) show rough fracture surfaces with a mixed mode of shear and brittle features. Scoring marks on Mg-15Fe (shown by arrows) caused by Fe particle clusters during matrix shearing can be seen in Figure 8d.
Coefficient of Thermal Expansion
The coefficient of thermal expansion (CTE) is a thermo-physical property that influences the dimensional stability of materials at elevated temperatures. The measured thermal expansion coefficient values of pure Mg and the Mg-5Fe composite are given in Table 5. The composite has a lower CTE value than pure Mg, which implies that the Mg-5Fe composite can retain better dimensional stability at elevated temperatures. The lower CTE value of the composite is primarily due to the contribution by Fe particles that have a lower CTE value (CTE value of Fe: 11.8 × 10⁻⁶/K) than pure Mg (CTE value of Mg: 28.4 × 10⁻⁶/K) [44]. The Mg-5Fe composite has a lower CTE value than those of some of the existing high temperature resistant Mg alloys such as QE22, AS21, EZ33, Mg-Sn, and other Mg metal-metal systems such as Mg-5.6Ti and Mg-3.6Mo [5,7,26,[44][45][46]] (Table 5). Although Ti and Mo have CTE values lower than Fe, the CTE values of Mg-5.6Ti and Mg-3.6Mo composites are slightly higher than that of Mg-5Fe, probably due to the clustering of Ti and Mo particles [6,7].

Table 5. Coefficient of thermal expansion values of pure Mg, Mg-5Fe, Mg-Ti, Mg-Mo, and high temperature resistant Mg alloys [6,7,26,45,46].
Impression Creep Behaviour
Impression creep curves of Mg-5Fe at 160 MPa and at 473 K are shown in Figure 9. The figure also shows creep curves for pure Mg, Mg-5.6Ti, and Mg-5Nb. Impression depth as a function of time is shown for the materials in Figure 9a. These materials experience primary creep, i.e., an increase in creep with time, followed by secondary creep, wherein their creep reached steady state. Fracture of samples (i.e., tertiary creep) does not occur in impression creep tests. During the creep process, materials experience work hardening and recovery [21]. Steady state is attained when an equilibrium is reached between these two competing mechanisms [21]. As can be seen from Figure 9a, the impression depth with respect to time is relatively lower for Mg-5Fe than that of pure Mg. Furthermore, Mg-Ti has the least penetration depth, and Mg-Nb has the highest penetration depth.

The impression velocity of the materials (change in impression depth with time, V_imp = dh/dt) as a function of impression depth is shown in Figure 9b. Steady states are reached when V_imp for the materials becomes constant with depth. Steady state creep rate values are obtained by plotting V_imp/2a (minimum creep rate), where V_imp is the impression velocity, and 2a is the diameter of the indenter (Figure 9c) [47,48]. From the figure, it is seen that the steady state creep rate of Mg-5Fe is relatively lower than that of pure Mg. Contradictory results on the enhancement of creep resistance of Mg composites with ceramic reinforcements have been reported earlier. While Mg-SiC particle composites and AE42 hybrid composites have shown higher creep resistance (i.e., lower creep rate) than their unreinforced counterparts [27,49], QE22-SiC composites had lower creep resistance than the alloy, due to increased grain boundary sliding and particle/matrix interfacial sliding in the composites [50,51]. In the present case, the steady state creep rate of Mg-5Fe is slightly lower than that of pure Mg.
Mg-5.6Ti has a lower creep rate than that of Mg-5Fe, and the creep rate of Mg-5Nb is 3.25 times higher than that of Mg-5Fe.
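For readers who want to trace the arithmetic of the steady-state creep rate described above, the sketch below applies it to synthetic depth-time data; the depth curve shape and the indenter diameter (2a = 1 mm) are illustrative assumptions, not values from the present tests.

```python
import numpy as np

# Illustrative impression creep data: depth h (mm) sampled over time t (s).
# Values are synthetic stand-ins for a real impression creep record.
t = np.linspace(0, 7200, 500)                 # 2 h test duration (assumed)
h = 0.02 * (1 - np.exp(-t / 600)) + 2e-6 * t  # primary + secondary creep shape

# Impression velocity V_imp = dh/dt, estimated by numerical differentiation.
v_imp = np.gradient(h, t)

# Steady state: V_imp becomes constant; take the tail of the test.
v_ss = v_imp[-50:].mean()

# Minimum (steady-state) creep rate = V_imp / 2a, where 2a is the indenter
# diameter (value assumed here for illustration).
two_a = 1.0  # mm
creep_rate = v_ss / two_a
print(f"steady-state creep rate ~ {creep_rate:.2e} 1/s")
```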
Factors Influencing Performance of Mg Composites
Mg materials have lower creep rates (i.e., higher creep resistance) than Al materials. The lower creep rates of Mg materials are due to the high internal stress of the hcp structure (with only two independent basal slip systems) compared to the fcc structure of Al materials [52]. Pure Mg is known to creep by lattice self-diffusion [53]. Studies on the impression creep behaviour of creep resistant Mg alloys and Mg composites have reported that these materials creep via dislocation climb controlled by a pipe diffusion mechanism (at the temperature and applied stress used in the present work: 473 K and 160 MPa) [27,48,49,54]. In Mg-5Fe, Fe is a strong, stiff, high melting point element and can thus impede dislocation motion in the Mg matrix. For dislocation motion to continue, dislocations have to overcome obstacles. At high temperature, this occurs by dislocation climb, i.e., a change of slip plane assisted by vacancy transport [55,56]. In the present work, the test temperature of 473 K is >0.5 Tm of the Mg matrix, which facilitates vacancy movement and transport and thereby allows dislocations to climb over obstacles (here, Fe particles). It is hence reasonable to suggest that creep in Mg-5Fe occurs predominantly by dislocation climb controlled by pipe diffusion.
Microstructure
Distribution of reinforcement particles in the matrix is an important microstructural feature that determines load transfer from the matrix to the particles. In the Mg-Fe composites, for Fe content > 5 wt.%, clustering of particles occurred, which reduced strength properties. Furthermore, a change in crystal orientation with increasing Fe content was identified using XRD analysis (Figure 2). Mg materials deform predominantly by basal slip, the dominant deformation mechanism at room temperature owing to its lower critical resolved shear stress (CRSS) [30,31,43]. Pure Mg and Mg-15Fe have a characteristic fibre texture with {0002} planes perpendicular to the extrusion direction, which reduces their ductility. Texture randomization (i.e., weakening) in the Mg-5Fe and Mg-10Fe composites is favourable for slip transition (basal to non-basal/cross slip), which improves their ductility. However, due to clustering of Fe particles in Mg-10Fe, its ductility was lower than that of Mg-5Fe.
Reinforcing Metallic Elements
Nature (i.e., ceramic or metallic) and morphology (i.e., shape and size) of the reinforcing phase in the Mg matrix influence the material response to externally applied loading. Ceramic reinforcements usually cause interfacial reactions at the reinforcement/matrix interface [3,57] that drastically reduce ductility. Metallic reinforcing particles with no or minimal solubility in Mg show no interfacial products, as seen in Mg-Fe (Figure 4a), Mg-Ti, Mg-Nb, and Mg-Mo composites [5-9]. The absence of deleterious interfacial products leads to efficient load transfer from the matrix to the reinforcement, thereby enhancing strength and ductility (Tables 3 and 4). Hardness, tensile, and compressive properties of Mg-5Fe (the composite that showed the best performance) and of Mg composites with other metallic reinforcements are given in Tables 6 and 7. On a comparative note, Mg-5Fe outperforms Mg metal-metal composites with other metallic reinforcements in terms of strength and ductility improvement, the reasons being: (i) the uniform distribution of Fe particles without clustering, (ii) random texture evolution, and (iii) Fe particles with rounded (i.e., not sharp) corners. Reasons for the relatively lower performance of Mg metal-metal composites with other metallic reinforcements are: (i) Mg-5.6Ti, Mg-3.6Mo, and Mg-5Nb had clustered particles [6-8], and (ii) Ti particles in Mg-5.6Ti were irregular in shape with sharp corners and large in size (<140 µm) [6]. Usually, irregularly shaped particles with sharp corners increase dislocation density, which contributes to strength improvement; however, they also aid clustering, which reduces ductility (as observed in Mg-5.6Ti [6]). In addition, (iii) the strong basal texture in Mg-5.6Ti was unfavourable in terms of ductility [58]. Compared to the rest of the Mg metal-metal composites, Mg-5Nb showed the highest ductility, as Nb is an inherently ductile metal (~50% elongation at room temperature) [59].
The improvement in the tensile and compressive properties of Mg-5Fe can be attributed to several strengthening mechanisms: (i) effective load transfer from the Mg matrix to the hard Fe particles; (ii) an increase in dislocation density due to thermal residual stresses, which arise from the difference in the CTE values of the matrix (Mg: 28.4 × 10−6/K) and particles (Fe: 11.8 × 10−6/K) and increase yield strength; (iii) the Hall-Petch effect of increased yield strength due to grain refinement; (iv) Orowan strengthening due to the hindrance of dislocation motion by Fe particles; and (v) texture randomization, which activates non-basal slip planes and enhances the work hardening ability. These mechanisms also improve tensile and compressive properties in other Mg metal-metal composite systems such as Mg-Ti, Mg-Nb, and Mg-Mo [6-8].
Figure 10 shows steady state creep rate values of Mg-5Fe (shown by arrow) in comparison with those of Mg-5.6Ti, Mg-5Nb, Mg alloys (AZ31, Mg-5Sn, AE42 [27,48]), Mg composites with ceramic reinforcements (Mg-SiC [49]), and Al alloys (A356, A356 + Sc [65]). Data for the Mg alloys, Mg composites with ceramic reinforcements, and Al alloys were taken from published reports of impression creep tests conducted at the same temperature and stress as used in the present work (473 K; 160 MPa). Dominant creep strengthening mechanisms for the different material systems are indicated in the figure.
Mg materials inherently have lower creep rates when compared to Al materials [49]. In most Mg alloys, creep strengthening occurs due to solid solution, precipitation, grain boundary, and dislocation hardening [21,27,48,52,53]. Solid solution strengthening is the dominant mechanism in the commercial AZ31 alloy. In Mg-Al alloys, at temperatures >443 K, Mg17Al12 precipitates undergo dissolution, and hence precipitation strengthening is absent. Precipitation strengthening is the dominant creep strengthening mechanism in creep resistant Mg alloys (Mg-5Sn and AE42), wherein thermally stable precipitates such as the Mg2Sn phase in Mg-5Sn and the rare earth metal based precipitate phase in the AE42 alloy improve creep resistance [27,48]. In Mg composites, the major strengthening mechanisms are: (i) the load bearing capacity of hard reinforcement particles and (ii) substructural strengthening. Substructural features that provide strengthening include: (i) grain refinement, (ii) type (e.g., fibre, particles) and distribution of reinforcement, (iii) increase in dislocation density, and (iv) formation of twins. Dislocation hardening arises due to CTE mismatch and the pinning of dislocations by reinforcements. For example, in Mg-SiCp composites, strengthening predominantly occurs due to dislocation hardening and grain boundary strengthening [49].
Solid solution strengthening and precipitation strengthening do not occur in Mg-5Fe, Mg-5.6Ti, and Mg-5Nb, owing to the lack of solubility between Mg and the metallic reinforcements. Rather, the mechanisms that contribute to creep strengthening include: (i) the load bearing effect of metallic reinforcements that are strong and thermally stable; (ii) dislocation hardening, due to the CTE mismatch between Mg and its metallic reinforcements; (iii) particle strengthening, due to the uniform distribution of Fe particles in Mg in the case of Mg-5Fe and, in the case of Mg-5.6Ti, the irregularly shaped and sharp-cornered Ti particles that increase dislocation density in the matrix (Mg-5.6Ti has the lowest creep rate amongst the Mg metal-metal composites); and (iv) Orowan strengthening, due to the effective pinning of dislocations by the metallic reinforcements.
In addition to the above-mentioned mechanisms, the interaction of twins with grain boundaries would lead to back stress and pile-up within grains and also contributes to the creep strengthening of the materials [66].
Conclusions
Magnesium composites containing iron particles (Fe = 5, 10, and 15 wt.%) were synthesized using a disintegrated melt deposition technique followed by hot extrusion, and their microstructure and mechanical properties were investigated. The following conclusions can be drawn from the present study:
• Addition of Fe particles to the Mg matrix resulted in a metal-metal composite, due to a lack of solid solubility between Mg and Fe. Fe particles act as metallic reinforcement in the Mg matrix.
• Fe content influences grain refinement, uniform distribution of Fe particles, and crystallographic orientation of the Mg matrix.
• Addition of Fe to Mg significantly improves mechanical properties. Amongst the Mg-Fe composites, Mg-5Fe showed the best hardness, tensile, and compression strength properties, with significant improvement in ductility. Mg-5Fe has better dimensional stability and a lower creep rate when compared to the commercial AE42 alloy and the Mg-SiC composite.
• Mg-5Fe showed the best mechanical performance due to (a) a uniform distribution of Fe particles in the Mg matrix, (b) grain refinement, (c) texture randomization, (d) Fe particles acting as effective reinforcement and contributing to various strengthening mechanisms, and (e) the absence of deleterious interfacial reactions.
Mg-Fe composites, due to their high strength, ductility, and high temperature stability, can be considered for aerospace, automotive, and biomedical applications. Compared to other metallic reinforcement elements, such as Ti, Mo, Nb, etc., Fe is relatively less expensive and would provide cost effectiveness for bulk manufacturing.
|
v3-fos-license
|
2018-06-21T00:15:10.052Z
|
2018-05-18T00:00:00.000
|
49309463
|
{
"extfieldsofstudy": [
"Medicine",
"Psychology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1016/j.dib.2018.05.076",
"pdf_hash": "c9e76e8936f757d62fec8d12cf3ab3654325f44b",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45936",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "c9e76e8936f757d62fec8d12cf3ab3654325f44b",
"year": 2018
}
|
pes2o/s2orc
|
Data on eye movements in people with glaucoma and peers with normal vision
Eye movements of glaucoma patients have been shown to differ from those of age-similar control groups when performing everyday tasks, such as reading (Burton et al., 2012; Smith et al., 2014) [1,2], visual search (Smith et al., 2012) [3], face recognition (Glen et al., 2013) [4], driving, and viewing static images (Smith et al., 2012) [5]. Described here is the dataset from a recent publication in which we compared the eye movements of 44 glaucoma patients and 32 age-similar controls while they watched a series of short video clips taken from television programs (Crabb et al., 2018) [6]. Gaze was recorded at 1000 Hz using a remote eye-tracker. We also provide demographic information and results from a clinical examination of vision for each participant.
Specifications Table
Subject area: Visual science
More specific subject area: Visual science, Optometry, Statistics
The data could be used to investigate the relationship between eye movements and vision loss, and the visual field data could be used to explore the relationship between glaucoma and eye movements.
Data
Eye movement data were collected to test the hypothesis that age-related neurodegenerative eye disease can be detected in a person's spontaneous eye movements while watching video clips [6]. Gaze was recorded in 44 glaucoma patients and 32 age-similar people with healthy vision. All patients had an established clinical diagnosis of chronic open angle glaucoma (COAG): an age-related disease of the optic nerve that can result in a progressive loss of visual function [7,8]. Each participant watched three video clips, for approximately 16 min in total, and completed standard clinical tests of visual function (visual acuity, contrast sensitivity, visual field examination). The dataset contains raw gaze data, processed eye movement data, clinical vision test results, and basic demographic information (age, sex) [1-5].
Participants
Forty-four people with glaucoma were recruited from clinics at Moorfields Eye Hospital NHS Foundation Trust, London. All patients had an established clinical diagnosis of chronic open angle glaucoma (COAG) for at least two years and were between 50 and 80 years of age. COAG was defined, following clinical guidelines, by the presence of reproducible visual field defects in at least one eye with corresponding damage to the optic nerve head and an open iridocorneal drainage angle on gonioscopy [9]. The diagnosis was made by a glaucoma specialist. A deliberate attempt was made to recruit a sample of patients with a range of disease severity according to visual field loss. Patients were purposely not recruited if they had any ocular disease other than glaucoma (except for an uncomplicated lens replacement cataract surgery). In addition, at the point of recruitment, patients had slit lamp biomicroscopy performed by an ophthalmologist to further exclude any other concomitant macular pathology, ocular surface disease or any significant problems with dry eye.
Thirty-two healthy people (controls), of a similar age to the patients, were recruited from the City University London Optometry Clinic; this is a primary care centre where people routinely receive a full eye examination, which includes measurement of visual acuity, refraction, binocular vision assessment, pupil reactions, slit-lamp assessment of the anterior eye, measurement of intraocular pressure, visual field assessment and indirect ophthalmoscopy of the macula, optic nerve head, and peripheral retina.
Clinical vision tests
All participants underwent an examination of visual function by a qualified optometrist on the day of testing. Corrected binocular visual acuity (VA) was measured using an Early Treatment Diabetic Retinopathy Study (ETDRS) letter chart. All participants had binocular VA of 0.18 logMAR (Snellen equivalent 6/9) or better. Binocular contrast sensitivity (CS) was measured with a Pelli-Robson chart. Visual fields were measured monocularly in both eyes using automated static threshold perimetry, performed with a Humphrey Field Analyzer (HFA; Carl Zeiss Meditec, CA, USA) using a standard 24-2 grid and the Swedish Interactive Testing Algorithm (SITA). HFA mean deviation (MD) is a standard measure of overall visual field loss, relative to healthy age-matched observers, with more negative values indicating greater loss. The Oculus C-Quant straylight meter (Oculus GmbH, Wetzlar, Germany) was used to measure abnormal light scattering in the eye media, to exclude participants with media opacity and other lens-type artifacts; participants were required to be within "normal limits" for this test. Furthermore, all participants were examined with a modified version of the Middlesex Elderly Assessment of Mental State (MEAMS, Pearson, London, UK), a psychometric test designed to detect gross impairment of specific cognitive skills, such as memory and object recognition, in an elderly population. All participants passed the MEAMS test. The light scattering and MEAMS test results are not included in the hosted data; however, VA, CS, and visual field data are included.
Summary measures of these vision tests, such as HFA MD in decibels (dB), visual acuity (VA) in logMAR, and contrast sensitivity (CS) in log units, are provided in a single comma-separated file, along with basic demographic information. A sample of these data is shown in Table 1.
Table 1 Sample clinical information of participants. The complete tables for both patients and controls are uploaded in a spreadsheet file. The tables have eight fields: participant ID, the eye used for the study, age, sex, MD measurements (for both left and right eyes), binocular VA, and CS measurements. Participants were assigned a unique ID: G001-G044 for patients and C001-C032 for controls. Shown here are the data from the first five patients.
These data allow investigation of the relationship between different eye movement parameters (such as saccade amplitudes and rates) and common clinical measures of vision. Individual Differential Light Sensitivity (DLS) values [10] for each of the 54 test points in the 24-2 visual field test are provided for every participant/eye. These values are stored in a single row, as shown in Table 2, and can be visualized in a visual field map, as shown in Fig. 1; the vectorization was performed by concatenating sensitivity values from the first row (top) to the last row (bottom), with the same procedure applied to both eyes. These data could be used to investigate the effect of field loss on eye movements; for example, in the past there have been attempts to relate the directions of spontaneous saccades to locations of visual field loss [11,12].
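As an illustration of how the single-row DLS vectors could be unpacked back into a visual field map, the sketch below assumes the standard 24-2 grid row lengths (4, 6, 8, 9, 9, 8, 6, 4, summing to the 54 points described above); the exact column alignment of the published maps should be checked against Fig. 1.

```python
import numpy as np

# Row lengths of the standard Humphrey 24-2 test grid (assumed layout;
# 4+6+8+9+9+8+6+4 = 54 points, matching the 54 DLS values per eye).
ROW_LENGTHS = [4, 6, 8, 9, 9, 8, 6, 4]

def dls_row_to_grid(dls_values):
    """Unpack a 54-value DLS row (concatenated top row to bottom row, as
    described in the text) into a padded 8x9 map with NaN off-grid."""
    width = max(ROW_LENGTHS)
    grid = np.full((len(ROW_LENGTHS), width), np.nan)
    i = 0
    for r, n in enumerate(ROW_LENGTHS):
        pad = (width - n) // 2  # roughly centre each row in the map
        grid[r, pad:pad + n] = dls_values[i:i + n]
        i += n
    return grid

# Example with synthetic sensitivities (dB):
example = np.random.default_rng(0).uniform(0, 35, 54)
print(dls_row_to_grid(example))
```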
Raw gaze data
Gaze was measured using an EyeLink 1000 eye-tracker (SR Research Ltd., Ontario, Canada). Participants were positioned, using a chin rest, at a viewing distance of 60 cm. The eye-tracker outputs data in a proprietary EDF file format (.EDF; EyeLink Data File). For ease of use, these EDF files were converted into ASC files using a translator program (EDF2ASC) supplied by SR Research. The ASC files contain all recorded eye-tracking events, including the start and end times of eye movement events such as fixations, saccades, and blinks. During fixations and saccades, the eye position (in screen coordinates) was recorded. Other eye-tracking information, such as calibration and synchronization information, was also stored in the ASC files. The gaze data for each participant were stored in individual ASC files (i.e., 44 and 32 ASC files for glaucoma patients and controls, respectively). A detailed description of the ASC file format and structure is provided by the manufacturer (SR Research; https://www.sr-research.com).
Processed eye movement data
We processed the raw ASC files to extract fixations and saccades using a bespoke C++ program. The program searches for flags that indicate the end of a fixation ('EFIX') or a saccade ('ESACC') in the ASC file. Each fixation end flag contains its start and end time, duration, mean position, and mean pupil size during the fixation. Similarly, a saccade end flag contains its amplitude, velocity, duration, start and end time, and start and end position. The extracted fixation and saccade events have eight and eleven fields, respectively (Table 3). These processed eye movement data were stored in CSV format; the dataset thus contains 44 and 32 CSV files for glaucoma patients and controls, respectively. It should be noted that, due to poor tracking and technical errors, the data from five controls (C019-C023) and one patient (G010) are incomplete. Their data, however, are included in the dataset for completeness.
Table 3 Description of the fixation and saccade fields contained within the "processed eye-movement data" CSV files. Five fields (trial name, eye, start time, end time, and duration) are common to both fixation and saccade events. Saccade and fixation positions are expressed using four (Start X, Start Y, End X, and End Y) and two (X and Y) fields, respectively. In addition, each saccade has two additional fields that describe the size and speed of the saccade.
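The authors' extraction step used a bespoke C++ program; a rough Python equivalent is sketched below. The EFIX/ESACC field layout follows the usual EyeLink ASC convention, but the exact order should be verified against the SR Research documentation for the tracker version used.

```python
def _floats(tokens):
    """Convert ASC numeric tokens to floats; EyeLink writes '.' for missing."""
    return [float(t) if t != "." else float("nan") for t in tokens]

def parse_asc(path):
    """Extract fixation and saccade end events from an EyeLink ASC file.
    Field order assumed here (check the SR Research docs):
      EFIX  <eye> <start> <end> <dur> <mean x> <mean y> <mean pupil>
      ESACC <eye> <start> <end> <dur> <sx> <sy> <ex> <ey> <amp> <peak vel>"""
    fixations, saccades = [], []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "EFIX":
                fixations.append((parts[1], *_floats(parts[2:8])))
            elif parts[0] == "ESACC":
                saccades.append((parts[1], *_floats(parts[2:11])))
    return fixations, saccades
```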
Within the data archive, we include a Minimal Working Example MATLAB script ('SaccadeAmplitudePlot.m') which demonstrates how the processed data can be used (in this case, to plot the distribution of saccade amplitudes for each participant). This program can easily be extended to compute other eye movement metrics, such as fixation duration or saccade rate, that are commonly used to quantify the visual behaviour of patients and controls [1-5].
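For users who prefer Python over the supplied MATLAB example, an equivalent sketch might look like the following; the file name and column labels ('Amplitude', 'Start time', 'End time') are assumptions that should be matched to the actual field names in Table 3.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Load one participant's processed saccade CSV (hypothetical file name).
df = pd.read_csv("G001_saccades.csv")

# Distribution of saccade amplitudes, mirroring the MATLAB example.
df["Amplitude"].plot.hist(bins=40)
plt.xlabel("Saccade amplitude (deg)")
plt.ylabel("Count")
plt.show()

# Another common metric: saccade rate = saccade count / viewing time.
# Rough estimate spanning all trials in the file; timestamps are in ms.
viewing_time_s = (df["End time"].max() - df["Start time"].min()) / 1000.0
print("saccade rate (per s):", len(df) / viewing_time_s)
```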
Apparatus
Participants viewed sequentially three unmodified TV and film clips (including audio) on a 22 in. monitor (Iiyama Vision Master PRO 514, Iiyama Corporation, Tokyo, Japan) at a resolution of 1600 × 1200 pixels (refresh rate 100 Hz). Monocular eye movements were recorded using an EyeLink 1000 eye tracker (SR Research, Ontario, Canada) while participants watched the video clips monocularly. The eye tracker was configured to detect saccades using velocity and acceleration thresholds of 30°/s and 8000°/s², respectively. The eye giving the best quality pupil detection and corneal reflection was chosen for tracking. The EyeLink proprietary algorithm (nine-point calibration) was used to calibrate the eye tracker and was repeated, as required, until the accuracy was judged by the software to be of "good" quality. Drift correction was also performed prior to each of the three video clips. In cases where a large drift (>5°) was detected, a recalibration was performed.
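The saccade detection just described is performed internally by the EyeLink parser; a simplified sketch of such velocity/acceleration thresholding on raw 1000 Hz gaze samples is given below (the real parser is more elaborate, e.g., in how it merges flagged samples into events).

```python
import numpy as np

def saccade_mask(x_deg, y_deg, fs=1000.0, v_thresh=30.0, a_thresh=8000.0):
    """Flag samples as saccadic when eye velocity exceeds 30 deg/s or
    acceleration exceeds 8000 deg/s^2 (the thresholds used in the study).
    A simplified sketch of velocity/acceleration-based detection."""
    vx = np.gradient(x_deg) * fs          # deg/s
    vy = np.gradient(y_deg) * fs
    speed = np.hypot(vx, vy)
    accel = np.abs(np.gradient(speed)) * fs  # deg/s^2
    return (speed > v_thresh) | (accel > a_thresh)
```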
Stimuli
One clip (top row in Fig. 2) was an excerpt from an entertainment program (309 s; Dad's Army, BBC Television) which covered the full screen (subtending a half-angle of 20.3° by 14.9°). The other two clips (middle and bottom rows in Fig. 2) were taken from a feature film (200 s; The History Boys, 20th Century Fox) and a sport program (436 s; 2010 Vancouver Winter Olympics Men's Ski Cross, BBC Television); both of these clips were recorded at a 16:9 aspect ratio and therefore contained black rectangles at the top and bottom of the screen (subtending a half-angle of 17.3° by 10.6°). The characteristics of the three stimuli are summarized in Table 4.
Ethics
The study was approved by the Moorfields and Whittington Research Ethics Committee, London, and the School of Health Sciences Research and Ethics Committee, City, University of London. Written informed consent was obtained from each participant prior to examination, and the research was conducted in accordance with the tenets of the Declaration of Helsinki.
|
v3-fos-license
|
2016-05-10T04:10:42.628Z
|
2014-06-23T00:00:00.000
|
13740104
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/acel.12239",
"pdf_hash": "5ff0775f61b8960c93e354dfe61ca3a82064a177",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45939",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "8c4d9077c7212967213a8b460d5f920aba8c92fe",
"year": 2014
}
|
pes2o/s2orc
|
Shared signatures of social stress and aging in peripheral blood mononuclear cell gene expression profiles
Chronic social stress is a predictor of both aging-related disease and mortality risk. Hence, chronic stress has been hypothesized to directly exacerbate the process of physiological aging. Here, we evaluated this hypothesis at the level of gene regulation. We compared two data sets of genome-wide gene expression levels in peripheral blood mononuclear cells (PBMCs): one that captured aging effects and another that focused on chronic social stress. Overall, we found that the direction, although not necessarily the magnitude, of significant gene expression changes tends to be shared between the two data sets. This overlap was observable at three levels: (i) individual genes; (ii) general functional categories of genes; and (iii) molecular pathways implicated in aging. However, we also found evidence that heterogeneity in PBMC composition limits the power to detect more extensive similarities, suggesting that our findings reflect an underestimate of the degree to which age and social stress influence gene regulation in parallel. Cell type-specific data on gene regulation will be important to overcome this limitation in the future studies.
The major causes of chronic social stress-low social status, social isolation, and lack of social support-are also linked to higher rates of age-related disease and mortality (House et al., 1988;Shaw et al., 1999;Sapolsky, 2004;Marmot, 2006;Holt-Lunstad et al., 2010). This observation has given rise to the hypothesis that social stress influences the aging process, potentially by affecting the same biological pathways that change during aging. This idea predicts, first, that biomarkers of social stress should also be biomarkers of aging and, second, that the direction of social stress effects on these biomarkers should recapitulate changes with age (Bauer, 2008). Both predictions are supported for a few wellcharacterized biomarkers, such as IL-6 and telomerase protein levels (Epel et al., 2004;Piazza et al., 2010;Needham et al., 2012;Zalli et al., 2014). However, we do not yet know the extent to which these patterns hold more broadly-information that is key for understanding how social stress impacts the aging process.
Here, we compared two previously published data sets to investigate the relationship between chronic social stress and aging for thousands of genes simultaneously. Both data sets measured genome-wide gene expression levels in peripheral blood mononuclear cells (PBMCs). The first, a study of 1240 humans 15-94 years old, captured the effect of age on gene expression (Göring et al., 2007; Hong et al., 2008). The second involved experimental manipulation of dominance rank (i.e., social status) in 49 rhesus macaques and allowed us to identify genes associated with the response to rank-induced chronic social stress (Tung et al., 2012). Importantly, the physiological consequences of both aging and social stress in nonhuman primates often parallel those observed in humans (Roth et al., 2004; Sapolsky, 2005), including at the level of gene expression (Somel et al., 2010; Tung & Gilad, 2013). Thus, by comparing gene expression levels between the two data sets, we were able to test whether social stress recapitulates the effects of aging.
To do so, we focused on the set of genes (n = 4252) included in both data sets. Overall, we found a significant enrichment of genes that were either consistently upregulated or consistently downregulated in both older and lower status individuals (odds ratio = 1.37, Fisher's exact test (FET), P = 3.7 × 10−7). This enrichment was even stronger (OR = 2.14, FET P = 3.0 × 10−7) for the 819 genes that were independently and significantly associated with both variables at a 20% false discovery rate (see Data S1 and Table S2 for similar results using alternative FDR thresholds). Interestingly, some of the genes identified in this analysis (Table S1) are also known biomarkers of aging [e.g., B2M and NF-IL6: (Ershler & Keller, 2000; Annweiler et al., 2011)]. In contrast, the magnitudes of age and rank effects were not significantly correlated, either across all 4252 genes (Spearman's ρ = 0.065, permutation test P = 0.21) or among the 819 genes significantly associated with both age and social stress (Spearman's ρ = 0.062, P = 0.36). This observation might be interpreted in two, non-mutually exclusive ways. First, while aging and chronic social stress influence similar genes, their exact impact on these genes may differ. A second likely possibility is that parallels at the level of direction, rather than magnitude, are more readily detectable across data sets, especially those obtained from different species and using different sampling methods.
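The directional concordance test described here can be illustrated in a few lines; the sketch below uses synthetic effect estimates in place of the real per-gene age and rank effects, tabulating sign agreement into a 2 × 2 table for Fisher's exact test alongside the Spearman correlation of effect sizes.

```python
import numpy as np
from scipy.stats import fisher_exact, spearmanr

# Synthetic stand-ins for per-gene age and rank effect estimates
# (the real analysis used the 4252 genes shared across data sets).
rng = np.random.default_rng(1)
age_eff = rng.normal(size=4252)
rank_eff = 0.3 * age_eff + rng.normal(size=4252)

# 2x2 table of direction agreement: up/up, up/down, down/up, down/down.
up_up = np.sum((age_eff > 0) & (rank_eff > 0))
up_dn = np.sum((age_eff > 0) & (rank_eff < 0))
dn_up = np.sum((age_eff < 0) & (rank_eff > 0))
dn_dn = np.sum((age_eff < 0) & (rank_eff < 0))

odds_ratio, p = fisher_exact([[up_up, up_dn], [dn_up, dn_dn]])
rho, p_rho = spearmanr(age_eff, rank_eff)
print(f"OR={odds_ratio:.2f}, FET P={p:.1e}; Spearman rho={rho:.2f} (P={p_rho:.2g})")
```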
In addition to directional similarities at the level of individual genes, social stress and aging could affect similar biological pathways. To test this possibility, we used Gene Ontology (GO) terms (specifically, high-level 'GO Slim' categories) to identify functionally related sets of genes that were over-represented among significant genes in each of the two data sets (Ashburner et al., 2000). Twenty-nine gene sets were enriched in both cases. Twenty-seven of these 'co-enriched' gene categories were similarly affected by social status and age (i.e., both associated with upregulation, or both associated with downregulation, with increasing age and lower social status), which was significantly greater than expected by chance (permutation test P < 0.0001; Table 1). Only two co-enriched categories, 'reproduction' and 'ATPase activity', were enriched in both data sets in a manner inconsistent with our motivating hypothesis, which was no more than expected by chance (P = 0.79).
Because the co-enriched categories were quite broad, we also investigated gene sets linked to well-studied aging-related pathways to test whether they, too, were co-enriched across data sets. Specifically, we investigated gene sets connected to known hallmarks of aging, including inflammation, insulin growth factor signaling, mammalian target of rapamycin (mTOR) signaling, RNA processing, telomere maintenance, mitochondrial senescence, and oxidative stress (López-Otín et al., 2013). We found fifteen co-enriched gene sets that were either both upregulated or both downregulated with aging and chronic social stress (Table S3; permutation test P = 2.8 × 10−3) and none that exhibited the opposite pattern.
We then asked whether genes within each category show concordance in the direction of effects across data sets (i.e., concordantly increased or concordantly decreased with older age and lower dominance rank). We identified significant concordance within individual co-enriched GO Slim categories for seven gene sets (Table 1). Furthermore, genes in 22 of the 27 co-enriched GO Slim categories were more often concordant than discordant (binomial test: P = 1.5 × 10−3). Categories previously linked to aging exhibited a similar pattern (10 of 12, excluding categories with ties; P = 3.9 × 10−2; Table S3).
Thus, in PBMCs, aging and chronic social stress appear to influence a similar set of both broad categories of genes and specific pathways previously implicated in aging. However, we consistently found that directional similarities were more common, and/or easier to detect, than correlations in effect size: only seven of the 27 co-enriched GO Slim categories, and none of the 15 aging-related categories, exhibited significant effect size correlations in the predicted direction (Tables 1 and S3). This may be because age and social stress do not alter the same genes within pathways affected by both conditions. Alternatively, discordant changes in PBMC composition between aged and socially stressed individuals might mask parallel changes in gene expression within individual cell types (suggesting that some, but not all, aspects of physiological changes with aging and chronic social stress are shared). Indeed, cell-type composition data from the macaque social stress experiment revealed a significant correlation between cytotoxic T-cell proportions and dominance rank (lower ranking individuals had proportionally fewer of these cells; Tung et al., 2012). While T-cell proportions also change with age, they may not do so in a manner completely parallel to that observed with social stress: depletion of naïve T cells during aging, for example, has been hypothesized to result from accumulated exposure to pathogens over the life course, a mechanism unlikely to be at work in the social stress data set (Larbi et al., 2008).
Table 1 (excerpt). Co-enriched GO Slim categories downregulated in both data sets, listing the FET odds ratio assessing the directional concordance of aging and chronic social stress effects on gene expression within each category, the correlation (Spearman's rho) between the two sets of effects, and the number of genes where recoverable: Nucleus (CC), 1.46 (P = 4.9 × 10−5), 0.09 (P = 8.4 × 10−5), 1794 genes; Cellular nitrogen compound metabolic process (BP), 1.53 (P = 9.9 × 10−5), 0.09 (P = 6.3 × 10−4), 1308 genes; Translation factor activity, nucleic acid binding (MF), 8.19 (P = 5.4 × 10−3), 0.32 (P = 4.7 × 10−2); Nuclear envelope (CC), 3.02 (P = 9.1 × 10−3), 0.33 (P = 9.6 × 10−4); Nucleolus (CC), 2.04 (P = 1.4 × 10−2), 0.16 (P = 2.7 × 10−2), 186 genes; RNA binding, 1.08 (P = 0.35), 610 genes; mRNA processing (BP), 0.51 (P = 0.96), 130 genes. GO domains are given in parentheses: biological process (BP), cellular component (CC), molecular function (MF). Shaded rows in the original table designate categories in which the effects of aging and social stress are more often concordant than not; categories with significant FET ORs or significant correlations are indicated in bold.
To test whether differences in PBMC composition affected our analysis, we therefore quantified how uniformly each gene was expressed across PBMC cell types. For each gene, we calculated an 'evenness' metric, e (Haygood et al., 2010), using publicly available gene expression data from each of the five major PBMC cell types in humans (Watkins et al., 2009) (SI). We found that, while the subset of genes that were the most evenly expressed (e > 0.90; n = 555 genes) exhibited strong concordance between the directional effects of aging and social status (OR = 2.45, FET P = 1.1 × 10−6), this pattern was undetectable among unevenly expressed genes (e < 0.90; n = 257 genes, OR = 1.27, FET P = 0.42). Further, genes in co-enriched categories that were both concordant in direction and significantly correlated between data sets were much more evenly expressed than genes in categories that had concordant, but not significantly correlated, effects [Kolmogorov-Smirnov (K-S) test, D = 0.061, P = 1.3 × 10−4]. In turn, genes in both of these sets were more evenly expressed than genes in categories that had discordant effects (K-S test, D = 0.058, P = 0.012; Fig. 1; see Fig. S1 for aging-related categories). Thus, cell-type composition may confound the ability to detect parallels between aging effects and social stress effects for genes with higher levels of tissue-specific expression bias.
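The evenness metric can be illustrated as a normalized Shannon entropy of a gene's expression across cell types; this is one standard formulation of such a score, offered as a sketch rather than the exact computation of Haygood et al. (2010).

```python
import numpy as np

def evenness(expr_by_cell_type):
    """Normalized Shannon entropy of a gene's expression across cell types:
    0 when the gene is expressed in a single cell type, 1 when expression is
    equal across all types. One standard formulation of an evenness score."""
    p = np.asarray(expr_by_cell_type, dtype=float)
    p = p / p.sum()
    p = p[p > 0]                       # treat 0 * log(0) as 0
    h = -(p * np.log(p)).sum()
    return h / np.log(len(expr_by_cell_type))

# Five major PBMC cell types, as in the text:
print(evenness([10, 10, 10, 10, 10]))  # -> 1.0 (perfectly even)
print(evenness([50, 0, 0, 0, 0]))      # -> 0.0 (one cell type only)
```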
Together, our findings provide support for the hypothesis that social stress broadly recapitulates the physiological effects of aging, at least at the level of gene expression. This pattern is thus not restricted to a small set of well-studied biomarkers, but instead appears to be a more general characteristic of cellular physiology. Our analysis also reveals that cell-type-biased gene expression and tissue heterogeneity are likely to hamper the detection of such shared signals, especially at the level of individual genes. Genomic approaches that incorporate controlled, cell-type-specific analyses of aging and chronic social stress effects in the same species (particularly in humans) should help further uncover the physiological changes that link social stress to aging, and thus social environmental effects to survival and longevity.
Fig. 1. The 'evenness score' measures the degree to which a gene is expressed at the same level across cell types, ranging from 0 (the gene is expressed in only one cell type) to 1 (the gene is equally expressed across the 5 PBMC cell types we considered). Genes in concordant and significantly correlated categories were significantly more evenly expressed than genes in concordant, uncorrelated categories (Kolmogorov-Smirnov (K-S) test, D = 0.061, P = 1.3 × 10−4), which were in turn significantly more evenly expressed than genes in discordant, uncorrelated categories (K-S test, D = 0.058, P = 1.2 × 10−2). The x-axis is plotted on a negative log scale.
Supporting Information
Additional Supporting Information may be found in the online version of this article at the publisher's web-site.
Fig. S1 Genes in concordant and significantly correlated co-enriched aging-related categories (solid line) are the most evenly expressed across tissues.
Table S1 The 472 genes that are significantly associated with older age and lower dominance rank in the same direction.
Data S1 Supplemental methods.
|
v3-fos-license
|
2022-06-16T15:28:38.054Z
|
2022-06-13T00:00:00.000
|
249689591
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/psp.2583",
"pdf_hash": "43c99d28ce56b669636e2c6ff849e799849893e8",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45941",
"s2fieldsofstudy": [
"Geology"
],
"sha1": "ddc6f501b85229cfb54fde8ae19a75a121277735",
"year": 2022
}
|
pes2o/s2orc
|
Driving forces of population change following the Canterbury Earthquake Sequence, New Zealand: A multiscale geographically weighted regression approach
The Canterbury Earthquake Sequence (CES), which includes the 2010 and 2011 Christchurch earthquakes, is one of the deadliest disasters in New Zealand history. Following the CES, displacement of the affected population occurred, leading to an out ‐ migration from affected areas and changes to places of residence. This paper investigates the spatial changes in population following the CES, using a multiscale geographically weighted regression (MGWR) analysis approach to examine if there is a relationship between population change within the Canterbury region, and potential driving forces across two time periods: 2006 – 2013 and 2013 – 2018. The findings of this study could assist in informing future decision making and planning for earthquake events and to increase the effectiveness of land use policy decisions for post ‐ disaster recovery in New Zealand.
| INTRODUCTION
The Canterbury Earthquake Sequence (CES), which includes the devastating 2010 and 2011 Christchurch earthquakes, is considered one of the deadliest and most disruptive disasters in New Zealand history. In September 2010, the Canterbury region of New Zealand was struck by a M7.1 earthquake 40 km west of the major city of Christchurch and 10 km deep, causing widespread moderate damage (Newell et al., 2012). This earthquake was followed by a M6.3 earthquake in February 2011, only 5 km deep and centred only 10 km away from the centre of Christchurch city (Ministry for Culture and Heritage, 2021). The 2011 Christchurch earthquake resulted in the deaths of 185 people and extensive damage to homes, properties and infrastructure, compounding existing damage from the 2010 earthquake (Stats NZ Tatauranga Aotearoa, 2011). Roughly half the housing stock (~100,000 houses) in Christchurch was damaged following the 2011 earthquake, including ~7,000 houses damaged beyond repair (Paton et al., 2014). In addition, over half the road network required replacing, and up to 5,000 businesses in the CBD were displaced (Paton et al., 2014). Damage to housing in Christchurch and in the adjacent Waimakariri region led the central government to establish the Canterbury Earthquake Recovery Authority (CERA) in 2011, which implemented a residential colour zone system for allocating resources for repair (Newell et al., 2012). Those residential properties with the most extensive and significant damage were designated 'red zone' areas and deemed uninhabitable, predominantly on flat land in areas susceptible to ongoing liquefaction, along rivers and coasts (such as the Avon River), and in the Port Hills where increased risk of cliff collapse and boulder roll exists (Saunders & Becker, 2015). Those properties with less extensive damage and lower ongoing risk in the greater Christchurch area were zoned green and divided into three technical categories, from least at risk (TC1) to most at risk (TC3) (Saunders & Becker, 2015). Those properties outside land damage mapping were zoned white.
Widespread damage to housing stock and the displacement of people from their homes led to a large out-migration and movement of people from Christchurch city, representing a 17.5% loss (~70,000) of the overall population of 400,000 before the 2011 earthquake (King & Gurtner, 2021;Love, 2011;Newell et al., 2012).
Many people chose to relocate to less damaged and less at-risk smaller urban areas in the wider Canterbury region, and to other parts of New Zealand (King & Gurtner, 2021; Newell et al., 2012; Parker & Steenkamp, 2012). Population growth was recorded in 2011 for the more rural Waimakariri and Selwyn districts adjacent to Christchurch (Greater Christchurch Partnership, 2016; Stats NZ Tatauranga Aotearoa, 2018), while a western redistribution of the population was recorded for greater Christchurch as people moved away from the significantly damaged eastern suburbs of Christchurch city (Stats NZ Tatauranga Aotearoa, 2018). Concern that unconstrained rural residential development could lead to increased urban sprawl and dispersed population and settlement patterns (Environment Canterbury Regional Council, 2021) led the central government to amend the Christchurch district plan and the Greater Christchurch Urban Development Strategy (UDS). In addition, the central government pushed through the statutory Land Use Recovery Plan (LURP) in 2013 under the emergency earthquake response policy to assign undeveloped greenfield land to future residential and commercial development (Rivera-Muñoz & Howden-Chapman, 2020). These changes were enabled through the establishment of CERA under the Canterbury Earthquake Recovery Act (CER Act) 2011, which granted the powers needed to manage the earthquake recovery for the Christchurch area. The LURP increased the availability of greenfield sites for suburban development, assisting the housing recovery but leading to rapid suburbanisation in the Greater Christchurch area and a slowing of redevelopment in Christchurch city. This was compounded by the red zoning and months-long closure of the Christchurch CBD following the 2011 earthquake, with many businesses choosing to relocate to other business hubs outside the CBD, drawing the workforce out of the city (King & Gurtner, 2021; Marsh Risk Management Research, 2014). The availability of single-use lower density housing in the outer suburbs and adjacent districts, still within commuting distance of the city, may also have drawn property owners and workers out of the inner suburbs (Kusumastuti & Nicholson, 2018). This study investigates local changes in population for the Canterbury region following the CES, and assesses the spatial scale at which potential driving forces of population change may have been operating. There is little research on the long-term effects of the CES on population movement within the wider Canterbury area and the potential driving forces which influence that movement. An overall assessment of the out-migration of people from Christchurch city following the CES has been carried out by official bodies (Stats NZ Tatauranga Aotearoa, 2011) and consulting firms (e.g., Love, 2011).
The driving housing and economic forces behind population change have been briefly noted in reports (e.g., Wood et al., 2016), and prior studies describing changes in Christchurch city and the wider Canterbury region following the CES have included variables such as change in house price (Bond & Dermisi, 2014), crime (King, 2016), the transportation network (Yonson et al., 2020), GDP-related indices (Yonson et al., 2020), and the effect of land use planning on demographic change (King & Gurtner, 2021). In the wider GWR literature, existing studies of population change and movement have used variables such as population density, ethnicity, urban/rural population, age, education, and employment (Li et al., 2016); distance to nearest metropolitan area, population growth, distance to coast, and environment-related variables such as average rainfall (Gutiérrez-Posada et al., 2017); and variables related to a specific event, such as the percentage of crop land under potatoes in the Irish Famine (Fotheringham et al., 2013).
MGWR can be used to investigate the potential drivers of population movement and change before, during and after the CES. MGWR can also be used to assess, in the longer term, whether these new earthquake-related drivers still influence population movement and people's choice of where to live in the wider Canterbury region, or whether these drivers have decreased in relevance as the recovery process has progressed. Many studies investigating the potential determinants of population change, including those under extreme situations such as disaster events, have used global regression analyses such as ordinary least squares (OLS) regression (e.g., Fussell et al., 2017). However, these global models do not account for the spatial context of the data, that is, that the relationship between variables may vary over space. This may produce inaccurate results at the local level, in particular where spatial heterogeneity in relationships is present. GWR (and MGWR) takes this spatial variation into account, allowing the analysis of local relationships between population change and its determinants. GWR and MGWR are increasingly used for understanding the processes that underpin changes in population (i.e., population dynamics), ranging from studies on population growth and forecasting (e.g., Chi & Wang, 2017; Gutiérrez-Posada et al., 2017) to understanding patterns of migration (Viñuela et al., 2019), specific case studies such as community resilience of locations in China following an earthquake (Li et al., 2016), and understanding the driving factors of historic changes in population (Fotheringham et al., 2013).
Recent advancements in GWR approaches have led to the development of MGWR (Fotheringham et al., 2017), which allows a different bandwidth for each covariate rather than a single bandwidth for all covariates as seen in standard GWR.We will employ MGWR to assess spatial heterogeneity in the population change for the Canterbury region following the CES.
The aim of this study is to investigate local changes in population for the Canterbury region following the CES, and how changes in space and time may be related to socioeconomic, demographic, land use and earthquake-related variables. Data obtained from New Zealand censuses, alongside data on land use and housing policy (LURP 2013; CER Act 2011) and earthquake damage, are used in MGWR modelling for two time periods covering before and after the CES: 2006-2013 and 2013-2018. The results of this study will assist in understanding the regional changes in population that occurred within the Canterbury area following a large disaster event, and can be used to assess the impact of land use and housing policy implemented following the CES and to inform future regional policies for disaster management, housing, land use and land development. To the best of our knowledge, this study is the first to use a MGWR approach to assess spatially changing relationships between population and potential driving forces for the Canterbury area over the long term (12+ years).
The rest of the paper is organised as follows. The next section discusses the data set and variables chosen for this study. A brief explanation of the GWR analysis technique (MGWR) follows, including model specifics for this case study. The results of the MGWR analysis for both time periods and significant parameter estimates are presented in Section 3. The paper concludes with a section discussing the main results and conclusions, considering the wider implications of the research.
| Data and study area
The study area is located on the east coast of New Zealand's South Island, encompassing all of the district of Christchurch, most of the adjacent districts of Selwyn and Waimakariri, and parts of the Hurunui and Ashburton districts in the Canterbury region (Figure 1), covering an area of about 9,613.47 km². The study area extent was determined using the 2013 census area units in which workers lived and went to work in Christchurch city (Stats NZ Tatauranga Aotearoa, 2020). This gives an indication of likely areas to which workers may choose to move while still being able to commute into the city for work. This area was chosen as it is expected to encompass most of the population affected by the CES.
The city of Christchurch and the greater Christchurch area is largely low-lying, flat land prone to liquefaction, with the exception of the higher elevation rocky cliffs of the Port Hills Peninsula (Figure 1).
The demographic and socioeconomic data used in this study were chosen from the 2006, 2013 and 2018 New Zealand population censuses (Stats NZ Tatauranga Aotearoa, 2020), selecting initial variables considered to be related to internal population movement within the wider Christchurch city extent and the Canterbury region (Table 1). While the New Zealand censuses usually occur every 5 years, the planned 2011 census was delayed to 2013 due to the CES.
This provides an opportunity to assess changes in population before and after the CES using officially collected data, and to use recent 2018 data to assess long-term changes to population for the region.
Additional variables were created using land use and land cover information, earthquake damage and topographical GIS layers, and LURP policy GIS layers (see Table 1 for further details).
Following an exploratory analysis of correlation and collinearity between these variables, 23 final variables were chosen for use in MGWR modelling, as further described in Table 2. The R function stepAIC (using forward selection) was applied for each time period to identify the best covariates for global OLS regression models. The sets of best covariates for each model were combined to create a single set of covariates used for both time periods in GWR modelling. This allowed comparison of GWR model results between time periods; however, as the same set of variables was needed for both periods, the usual step-wise variable selection procedure commonly seen in GWR modelling work (e.g., Comber et al., 2020) was not used here.
| Model specification of MGWR methodology
MGWR was adopted as the main methodology in this study. The GWR model can be defined as in Equation (1) (Fotheringham et al., 2017):

y_i = Σ_j β_j(u_i, v_i) x_ij + ε_i,   (1)

where (u_i, v_i) is the spatial location of the ith observation (ith SA1), and β_j is the jth coefficient at location (u_i, v_i). For a unit area SA1 i, y_i is the change in the proportion of total CURPop within a time period (calculated, e.g., for 2006-2013, as the proportion of total CURPop in 2013 minus that in 2006), x_ij is the jth covariate, and ε_i is the random error.
In Equation (1), local parameter estimates for each variable in the model at each SA1 i are calculated using weighted least squares, in matrix representation (Fotheringham et al., 2002), as in Equation (2):

β̂(u_i, v_i) = (X^T W(u_i, v_i) X)^(−1) X^T W(u_i, v_i) y,   (2)

where X is the matrix of variables used in the model, W(u_i, v_i) is a diagonal spatial weighting matrix that weights each observation based on its distance to location (u_i, v_i), and y is the vector of observations of the dependent variable, change in population. The jth diagonal element of W(u_i, v_i) indicates the weight of SA1 j with respect to SA1 i.
To calculate the weights matrix, a kernel function is applied to the distances (d) between observations, placing greater emphasis on observations closer in space. In this study, local GWR models are fitted for each SA1 i using a subset of SA1s weighted by an adaptive bi-square spatial kernel function over the nearest K neighbours, as in Equation (3):

w_ij = [1 − (d_ij/b)²]²  if d_ij < b, and w_ij = 0 otherwise,   (3)

where d_ij is the distance from the kernel centre between SA1s i and j, and b is the chosen bandwidth. The bandwidth, b, determines the optimal number of nearest neighbours to each SA1 used in the kernel function. The optimal bandwidth was chosen using the corrected Akaike information criterion (AICc; Fotheringham et al., 2002; following Akaike, 1973).
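A minimal sketch of Equations (2) and (3), assuming Euclidean distances between SA1 centroids and a fixed neighbour count K, is given below; the actual analysis additionally searches for the optimal bandwidth by AICc and, for MGWR, back-fits one bandwidth per covariate.

```python
import numpy as np

def bisquare_weights(d, b):
    """Adaptive bi-square kernel (Equation (3)): w = (1 - (d/b)^2)^2 for
    d < b, else 0, where b is the distance to the K-th nearest neighbour."""
    return np.where(d < b, (1.0 - (d / b) ** 2) ** 2, 0.0)

def local_gwr_estimate(X, y, coords, i, k_neighbours):
    """Weighted least squares estimate at location i (Equation (2)):
    beta_hat(u_i, v_i) = (X'WX)^-1 X'Wy. X should include an intercept
    column; this sketch omits the bandwidth search and MGWR back-fitting."""
    d = np.linalg.norm(coords - coords[i], axis=1)
    b = np.sort(d)[k_neighbours]          # adaptive bandwidth for location i
    W = np.diag(bisquare_weights(d, b))
    XtW = X.T @ W
    return np.linalg.solve(XtW @ X, XtW @ y)
```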
Unlike standard GWR, which uses a single bandwidth for all observations and relationships in the model, MGWR allows a separate bandwidth to be chosen for each X variable. While mixed GWR (MX-GWR), another GWR approach, allows for both global and local scale relationships, MX-GWR uses only a single local bandwidth for all local variables, assuming that all relationships which vary locally operate at the same local scale. MGWR addresses this by allocating a separate bandwidth to each relationship in the model, allowing for variation in the spatial scale of relationships (Fotheringham et al., 2017; Lu et al., 2017). An iterative back-fitting procedure is used for fitting MGWR models, rather than the weighted least squares used to fit standard GWR models. In this study, MGWR is preferred over standard GWR or MX-GWR, as it is expected that some variables (such as binary variables) will tend towards a global spatial scale, while other variables will tend towards local spatial scales, though the local scale may not be the same across all local variables.
The MGWR model can be defined as in Equation (4) (Fotheringham et al., 2017):

y_i = Σ_j β_bwj(u_i, v_i) x_ij + ε_i,   (4)

where (u_i, v_i) is the spatial location of the ith observation (ith SA1), and bwj indicates the bandwidth used to calibrate the jth spatial (conditional) relationship. Note that, as MGWR uses a different bandwidth for each relationship in the model, each relationship at the same location uses a different spatial weighting matrix.
To address the multiple hypothesis testing problem in GWR, once local parameter estimates are obtained from the model, the statistical significance of local parameter estimates is determined using the adjusted critical t-value, as defined in Byrne et al. (2009).Specifically, the adjusted t-values are calculated based on the function gwr.t.adjust in the R package GWmodel (Lu et al., 2014), using the Fotheringham-Byrne procedure.Adjusted t-values are used to assess the significance of the regression coefficients.
Using the variable names in Table 2, the final MGWR model (23 covariates) used in this study can be formulated as:

y_i = β_i0 + Σ_x β_ix X_ix + ε_i,

where y_i is the dependent variable (change in the total proportion of CURPop at SA1 location i), β_i0 is the intercept parameter at SA1 location i, β_ix is the parameter for the xth regression variable at SA1 location i, and ε_i is the random error associated with SA1 location i.
Two MGWR models were fitted, one for each time period: 2006-2013 and 2013-2018. The same covariates were included in the models for both time periods, in order for comparisons to be made between time periods. OLS regression models using the same data set and variables were additionally implemented for comparison purposes.
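For readers wishing to reproduce this kind of calibration, the Python mgwr package (Oshan et al., 2019, cited in this paper) exposes the multi-bandwidth search and back-fitting; the sketch below assumes hypothetical input files, and the call pattern should be checked against the installed package's documentation.

```python
import numpy as np
from mgwr.gwr import MGWR
from mgwr.sel_bw import Sel_BW

# coords: (n, 2) array of SA1 centroids; y: change in adjusted CURPop
# proportion, shape (n, 1); X: matrix of the 23 covariates, shape (n, 23),
# typically standardized first. All file names here are placeholders.
coords = np.load("sa1_centroids.npy")
y = np.load("pop_change_2006_2013.npy").reshape(-1, 1)
X = np.load("covariates.npy")

# One bandwidth per covariate (multi=True), selected by AICc as in the paper.
selector = Sel_BW(coords, y, X, multi=True)
selector.search()

results = MGWR(coords, y, X, selector).fit()
print(results.params.shape)  # (n, 24): intercept + 23 local coefficient surfaces
```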
| RESULTS
Coefficient estimates for the global regression models are presented in Table 3, with several covariates, including internet (positive significant for both models), medweeklyrent (positive significant for both), TC3bin (negative significant for both), redzonebin (negative significant for both), and diffagri (negative significant for both), being significant with the same sign in both models. The statistics of the local estimates for the MGWR models, together with the selected bandwidths for each covariate, are given in Table 4. Note that the bandwidths for the same variables in the two MGWR models can vary. These differences could lead to different coefficient estimates, and therefore different results for the two time periods, despite the same variables being used in the models for both periods (Comber et al., 2020; Fotheringham et al., 2017; Oshan et al., 2019). The choice of bandwidth (GWR) or bandwidths (MGWR) is considered the most important decision when fitting these models, as the bandwidth determines the number of observations, as well as their weights, used in each local regression (Comber et al., 2020). When the bandwidth is large and close to the total number of observations (e.g., 3,172 of the 3,174 SA1s for the variable over65), the estimated relationship is effectively global in scale, whereas smaller bandwidths indicate more locally varying relationships.
| LOCAL PARAMETER ESTIMATE RESULTS AND DISCUSSION
For those local variables with at least one significant local estimate in both time periods (as seen in Table 4), the variables of youth, over65, ownhome, medweeklyrent and medhouse were chosen for further description.Five of 24 maps of local parameter estimates are shown in Figure 4. Christchurch city (Kusumastuti & Nicholson, 2018).Those areas with greenfield land and new housing developments have seen an increase in the working-age population and those with young children, leaving Christchurch city as an ageing population (King & Gurtner, 2021).
Those over 50 were more likely to stay in Christchurch city, despite earthquake damage, whereas those more likely to relocate following the CES were aged 15-54 (Wood et al., 2016).This change in demographic also led to an increase in those commuting to the city for work from neighbouring regions, and a decrease of workers in the CBD of Christchurch city (King & Gurtner, 2021).The decrease in workers in the CBD may reflect the significant damage that occurred to buildings in the CBD, which was closed completely for months after the CES, and many businesses had to find alternative premises outside of the CBD permanently, or while repairs were being carried . However, current studies are largely descriptive in nature and the spatial scales at which those factors might have affected population change remain unknown.Spatial census data available from before and after the CES, including the most recent 2018 census, allows the use of local spatial analysis approaches such as multiscale geographically weighted regression (MGWR).MGWR enables the investigation of possible influencing factors on population changes in the Canterbury region, such as demographic and socioeconomic factors, land use policy, and earthquake-related factors.MGWR provides a novel approach to assessing changes in population and the effects of different factors on those changes.MGWR enables the assessment of the spatial scale at which possible driving forces may have an effect on population change.This methodology may allow an assessment of the key determinants of population movement within Christchurch city and the wider Canterbury region, as well as the potential impacts of land use and housing policy implemented by the government following the 2011 earthquake on population movement.Population movement and change are usually assessed and forecasted using socioeconomic and demographic variables such as age, ethnicity and median income, for example, but following the CES new drivers of population movement became relevant, in particular those drivers related to earthquake damage and recovery such as land stability, liquefaction potential, land use and housing-related variables.Prior studies describing changes in Christchurch city and the wider Canterbury region following the CES have included variables such as change in house price two main time periods, used to assess the change in population and potential drivers over the three main census years.The change in population was considered over two time periods: 2006-2013 and 2013-2018.Due to changes in the spatial census units between 2013 and 2018, the smallest level of aggregation for the 2018 census, known as Statistical Area 1 (SA1), was used for all three census years.In total, 3,174 SA1 units were used (n = 3,174).The Currently Resident Population count (CURPop) from the censuses was used as the population count measure.To account for the natural birth and death rate, the CURPop was adjusted by subtracting the total count of CURPop under 15 years of age and over 65 years of age. Figure 2 shows the dependent variable (change in proportion of total adjusted CURPop between time periods) for the SA1 units for the study area for both time periods.
Figure 2
Figure2shows the change in the proportion of total CURPop in each SA1 for both time periods, the inset showing the full study area extent.For SA1s which were most damaged in the central red zones, there is a negative change in the proportion of the total CURPop living in those areas, which is seen in both time periods (i.e. higher proportion of total CURPop in 2006 than in 2013).Overall, for the
F
I G U R E 2 Dependent variable, change in proportion of total CURPop for (a) model 1 time period 2006-2013, (b) model 2 time period 2013-2018.Inset map for both (a) and (b) shows the full study site extent.Before standardisation for MGWR.MGWR, multiscale geographically weighted regression.
Global OLS regression and standard GWR were also implemented for comparison purposes. OLS, GWR and MGWR were all implemented using the software MGWR 2.2 (Oshan et al., 2019), and results were visualised in ArcGIS Pro (Version 2.7.1, ESRI). The open-source software R (version 3.6.1) was used alongside ArcMap 10.7.1 (ESRI) for data management and processing. An MGWR modelling approach was used to investigate the spatial changes in population and potential determinants because of its advantages over traditional OLS regression and standard GWR. In comparison to global OLS regression, which assumes that relationships between variables in the regression model are the same everywhere (i.e. global), GWR takes into account the spatial context of the data, in that relationships between variables may vary over space (i.e. local; Fotheringham et al., 2002, 2013). GWR can therefore capture spatially varying relationships between changes in population and potential determinants. MGWR extends GWR by accounting for multiple scales; that is, each determinant (X) in the regression model may have a different spatial scale in its relationship with change in population (Y).

FIGURE 3 Location of technical categories and Land Use Recovery Plan priority residential greenfield areas.

An adaptive bandwidth approach was used, whereby the bandwidth determines how many nearest neighbouring observations (SA1s) are used in model calibration. A bi-square kernel function can be defined as in Equation (3):

    w_ij = [1 - (d_ij / b_i)^2]^2 if d_ij < b_i, and w_ij = 0 otherwise,    (3)

where w_ij is the weight given to observation j when calibrating the local regression at location i, d_ij is the distance between locations i and j, and b_i is the adaptive bandwidth distance (the distance from i to its bw-th nearest neighbouring SA1).
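As a concrete illustration of Equation (3), the bi-square weighting can be written in a few lines of Python; this is a generic sketch, not code from the study.

    import numpy as np

    def bisquare_weights(distances, bandwidth):
        """Bi-square kernel: w_ij = (1 - (d_ij/b)^2)^2 for d_ij < b, else 0.

        With an adaptive bandwidth, `bandwidth` is the distance from the
        calibration point to its bw-th nearest neighbouring SA1.
        """
        ratio = np.asarray(distances, dtype=float) / bandwidth
        return np.where(ratio < 1.0, (1.0 - ratio**2) ** 2, 0.0)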
RESULTS

Coefficient estimates for the global regression models are presented in Table 3. The R² and AICc values indicate that the model for 2013-2018 performed better than the model for 2006-2013, with covariates explaining about 21% (2006-2013) and 46% (2013-2018) of the variation in the dependent variable. The significance of covariates varies between the two models, with only the variables over65 (negative significant for both), internet (positive significant for both), medweeklyrent (positive significant for both), TC3bin (negative significant for both), redzonebin (negative significant for both) and diffagri (negative significant for both) being both significant and having the same sign in both models. For these variables, a negative estimate, as for over65, indicates that the change in the proportion of the total population and the change in the proportion of people over 65 vary in opposite directions. That is, if an SA1 had a greater change in the proportion of the total population between 2006 and 2013, it would have a lesser change in the proportion of people over 65 over the same time period (i.e. if an SA1 had an increase in the dependent variable over the time period, the proportion of people over 65 in that SA1 would decrease over the time period, and vice versa). As the sign of the estimate is the same between time periods, if an SA1 had a greater level of change in the proportion of the over-65 population during 2006-2013, it would also have a greater level of change in the proportion of the over-65 population during 2013-2018. The covariates level3, unemp, medhouse, whitezonebin and distgrn did not have a significant effect on the changes in population in either model.

The statistics of the local estimates for the MGWR models, and the selected bandwidths for each covariate, are given in Table 4. Note that the bandwidths for the same variables in the two MGWR models can vary. These differences could lead to different coefficient estimates, and therefore different results for the two time periods, despite the same variables being used in the models for both years (Comber et al., 2020; Fotheringham et al., 2017; Oshan et al., 2019). The choice of bandwidth (GWR) or bandwidths (MGWR) is considered the most important decision when fitting these models, as the bandwidth determines the number of observations, as well as their weights, used in each local regression (Comber et al., 2020). When a bandwidth is large and close to the total number of observations (for example, 3,172 for the variable over65 in the first model), this could indicate that the relationship between over65 and the change in population is global (the same relationship for the entire study area). Therefore, the bandwidth(s) chosen will directly affect the local coefficient estimates. In the context of this study, different bandwidths for the same variable in different models could indicate a temporal change in the relationship between population change and the variable under concern. For example, the bandwidth of the variable medhouse changed from 381 in the first model to 44 in the second model. This could indicate higher spatial variation in the impact of the median house price on the change in population in the second time period (a smaller bandwidth and a more local relationship).

LOCAL PARAMETER ESTIMATE RESULTS AND DISCUSSION

For those local variables with at least one significant local estimate in both time periods (as seen in Table 4), the variables youth, over65, ownhome, medweeklyrent and medhouse were chosen for further description. Five of the 24 maps of local parameter estimates are shown in Figure 4.

Figure 4a shows the local parameter estimates for the relationship between population change and the proportion of youth (CURPop < 15 years of age), for both time periods. The global parameter estimate is -0.077 (p < 0.05) for 2013-2018. There is a positive cluster in 2006-2013 for those SA1s located in the red zone, and negative clusters are seen in 2013-2018 for SA1s to the north-east, north-west and south of the city, and along the northern coastline. The location of these positive clusters in or near the red zone is to be expected, as following the CES and the establishment of CERA in 2011, all properties located in the red zone were designated to be demolished and all residents living in the red zone areas were required to shift elsewhere voluntarily (McDonagh, 2014). Any properties not bought by the government in these areas would have limited services and support from the council. This would have resulted in an overall decrease of population in these areas.

Figure 4b shows the local parameter estimates for the relationship between population change and the proportion of over65 (CURPop > 65 years of age), for both time periods. The global parameter estimate is -0.101 (p < 0.05) for 2006-2013, and -0.187 (p < 0.05) for 2013-2018. Only the time period of 2013-2018 exhibited a local-scale relationship. A significant negative cluster is seen to the west of Christchurch city, covering the smaller western township of Rolleston, located in the Selwyn district. Following the CES, many people chose to move out of the more earthquake-damaged areas in the city (including the inner suburbs and red zone) and move to the outer suburbs of Christchurch city and to larger towns in the surrounding districts. While some people chose to move back into the city in 2013-2018 and more recently, policies implemented following the CES, such as the LURP (2013), opened up greenfield land in the outer suburbs of Christchurch city that had been assigned by the UDS before the CES for long-term growth (Greater Christchurch Partnership, 2019). While this supported the recovery process, the LURP and subsequent housing developments on greenfield land encouraged urban sprawl and a rapid advancement of suburbanisation in Greater Christchurch (Kusumastuti & Nicholson, 2018). Those areas with greenfield land and new housing developments have seen an increase in the working-age population and those with young children, leaving Christchurch city with an ageing population (King & Gurtner, 2021). Those over 50 were more likely to stay in Christchurch city, despite earthquake damage, whereas those more likely to relocate following the CES were aged 15-54 (Wood et al., 2016). This change in demographics also led to an increase in those commuting to the city for work from neighbouring regions, and a decrease of workers in the CBD of Christchurch city (King & Gurtner, 2021). The decrease in workers in the CBD may reflect the significant damage that occurred to buildings in the CBD, which was closed completely for months after the CES; many businesses had to find alternative premises outside of the CBD, permanently or while repairs were being carried out. From an insurance perspective, many landlords and affected properties in the CBD would have had a loss of access to their premises, and denial-of-access claims would have accumulated (Marsh Risk Management Service, 2014). As a result of the CBD closure in Christchurch, most policies in New Zealand have extended the waiting periods before denial-of-access claims can be made. In the CBD, many landlords may have taken a negotiated settlement for property damage and denial-of-access rent claims and invested in alternative properties outside of the CBD (Marsh Risk Management Service, 2014). Therefore, both business premises and residential properties in the CBD may have decreased in number following the CES, with money from claims invested in buildings and properties outside of the CBD.

The level of earthquake damage following the CES had a significant effect on housing-related variables in the greater Christchurch area, including the level of home ownership, the median weekly rent ($) and the median house sale price ($). Figure 4c-e shows the local parameter estimates for the relationships between population change and, respectively, the proportion of CURPop (over 15 years of age) who own or partly own their own home (home ownership), the change in median weekly rent ($) and the change in median house price ($), for both time periods. For home ownership (ownhome), this relationship can also be interpreted in terms of the proportion of people who are not renting the home they live in. The global parameter estimate for this relationship is -0.039 (p < 0.05) for 2006-2013, but is not significant for 2013-2018. Local parameter estimates for this relationship have a significant negative cluster in the centre of the Christchurch city area in 2006-2013, covering the CBD and parts of the eastern suburbs and red zone. In 2013-2018, the significant negative cluster moves to the south-west of the Christchurch city CBD, and a positive cluster appears on the south-western boundary of the district. The negative cluster in 2006-2013 includes those houses in the red zone area and those apartments in the CBD which may also have been affected by earthquake damage. This is to be expected, as in these areas many homes were rented or used for social housing, especially in the eastern suburbs (McDonagh, 2014). For those privately owned properties located in the red zone, the government offered a voluntary buy-out, using property valuations from 2007 (McDonagh, 2014). Many of the homes affected in the red zone were also government and local authority social housing or affordable privately owned properties, and these houses would likely be replaced by new homes in a different area within Christchurch with higher house prices and rent costs (McDonagh, 2014). In the CBD, many landlords and property owners may have chosen to take a negotiated settlement for claims and re-invest in alternative properties outside of the CBD (Marsh Risk Management Service, 2014), thereby decreasing the availability of rental apartments or higher-density housing in the CBD. For the change in median weekly rent (medweeklyrent), the global parameter estimate is 0.117 (p < 0.05) for 2006-2013 and 0.127 (p < 0.05) for 2013-2018. A positive cluster of local parameter estimates is seen in 2006-2013 in the CBD and in the eastern red zone areas near the coastline, where significant earthquake damage occurred. From 2013-2018, positive clusters are seen in and around Kaiapoi, including the northern coastal red zone area, and to the south-west of the city in the Selwyn district. For the change in the median house price (medhouse), the global parameter estimates for both time periods are not significant. Local parameter estimates have a number of significant positive clusters in both time periods: in 2006-2013 to the north of Christchurch city near the northern red zones near Kaiapoi, and in 2013-2018 to the west of Christchurch city and on the northern edges of the city. Negative clusters are also seen in 2006-2013 in the CBD of the city.

FIGURE 4 Significant local estimates for (a) youth, (b) over65, (c) ownhome (home ownership), (d) medweeklyrent (median weekly rent) and (e) medhouse (median house price).

TABLE 2 Variables used in MGWR modelling and description.

TABLE 3 Global regression parameter estimates and goodness-of-fit measures for time periods 2006-2013 and 2013-2018.

TABLE 4 Final model local parameter estimates (range, median, SD and bandwidth). Bandwidths chosen from initial MGWR model runs (global bandwidths in bold). Note that different bandwidths can be chosen for the same variables in different MGWR models.
Deep neural networks for A-line-based plaque classification in coronary intravascular optical coherence tomography images
Abstract. We develop neural-network-based methods for classifying plaque types in clinical intravascular optical coherence tomography (IVOCT) images of coronary arteries. A single IVOCT pullback can consist of >500 microscopic-resolution images, creating both a challenge for physician interpretation during an interventional procedure and an opportunity for automated analysis. In the proposed method, we classify each A-line, a datum element that better captures physics and pathophysiology than a voxel, as a fibrous layer followed by calcification (fibrocalcific), a fibrous layer followed by a lipidous deposit (fibrolipidic), or other. For A-line classification, the usefulness of a convolutional neural network (CNN) is compared with that of a fully connected artificial neural network (ANN). A total of 4469 image frames across 48 pullbacks that are manually labeled using consensus labeling from two experts are used for training, evaluation, and testing. A 10-fold cross-validation using held-out pullbacks is applied to assess classifier performance. Noisy A-line classifications are cleaned by applying a conditional random field (CRF) and morphological processing to pullbacks in the en-face view. With CNN (ANN) approaches, we achieve an accuracy of 77.7%±4.1% (79.4%±2.9%) for fibrocalcific, 86.5%±2.3% (83.4%±2.6%) for fibrolipidic, and 85.3%±2.5% (82.4%±2.2%) for other, across all folds following CRF noise cleaning. The results without CRF cleaning are typically reduced by 10% to 15%. The enhanced performance of the CNN was likely due to spatial invariance of the convolution operation over the input A-line. The predicted en-face classification maps of entire pullbacks agree favorably to the annotated counterparts. In some instances, small error regions are actually hard to call when re-examined by human experts. Even in worst-case pullbacks, it can be argued that the results will not negatively impact usage by physicians, as there is a preponderance of correct calls.
Introduction
Cardiovascular disease is the leading cause of death globally, accounting for more than 15% of all deaths in 2015. Coronary atherosclerosis is the process of plaque buildup in the coronary arteries. To relieve narrowing of an obstructed coronary artery, physicians often perform percutaneous coronary interventions (PCIs), which involve revascularization procedures, such as balloon angioplasty and stent treatment. Although x-ray angiography is commonly used to guide such interventions, this imaging technique can only indicate luminal narrowing due to the presence of calcium deposits but does not render any further information about the vessel wall. Nonetheless, intravascular imaging techniques can aid cardiologists in treatment planning for the majority of PCI cases. To aid the physician in such a scenario, we developed a coronary plaque classification system based on intravascular optical coherence tomography (IVOCT) images.
IVOCT is a high-contrast, high-resolution imaging technique that can be used to characterize various atherosclerotic plaque types and guide stent placement. 1 As compared with intravascular ultrasound, IVOCT has higher resolution, improved imaging through a calcification, and better visual discrimination between fibrous and lipid tissues. 2 In addition, IVOCT is currently the only imaging technique that allows identification of vulnerable thin-cap fibroatheromas, which have been identified as the most susceptible to rupture. 3 IVOCT is also useful for the planning of stent interventions in the presence of significant calcium or lipid deposits. 4 Despite these obvious advantages of IVOCT for treatment planning, physician enthusiasm has been tempered by the need for specialized training to interpret IVOCT images and the overload of image data generated from a single pullback, which often results in >500 images from a single 2- to 5-s scan.
Buoyed by the success and interest in human expert evaluation, research on the machine identification of plaque types has achieved considerable success with the development of both voxel-and A-line-based classification techniques. For example, Ughi et al. 5 developed a voxel-based classification scheme that combined geometric and textural features along with a sliding window approach to estimate the attenuation coefficient value of each voxel directly from IVOCT images. This method has achieved an overall classification accuracy of 81.5%. Athanasiou et al. 6 used a K-means algorithm to obtain an initial clustering of pixels within an image and then extracted various textural and intensity features to classify individual clusters into one of four plaque types, namely, fibrous tissue, lipid tissue, calcium, and mixed tissue. They report a computation time of 40 s per image frame, prohibiting real-time use. Rico-Jimenez et al. 7 devised an A-line-based classification technique by modeling an A-line as a linear combination of a number of depth profiles with the use of morphological features to perform the classification. This approach is useful for plaque classification because physicians are primarily interested in determining the location and length of the stent for a particular case. We believe that the A-line-based classification technique provides sufficient information for the clinician to make both of these decisions simultaneously with ease. However, the method described in this prior report was applied only to fibrotic and lipid plaques without consideration of calcium plaques. In addition to fully automated methods, such as those described already, semiautomated approaches for calcified plaque segmentation have also been developed. For example, Wang et al. 8 described a semiautomatic algorithm that requires user input of start and stop frames within the pullback. Again, A-line-based classification would allow seamless integration into such semiautomatic segmentation methods.
A recent survey conducted by Boi et al. 9 described the large potential of leveraging deep learning techniques for atherosclerotic plaque characterization and subsequent risk stratification using IVOCT. The use of deep neural networks was recently reported by Abdolmanafi et al. 10 for the identification of layers within the coronary artery wall. Although this was the first study to use deep learning in the context of tissue characterization in IVOCT, the analysis was limited to the detection of the coronary artery layers without plaque classification.
In this study, a learning system [convolutional neural network (CNN) and fully connected artificial neural network (ANN)] was applied for the classification of coronary plaques. Rather than using voxel-based classification, A-lines were used as the fundamental unit because the many attributes within an A-line (e.g., sharp transitions at the edge of a calcification and the large signal decay with depth in a lipid region) will likely contribute to the classification of coronary plaques. Because clinical interest is expected to extend to larger regions and classification can be noisy, a fully connected conditional random field (CRF) method 11 was employed to standardize classification over larger regions. The proposed system was trained and tested on a carefully labeled dataset from 48 IVOCT pullbacks, containing nearly 4500 images and more than two million A-lines.
Image Analysis Methods
Image processing and learning techniques, either an ANN or deep CNN, were applied to classify A-lines in IVOCT images as fibrocalcific, fibrolipidic, or other. The naming convention that we will use for the rest of this paper is as follows: fibrocalcific A-line refers to an A-line with a fibrous layer followed by calcification, and fibrolipidic A-line refers to a fibrous layer followed by a lipidous deposit. The algorithm can be broken down into three main steps: (1) preprocessing, which includes lumen boundary detection, alignment of tissues via pixel shifting, and noise reduction; (2) deep neural network for classification of individual A-lines; and (3) classification noise cleaning using a CRF and morphological processing in the en-face (θ; z) classification view.
Preprocessing
Preprocessing steps were applied to raw IVOCT images obtained in the polar (r; θ) domain. First, the lumen boundary was located on the image using a dynamic programming approach previously developed by our group. 8 Briefly, the edges along the radial direction were identified by filtering the image, and then dynamic programming was used to identify the contour with the greatest cumulative edge strength along the angular direction θ as the lumen boundary. Second, the position of the guidewire shadow was located using a previously described method, 12 and the A-line values within the guidewire region were set to zero. Third, A-lines were pixel shifted along the radial direction so that the first pixel in each row corresponded to the first pixel after the lumen boundary on the original image. This step was added to properly align tissues in images acquired with an eccentrically located catheter. Fourth, only the first 200 pixels (∼1 mm) of the vessel wall for each A-line were used, and the others were cropped, because IVOCT has limited penetration into tissue. Fifth, a log transformation was applied to convert multiplicative speckle into an additive form. Sixth, speckle was reduced by filtering with a Gaussian kernel with a size of (7, 7) and standard deviation of 1. The baseline subtraction and roll-off correction step as described in Ref. 13 was not performed because the goal of this method was to classify A-lines rather than estimate the attenuation coefficients of tissues.
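The pixel-shifting, cropping, log transformation and smoothing steps can be expressed compactly. The sketch below is illustrative only (hypothetical variable names; lumen detection and guidewire masking are assumed to have been done upstream), with scipy's Gaussian filter standing in for the (7, 7) kernel described above.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def preprocess_polar_frame(frame, lumen_idx, depth=200):
        """Align, crop and denoise one polar-domain (r, theta) IVOCT frame.

        frame:     (n_alines, n_r) raw intensities, one row per A-line
        lumen_idx: (n_alines,) radial index of the lumen boundary per A-line
        """
        n_alines = frame.shape[0]
        shifted = np.zeros((n_alines, depth), dtype=np.float64)
        for i in range(n_alines):
            segment = frame[i, lumen_idx[i]:lumen_idx[i] + depth]
            shifted[i, :segment.size] = segment      # pixel shift + ~1 mm crop
        log_img = np.log(shifted + 1.0)              # multiplicative speckle -> additive
        # sigma=1 with truncate=3 yields a 7x7 kernel, matching the text
        return gaussian_filter(log_img, sigma=1, truncate=3.0)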
Convolutional Neural Network and Artificial Neural Network Systems
The deep CNN used individual A-lines as input and output probability values for each class. The architecture contained seven layers, as shown in Fig. 1. Layer 1 padded the front and back edges of the input A-line (along r) by 5 pixels by replicating the first and last pixel. Such padding allows for the subsequent convolution operation to generate a feature map of the same size as the input A-line, which is important because the fibrous caps of a vulnerable plaque can be thin (<65 μm or 13 pixels). In layer 2, a convolutional layer was employed that learns 32 filters, each with a length of 11 pixels. Layer 3 performs a maximum pooling operation with a pool size of 2 pixels and stride of 2 pixels. Layer 4 uses another convolutional layer that learns 64 filters, each with a length of 9 pixels. Layer 5 consisted of another maximum pooling operation with a pool size of 2 pixels and stride of 2 pixels. Layer 6 consists of a fully connected network of 100 units with the rectified linear unit as the nonlinear activation function. Layer 7 is a fully connected layer of three units with a softmax activation function to generate probability values for each class that sum to 1. In addition, a fully connected ANN was applied, and its performance was compared with that of the CNN. The ANN takes each individual pixel in an A-line as input. Two hidden layers of 100 and 50 units were connected to a fully connected output layer of three units corresponding to the three classes of interest. The rectified linear unit function is used as the activation function for both hidden layers. The final output layer uses softmax activation. The architecture of the ANN is described in Table 1. As described in Sec. 3.2, some modifications were made to these basic networks and standard methods were utilized to aid in training.
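The seven-layer CNN described above maps directly onto the Keras functional API mentioned in Sec. 3. The sketch below follows the stated layer sizes; the replication padding is implemented with a Lambda layer, and the ReLU activations on the convolutional layers are an assumption, as the text specifies ReLU explicitly only for layer 6.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    def replicate_pad(x, pad=5):
        # Layer 1: replicate the first and last pixel of each A-line `pad` times,
        # so the following valid convolution returns a map the same size as the input.
        left = tf.repeat(x[:, :1, :], pad, axis=1)
        right = tf.repeat(x[:, -1:, :], pad, axis=1)
        return tf.concat([left, x, right], axis=1)

    inputs = layers.Input(shape=(200, 1))               # one A-line, 200 pixels
    x = layers.Lambda(replicate_pad)(inputs)            # layer 1: edge replication
    x = layers.Conv1D(32, 11, activation='relu')(x)     # layer 2: 32 filters, length 11
    x = layers.MaxPooling1D(pool_size=2, strides=2)(x)  # layer 3
    x = layers.Conv1D(64, 9, activation='relu')(x)      # layer 4: 64 filters, length 9
    x = layers.MaxPooling1D(pool_size=2, strides=2)(x)  # layer 5
    x = layers.Flatten()(x)
    x = layers.Dense(100, activation='relu')(x)         # layer 6
    outputs = layers.Dense(3, activation='softmax')(x)  # layer 7: class probabilities
    cnn = models.Model(inputs, outputs)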
To better understand the most important features for classification, saliency maps were created using the guided backpropagation method described by Springenberg et al. 14 to identify the pixels in an A-line that were most responsible for the output for a specific class. The reconstructed saliency map is thus both class and image specific. Briefly, the method computes a forward pass of the image (A-line in this case) through the trained classification network and then performs a backward pass, that is, computes the gradient of the class activation with respect to the input image. A large magnitude of the gradient indicates that a small change to such pixels would have a large impact on the class activation value and, therefore, the network prediction. Maps were created for individual A-lines and grouped across all A-lines in an image to create a visualization over a full two-dimensional (2-D) IVOCT image.
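A full guided-backpropagation implementation requires modifying the gradient of every ReLU, so the sketch below uses plain gradient saliency as a simplified stand-in: it captures the same core idea, the gradient of a class activation with respect to the input A-line.

    import tensorflow as tf

    def saliency_for_class(model, aline, class_idx):
        """Vanilla gradient saliency for one A-line of shape (200, 1)."""
        x = tf.convert_to_tensor(aline[None, ...])      # add a batch dimension
        with tf.GradientTape() as tape:
            tape.watch(x)
            score = model(x)[0, class_idx]              # activation for the chosen class
        # Large-magnitude gradients mark pixels whose change most affects the call.
        return tf.abs(tape.gradient(score, x))[0]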
Classification Noise Cleaning
Because individual A-line classification results are noisy when viewed across a pullback, cleaning of classification noise was employed as a postprocessing step. A method to integrate network outputs to a fully connected CRF is described in Ref. 15. Here, a set of classified A-lines across consecutive frames within an IVOCT pullback is defined as a lesion segment. For each lesion segment, an en-face 2-D "image" of classification results in (θ; z) was constructed where each pixel contains the vector of class probabilities for the corresponding A-line. The task of the CRF is to reduce noise by generating a new labeling that favors assigning the same label to pixels that are closer to each other spatially (both in θ and z) using the probability estimates generated by the neural network.
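Before the energy terms are formalised below, it may help to see how such a smoothness-only dense CRF is typically configured in code. One widely used implementation is the pydensecrf package, which wraps the efficient mean-field inference of Ref. 11; the sketch uses the kernel size and iteration count quoted later for Fig. 7, and everything else is a hypothetical stand-in.

    import numpy as np
    import pydensecrf.densecrf as dcrf
    from pydensecrf.utils import unary_from_softmax

    # probs: (3, n_frames, n_alines) class probabilities over the en-face view
    n_labels, height, width = probs.shape
    crf = dcrf.DenseCRF2D(width, height, n_labels)
    crf.setUnaryEnergy(np.ascontiguousarray(unary_from_softmax(probs)))
    # Smoothness (Gaussian) kernel only; the appearance kernel weight w1 is zero.
    crf.addPairwiseGaussian(sxy=(19, 5), compat=1.0)  # compat plays the role of w2
    Q = crf.inference(5)                              # n = 5 mean-field iterations
    labels = np.argmax(Q, axis=0).reshape(height, width)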
A CRF is an undirected graphical model that encodes a conditional distribution over the target variable Y given a set of the observed variable X. This method maximizes the distribution P(Y|X), which is expressed as a Gibbs distribution over a random field. The fully connected CRF described in Ref. 11 computes the maximum a posteriori label by minimizing the energy function

    E(l) = Σ_i θ_i(l_i) + Σ_{i<j} θ_{i,j}(l_i, l_j),

where l is a particular label assignment for all pixels in the image; θ_i(l_i) = -log P(l_i) is the unary potential, where P(l_i) is the probability estimate of label l at pixel i computed by the neural network; and θ_{i,j}(l_i, l_j) is the pairwise edge potential that connects all pixel pairs i, j in the image and is defined as a linear combination of Gaussian kernels:

    θ_{i,j}(l_i, l_j) = μ(l_i, l_j) [ w_1 exp(-||p_i - p_j||^2/(2σ_α^2) - ||I_i - I_j||^2/(2σ_β^2)) + w_2 exp(-||p_i - p_j||^2/(2σ_γ^2)) ],

where the label compatibility function μ(l_i, l_j) = 1 if l_i ≠ l_j and zero otherwise; p_i and p_j refer to the spatial positions of pixels i and j, respectively; I_i and I_j indicate the intensity vectors of pixels i and j, respectively; w_1 and w_2 are the weights of the appearance and smoothness terms, respectively; and σ_α, σ_β, and σ_γ control the degree of interaction in either the spatial or intensity dimensions. Because pixels in the en-face image do not have a specific intensity value, this pairwise potential was modified by dropping the appearance kernel term (w_1 is set to zero), which leaves the smoothness kernel, yielding the following pairwise potential term:

    θ_{i,j}(l_i, l_j) = μ(l_i, l_j) w_2 exp(-||p_i - p_j||^2/(2σ_γ^2)),

where ||p_i - p_j|| is the spatial distance between pixels i and j, and σ_γ controls the size of the smoothness kernel. A mean field approximation was used for inference, minimizing the Kullback-Leibler divergence between P(Y|X) and a fully factorable distribution Q. The message passing step within the iterative update scheme can be expressed as Gaussian filtering, rendering the algorithm computationally efficient. Three free parameters are left for the CRF: the size of the smoothness kernel, σ_γ; the weight of the smoothness kernel, w_2; and the number of iterations, n. Overall, for each pixel in the en-face A-line classification view, the CRF takes in probability estimates of each class as input and outputs its final class ownership. Finally, three iterations of an area opening operation with an area threshold of 10 pixels are performed serially on the en-face view images. Each iteration considers one of the three classes as the background class and the remaining two as the foreground class. This step closes small holes within fibrocalcific and fibrolipidic chunks and removes small islands containing these plaques. Ground truth annotations were obtained by consensus of two expert IVOCT readers who were trained in the Cardiovascular Imaging Core Lab of the Harrington Heart and Vascular Institute (Cleveland, Ohio), a laboratory that has conducted numerous studies requiring expert reading of IVOCT images. The definitions in the consensus document were used to determine the ground truth labels, as described in Ref. 1. For example, a fibrocalcific plaque appears as a highly backscattering and relatively homogeneous region (fibrous tissue) followed by a signal-poor region with sharply delineated front and/or back borders (calcium) on IVOCT images.
A fibrolipidic region was defined as fibrous tissue followed by fast signal drop-off with diffuse borders corresponding to the presence of a necrotic core or lipid pool. The additional class "other" was used to include all A-lines that did not meet the criteria of the former two categories. Some example annotations are shown in Fig. 2.
Network Training and Testing
A 10-fold cross-validation procedure was used to measure classifier performance. Of 48 annotated pullbacks, 38 were randomly selected for training, 5 for validation, and 5 for testing in each iteration. The last iteration consisted of 40 pullbacks for training, 5 for validation, and 3 for testing. In this manner, each pullback was assigned into the test (leave-out) set exactly once. Mean and standard error of classification accuracy over the 10 iterations are reported.
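A convenient way to implement pullback-held-out splits (not necessarily the authors' exact bookkeeping) is to group A-lines by pullback identifier, for example with scikit-learn:

    from sklearn.model_selection import GroupKFold

    # X: A-line matrix; y: labels; pullback_ids: one pullback ID per A-line.
    # Holding out whole pullbacks avoids the optimistic bias that comes from
    # correlated frames of the same lesion landing in both train and test sets.
    gkf = GroupKFold(n_splits=10)
    for train_idx, test_idx in gkf.split(X, y, groups=pullback_ids):
        ...  # split train_idx further into training and validation pullbacks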
We used the categorical cross-entropy function as the loss function that was minimized during network training. For a given example, categorical cross entropy was evaluated as L ¼ − P i∈C y i logðŷ i Þ, where y is a one hot vector representation of the ground truth labels andŷ is the vector of probabilities computed by the neural network over C different classes. A class weighting scheme was employed during network training to account for class imbalance. Class weights were computed as the inverse of the class proportions in the training set and were used to weight the loss function. The weights are usually around 4, 4, and 1 for the fibrocalcific, fibrolipidic, and other classes, respectively. Network optimization was performed using the Adam optimizer 17 with a learning rate of 1 × 10 −4 . In addition, because deep networks tend to overfit when trained for a large number of epochs, a validation set was used. Training was stopped when the loss of the validation dataset did not improve by more than 0.01% for 5 consecutive epochs or when the network was trained for 100 epochs, whichever occurred first. The model with the least validation loss during training was used to make predictions on the test set.
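In Keras terms, the loss, class weighting, optimizer and stopping rule described above might be wired up as follows; the batch size, checkpoint file name, and the exact min_delta corresponding to "0.01%" are assumptions.

    import numpy as np
    from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
    from tensorflow.keras.optimizers import Adam

    # Inverse class proportions (the paper reports weights of roughly 4, 4 and 1).
    counts = np.bincount(y_train_labels, minlength=3)
    class_weight = {c: counts.sum() / counts[c] for c in range(3)}

    cnn.compile(optimizer=Adam(learning_rate=1e-4),
                loss='categorical_crossentropy', metrics=['accuracy'])
    callbacks = [
        EarlyStopping(monitor='val_loss', min_delta=1e-4, patience=5),
        ModelCheckpoint('best_model.h5', monitor='val_loss', save_best_only=True),
    ]
    cnn.fit(x_train, y_train_onehot, epochs=100, batch_size=256,
            validation_data=(x_val, y_val_onehot),
            class_weight=class_weight, callbacks=callbacks)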
Preprocessing steps were performed using MATLAB ® R2016a software (MathWorks, Natick, Massachusetts). The Keras functional application programming interface with the TensorFlow machine learning framework as backend was used to implement, train, and test the neural networks with the given dataset. Network training was performed using two NVIDIA Tesla P100 graphics processing unit (GPU) cards.
The neural network architectures described in Sec. 2.2, namely, CNN_b and ANN_b, were used as baseline architectures. Changes in classifier performance for no data standardization, sample-wise standardization, and feature-wise standardization were analyzed using the baseline architectures. The sample- and feature-wise standardization methods involved subtraction of the mean followed by division by the standard deviation, calculated either per sample or per feature, respectively. Additionally, we modified network parameters, such as the kernel size in the convolutional layer of the CNN and the number of hidden units in the hidden layers of the ANN, to assess the impact on classifier performance. We finally used the best performing neural network in both cases and applied CRF postprocessing. Parameters for the CRF algorithm included the size of the smoothness kernel, σ_γ; the weight of the smoothness kernel, w_2; and the number of iterations, n, which were optimized in an ad hoc fashion.
Results
The steps in the preprocessing procedure of a representative IVOCT frame are described in Fig. 3. All images are shown after log e compression for improved visualization. Pixel shifting successfully aligned the subsurface tissue regions, which improved the subsequent noise reduction filtering, as it reduced filtering across edges of tissue regions.
Next, the role of neural network processing parameters on classifier performance was investigated using the baseline classifiers ANN_b and CNN_b, with the following three data standardization schemes: no data standardization, sample-wise standardization, and feature-wise standardization (Fig. 4). Although there were no large effects, feature-wise standardization worked best for the ANN, whereas eliminating the standardization step was equivalent to feature-wise standardization for the CNN. The CNN also tended to have higher classification accuracy than the ANN for any class and any data standardization method. To determine the potential sensitivity of the network design, experiments were performed with varying network parameters for the ANN and CNN (Figs. 5 and 6, respectively). In the case of the ANN, the number of hidden units in the two hidden layers, hidden_1 and hidden_2, was modified. We also experimented with the addition of another hidden layer, hidden_3. There was no consistent trend, and the results of ANN_b were reasonable. In the case of the CNN, the baseline kernel size of the first convolutional layer of 11 pixels was increased and decreased to determine the effect on classification accuracy. There was no consistent trend, and the results of CNN_b were reasonable. In general, changes in network design had relatively little effect on class-wise accuracy. Hence, the baseline networks ANN_b and CNN_b were used for all subsequent processing. It was highly desirable to clean the A-line classifications from both networks (Fig. 7). Following noise cleaning, the classification results compared favorably with the annotated labels. We optimized CRF parameters (see figure legend) in an ad hoc fashion. In general, the CRF results were not very sensitive to parameter optimization. Importantly, despite some remaining errors following noise cleaning, this 7.6-mm vessel segment was clearly dominated by a fibrolipidic lesion, which would be of interest to the clinician. Similar results were obtained using ANN_b (not shown).
Confusion matrices for both networks with and without noise cleaning are reported in Tables 2 and 3, respectively. The noise cleaning step improved the class-wise classification accuracy by 10% to 15%. For the fibrocalcific, fibrolipidic, and other classes, the CNN had comparable or better sensitivity and specificity than the ANN (0.80 versus 0.78, 0.85 versus 0.85, and 0.84 versus 0.81, respectively, and 0.95 versus 0.93, 0.92 versus 0.91, and 0.92 versus 0.92, respectively). The types of errors are important. For example, a lipidous lesion should not be misinterpreted as a calcification because it would be undesirable to perform a procedure to modify a calcification (e.g., atherectomy) on a lipidous lesion. As compared with ANN, CNN reduced this type of error by 40%.
Overall, we found that the CNN performs statistically significantly better than the ANN for this task. A two-tailed paired t-test was conducted with the null hypothesis that the mean error rates of the two learning algorithms are equal. A p-value of 0.00027 was obtained, allowing us to reject the null hypothesis. This test was conducted on the classification results prior to noise cleaning by the CRF. We also found that the ANN had a higher error rate than the CNN across all folds.

Fig. 4 caption: Results are from 10-fold cross-validation without classification noise cleaning. In the case of the ANN, feature-wise standardization gives somewhat better results than other methods. For the CNN, not performing a data standardization step was just as good as performing feature-wise standardization. See text for details on standardization methods.
As shown by the classification results in Fig. 8, both the ANN and CNN perform well, but close examination showed that the CNN agreed more favorably with the annotated labels. The large calcification [Figs. 8(a) and 8(d)] demonstrates the visual characteristics described in the consensus document, 1 that is, the calcium plaque has sharp front and back edges and appears as a signal-poor region. Similarly, the fibrolipidic region [Figs. 8(b) and 8(e)] has clear characteristics of lipids, that is, diffuse borders with high absorption. The results of the CNN classifier on a few erroneous frames are shown in Fig. 9. Class saliency maps show the regions most discriminative for lipid and calcium (Fig. 10). The characteristic edges of the calcification are highlighted in Fig. 10(b), and the diffuse signal decay due to lipids is shown in Fig. 10(e).
Discussion
Although overall A-line classification accuracy is about 82%, the learning methods applied to IVOCT could lead to clinically useful results. During an intervention, a cardiologist is interested in plaque deposits much larger than single A-lines. For example, the interventional cardiologist must identify large circumferential calcifications that might hamper stent deployment so that she can apply an appropriate plaque modification strategy. Alternatively, the presence of a large lipidous plaque requires the use of a longer stent that does not end in a lipidous lesion. Neither of these scenarios requires high-resolution accuracy, motivating us to use CRF noise cleaning on the neural network results. Typically, results similar to those in Fig. 7 are obtained, where there is no ambiguity in a fibrolipidic segment even though the A-line accuracy of this segment was 87.31%. Processing time suggests that live-time clinical usage is possible. Computation time per frame was 0.3 s (preprocessing), 0.02 s (classification), and 0.002 s (classification noise cleaning). Thus, the total processing time was <1 s using a standard laptop with a GPU (HP Pavilion 15t, NVIDIA GTX 950M). Therefore, the overall computation for a 500-frame pullback would be completed within a few seconds with the use of a more powerful computer system.

Fig. 6 caption: Sensitivity of classification accuracy to CNN design. The baseline kernel size of the first convolutional layer (11 pixels) was increased and decreased to determine the effect on classification accuracy. The results shown are prior to noise cleaning. There is no consistent trend, and CNN_b gives good results.

Fig. 7 caption: Shown are ground truth labels (top), probability maps for each class from the CNN_b classifier (middle), and output of dense CRF processing (bottom). The number of pixels along θ is determined by the number of A-lines collected by the system in one complete rotation of the light probe. The size of the smoothness kernel used was (19, 5) in (θ; z), and the number of iterations n was set to 5. The color code is red (fibrocalcific), green (fibrolipidic), blue (other), and black (guidewire shadow). We used 55 consecutive annotated frames. In this case, CNN_b classification accuracy was improved from 65.95% to 90.31% with classification noise cleaning.

A-lines provide a natural way to analyze IVOCT data that should aid the performance of a learning system. First, the IVOCT system acquires data in a radial fashion, one A-line at a time; analyzing A-lines avoids the spatially dependent interpolation effects that arise in Cartesian (x; y) images. Second, tissues are naturally ordered with respect to distance from the lumen: for example, a fibrocalcific region consists of a fibrous layer followed by calcification, and a normal region consists of three layers, the intima, media, and adventitia. This also provides motivation for employing an ANN for this task. Third, the order of an A-line captures the reduction of backscattered light due to absorption of light in tissue structures. Fourth, classifying A-lines simplifies the learning task to categorizing an A-line into one of three classes, which should reduce the number of required training samples as compared with a more complex learning problem, for example, semantic segmentation of all voxels in a pullback. Fifth, A-line classification is useful to identify the extent of a lesion across the pullback volume. It will be interesting to join this classification scheme with traditional image segmentation methods.
A cross-validation procedure over held-out pullbacks was adopted to assess classifier performance. This method is much superior to grouping all images together and leaving out images from the dataset for testing; the latter method gives unnaturally good performance, as there can be considerable correlation between images of a single lesion. Additionally, a cross-validation procedure gives a better approximation of generalization error than a single held-out test dataset.

Fig. 8 caption (partially recovered): Classification results following classification noise cleaning. In this (x; y) view, classification (inner ring) is compared to ground truth labels (outer ring). The color code is red (fibrocalcific), green (fibrolipidic), blue (other), and black (guidewire shadow).

Fig. 9 caption: Example frames where the CNN_b classifier mispredicts class ownership for a group of A-lines. Experts identified that there are labeling errors in (a), i.e., A-lines from 8 o'clock to 11 o'clock in (a) are indeed fibrocalcific. Additionally, they identified that A-lines from 9 o'clock to 12 o'clock in (b) are hard to call between the fibrocalcific and fibrolipidic classes and could be better labeled as a mixed class. Finally, an example of a true error (classifying other A-lines as fibrolipidic) is shown in (c). In this (x; y) view, classification (inner ring) is compared to ground truth labels (outer ring). The color code is red (fibrocalcific), green (fibrolipidic), blue (other), and black (guidewire shadow).

Fig. 10 caption: Class saliency maps showing image regions most discriminative for fibrocalcific (a, b, and c) and fibrolipidic (d, e, and f) plaques to the CNN_b classifier. Images shown are the raw IVOCT image in the (x; y) view (a, d), saliency maps overlaid in red on the IVOCT images (b, e), and final algorithm predictions and ground truth labels overlaid as inner ring and outer ring, respectively, on the image (c, f). Note the high saliency at the telltale edges of the calcification and the diffuse edges of the lipidous lesion.
A few motivations exist for comparing an ANN and a CNN on this task. First, because our method classifies one A-line at a time, the number of input pixels was small enough for a fully connected ANN to be practical. Second, as mentioned earlier, tissues have a natural order along the depth of an A-line; we anticipated that an ANN would capture this order by considering individual pixel values as features. Third, because the thickness of the fibrous layer in plaques was variable within the dataset, we hypothesized that a CNN would perform well due to the spatial invariance property of the convolution operator.
There were some interesting observations regarding the learning system structure, processing, and types of errors. Changes in the design of the CNN or ANN did not largely impact the performance (Figs. 5 and 6), indicating that the initial architectures were relatively stable on this dataset. Because the dataset was obtained from the same site and IVOCT system, the standardization step was unnecessary. However, when working with datasets from different sites and systems, it might be useful to standardize the datasets separately. In regard to the type of errors made by the classifier, it is clinically desirable to have fewer false positive calls for the fibrocalcific class. The reason is that the treatment strategy for fibrocalcific plaques is to use an atherectomy device that grinds through plaque, potentially damaging fibrolipidic and other regions. We found that the CNN made fewer such false positive calls as compared to the ANN.
Saliency maps for calcium and lipid plaques show that the network learns features that are consistent with those described in the consensus document 1 for IVOCT image interpretation. Qualitatively, a calcium plaque has a signal-poor region with sharp front and/or back edges, whereas lipid plaques have highly attenuating regions with diffuse borders. The saliency map for fibrocalcific plaque shows that the pixels belonging to the front and back edges of a calcium plaque were most responsible for calling the A-line fibrocalcific. Similarly, the pixels belonging to the blurred edge of the lipid plaque, along with pixels far away from the plaque boundary, contribute the most to calling an A-line fibrolipidic. Although IVOCT can delineate regions of the vessel wall into several other categories, such as macrophage accumulation, intimal vasculature, and thrombus, this analysis was restricted to the above-mentioned class types. However, it is possible to extend this methodology to class types other than those mentioned in this paper.
Labeling is time-consuming, and some tissues are difficult to call, likely leading to noisy A-line labeling, which would degrade performance metrics and potentially degrade the learned model. With this in mind, we are greatly encouraged by these results. Interestingly, we identified cases (e.g., Fig. 9) where the CNN results could lead experts to change their original labels, suggesting the possibility of active learning with a second pass of the dataset to possibly modify the labels.
Disclosures
No conflicts of interest, financial or otherwise, are declared by the authors.
Association between chronic obstructive pulmonary disease and biomass smoke in rural areas
Dear Editor, In low- and medium-income countries (LMICs), chronic obstructive pulmonary disease (COPD) creates a high burden of morbidity and mortality in nonsmokers. 1 Because there is significant exposure to biomass burning in rural areas, the COPD burden could be more significant than in urban areas. 2 Unfortunately, there are no health policies specific to rural areas designed to prevent, diagnose or treat COPD. The report presented here is part of a crossover study, with data from respiratory health campaigns carried out annually by the Tobacco and COPD Research Department of Instituto Nacional de Enfermedades Respiratorias (INER), Mexico.
The campaigns were held in rural areas of Mexico from 2013 to 2019 in seven villages of the Valle Region of Oaxaca (Santa Catarina Ixtepeji, Nuevo Zoquiapam, San Miguel del Río, San Pablo Macuiltianguis, San Bartolome Quialana, San Juan Guelavia, San Pablo Guila). The campaigns targeted women over 40 years old from the Zapotec indigenous group, who are exposed to biomass burning while cooking using firewood. The main objective of the campaigns was to detect COPD in these women, and to make them aware of the dangers associated with indoor air pollution. The campaign includes health education talks and educational materials (comics), in which the symptoms and damage caused by cooking with firewood are graphically represented. The comics were made under the supervision of medical researchers of the Tobacco and COPD Research Department. The campaigns in Oaxaca benefitted from being accompanied by health personnel who spoke their native dialect, which allowed clear messaging in the communities. COPD tests such as spirometry were performed by social service doctors (certified by the National Institute for Occupational Safety and Health Spirometry Training Programme). Pre and post bronchodilator spirometry was performed on each patient and only those with acceptable quality criteria were reported. Pulmonologists interpreted the spirometry data. Sociodemographic data on the following variables were collected: comorbidities, history of exposure to biomass using a respiratory questionnaire in which the cumulative exposure to biomass smoke was expressed in hour-years (the product of number of years of exposure, and average hours per day of exposure). When a person had a spirometry result consistent with COPD, she was given a brief educational talk about her condition, treated with a bronchodilator and referred to a health institution. Ethical approval for our study was not required, because the results were part of a retrospective investigation. Stata v14.0 (Stata Corporation, College Station, TX, USA) software was used for statistical analysis.
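To make the exposure index concrete (an illustrative calculation, not a reported case): a woman who cooked with firewood for 8 h per day over 30 years would accumulate 8 × 30 = 240 hour-years, which is of the same order as the mean exposure reported below.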
From 2013 to 2019, 680 women attended the campaigns, 581 of whom underwent spirometry; 498 (86%) women who met the spirometric quality criteria were analysed. The mean age was 60 ± 11 years (P = 0.011), with mean height of 1.44 ± 0.07 m, mean weight of 61 ± 12 kg and mean body mass index (BMI) of 29 ± 5 kg/m². There was no difference between women with COPD (WCOPD) and women without COPD (NONCOPD). The mean biomass exposure index was 228 ± 157 hour-years. Using forced expiratory volume in 1 sec/forced vital capacity (FEV1/FVC) < 0.70 as cut-off, COPD prevalence was 9.6%; based on the lower limit of normality (LLN), this increased to 11.6%. Of the WCOPD, 50% were classified as GOLD (Global Initiative for Chronic Obstructive Lung Disease) I, and 83% as group B.3 There was no difference in comorbidities such as asthma (4%), tobacco smoking (1%), high blood pressure (24%) or obesity (40%) between WCOPD and NONCOPD participants, except for diabetes mellitus, which showed a higher prevalence in NONCOPD women (P = 0.022). In general, WCOPD had more respiratory symptoms and higher mMRC (modified Medical Research Council) and CAT (COPD Assessment Test) scores than NONCOPD women (Table). WCOPD presented more wheezing than NONCOPD (P < 0.001). Using logistic regression analysis and after adjusting for age and BMI, only a history of dyspnoea (OR 2.4, 95% CI 1.13-5.10; P = 0.022) was found to be a risk factor for COPD.
These respiratory health campaigns in rural and marginalised areas indicate a high prevalence of COPD among women (whether based on the FEV1/FVC fixed ratio or on the LLN). In comparison to PLATINO,4 the prevalence of COPD in these rural areas was almost twice that reported in Mexico City, where the prevalence was 7.8% with the FEV1/FVC fixed ratio and 5.7% with the LLN, and higher than in the PREPOCOL study (8.4%).5 The high prevalence of COPD in the Valle of Oaxaca also contrasts with the low prevalence of COPD associated with biomass burning in a suburban area of Mexico City.4 This suggests that the use of biomass, alongside increased levels of poverty, results in more cases in rural areas. Prevalence estimates based on the LLN are generally lower than those based on the fixed ratio. In this study, the higher COPD prevalence based on the LLN may be explained by the inclusion of relatively younger patients (age < 50 years). Recent data suggest that the high prevalence of COPD in many LMICs reflects the high prevalence of risk factors such as exposure to biomass smoke.1,4-6 Even after adjusting for income variations, the prevalence of COPD in LMICs based on the LLN was marginally higher in women (7.4%) than in men (7.1%).4 This is a gender-associated disease, because women spend longer by the stove in poorly ventilated spaces.
Both in our study and in the PLATINO research,4,7 a high proportion of symptoms was observed; however, WCOPD in this study presented more symptoms (dyspnoea, wheezing, cough) than in the PLATINO study.7 For example, in our study 47% of women reported dyspnoea, whereas only 30% reported this in the PLATINO study. Unfortunately, women did not associate their symptoms, particularly dyspnoea, with exposure to biomass smoke in either of these studies. In the absence of spirometers in rural areas, our report shows how respiratory symptom questionnaires helped physicians in rural clinics to identify people who might have COPD associated with biomass use.8 There have been some isolated actions regarding educational campaigns focused on COPD and tobacco smoking in rural areas.6,9 However, exposure to biomass smoke was not considered within the scope of those campaigns. Our results show that health campaigns can diagnose COPD in a high proportion of women in rural areas and make them aware of the respiratory symptoms and risks associated with chronic exposure to wood smoke. In these campaigns, almost 700 women received health educational materials and educational talks, and underwent a respiratory health and spirometry check-up. COPD resulting from biomass smoke is among the most neglected diseases globally, receiving little attention from healthcare providers. Although some initiatives to address COPD have emerged,10
Deep-Learning Segmentation of Epicardial Adipose Tissue Using Four-Chamber Cardiac Magnetic Resonance Imaging
In magnetic resonance imaging (MRI), epicardial adipose tissue (EAT) overload often remains overlooked due to tedious manual contouring in images. Automated four-chamber EAT area quantification is proposed, leveraging deep-learning segmentation using multi-frame fully convolutional networks (FCN). The investigation involved 100 subjects (healthy, obese, and diabetic patients) who underwent 3T cardiac cine MRI. An optimized U-Net and an FCN (denoted FCNB) were trained on three consecutive cine frames for segmentation of the central frame using a dice loss. Networks were trained using 4-fold cross-validation (n = 80) and evaluated on an independent dataset (n = 20). Segmentation performances were compared to inter- and intra-observer bias with dice (DSC) and relative surface error (RSE). Both systolic and diastolic four-chamber areas were correlated with total EAT volume (r = 0.77 and 0.74, respectively). The networks' performances were equivalent to the inter-observer bias (EAT: DSCInter = 0.76, DSCU-Net = 0.77, DSCFCNB = 0.76). U-Net outperformed (p < 0.0001) FCNB on all metrics. Eventually, the proposed multi-frame U-Net provided automated EAT area quantification with a 14.2% precision for the clinically relevant upper three quarters of the EAT area range, scaling patients' risk of EAT overload with 70% accuracy. Exploiting the multi-frame U-Net in standard cine provided automated EAT quantification over a wide range of EAT quantities. The method is made available to the community through a FSLeyes plugin.
Introduction
Epicardial adipose tissue (EAT) is a visceral fat depot surrounding the heart between the myocardium and the pericardium [1]. Its volume quantification holds potential as a novel biomarker for risks of coronary heart disease [2]. Pericardial fat, merging EAT and paracardial (PAT) fat, has been studied in the past in association with atherosclerotic disease [3] but these results have since been heavily criticized [4]. The inclusion of two fat depots as one single entity may not reflect the separate functions and clinical implications of each adipose tissue. Indeed, recent studies focusing on separating EAT and PAT concluded that EAT alone was involved in the corresponding disease [5,6]. Indeed, EAT is a metabolically active adipose tissue [1] compared to PAT. Its accumulation and subsequent inflammation add to cardiovascular risks, potentially impacting left ventricle (LV) diastolic dysfunction [7,8]. Even more recently, EAT overload has raised concern as a risk factor in generalized inflammation from COVID-19 [9,10]. It is now recognized that
Study Population
A retrospective monocentric database was defined, totaling 153 subjects, out of which 100 exams could be exploited. The 100 enrolled subjects, including healthy controls, type-2 diabetic patients, and non-diabetic obese patients, were selected based on 4Ch orientation and the absence of severe artifacts, as shown in Figure 1. Patients were defined as having type 2 diabetes mellitus if they fulfilled any of the WHO criteria: HbA1c ≥ 6.5%, FBG level ≥ 7.0 mmol/L, oral glucose tolerance test result ≥ 11.1 mmol/L, or current treatment with antidiabetic agents. Obese non-diabetic patients were defined by the absence of any WHO criteria and a BMI ≥ 30 kg/m². All enrolled subjects had normal left ventricular function and no history of heart failure or coronary heart disease.
MRI Acquisition
All subjects underwent cardiac MRI including the acquisition of a full stack of short-axis slices and a single-slice four-chamber cine on a 3-T MRI system (Magnetom Verio, Siemens Healthineers, Erlangen, Germany) with a dedicated cardiac 32-channel coil array (Invivo, Gainesville, FL, USA). The cine series were acquired with a retrospectively ECG-gated balanced steady-state free precession (bSSFP) sequence with in-plane image resolution varying from 1.3 × 1.3 mm² to 1.8 × 1.8 mm² (depending on subjects), slice thickness of 6 mm, TE/TR = 1.2/3.2 ms, GRAPPA 2 (24 auto-calibration signal lines), and a temporal resolution of 28-35 ms, with 25 frames reconstructed. Further details of the cardiac MRI protocols were previously described [20,28-31]. N4 bias field correction [32] was applied to all image series before further processing.
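The implementation used for the N4 correction is not specified in the text; as an illustration, the correction referenced in [32] can be applied with SimpleITK roughly as follows (the file names and the Otsu foreground mask are assumptions, not the authors' pipeline):

```python
import SimpleITK as sitk

# Load one image (file name hypothetical) and cast to float, as required by N4.
image = sitk.ReadImage("cine_frame.nii.gz", sitk.sitkFloat32)

# A foreground mask via Otsu thresholding, as in the standard N4 example.
mask = sitk.OtsuThreshold(image, 0, 1, 200)

# Estimate and remove the low-frequency bias field.
corrector = sitk.N4BiasFieldCorrectionImageFilter()
corrected = corrector.Execute(image, mask)

sitk.WriteImage(corrected, "cine_frame_n4.nii.gz")
```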
EAT Segmentation
For reference, EAT volume was segmented by expert readers provided with the full stack of short-axis series using the Argus viewer (Siemens Medical Solutions, Erlangen, Germany). In an independent session, two expert readers were provided with the full 4Ch series and performed blinded segmentation of three labels using the FSLeyes viewer [33] (version 0.31, Paul McCarthy, University of Oxford, UK): heart ventricles (HV) (including both ventricle muscles and blood pools), epicardial (EAT), and paracardial (PAT) adipose tissues. EAT was defined as hyperintense signal within the pericardium around the ventricles. Peri-atrial fat was not included, as it has been shown that peri-ventricular EAT alone has a stronger correlation with coronary diseases than total EAT [26]. All isles of periventricular fat were included to form the EAT area. PAT was defined as fat adjacent to but outside the pericardium. Segmentations were performed on three cardiac phases determined by readers having the entire series at their disposal: first phase, peak systole, and late diastole. The three segmented masks were propagated to the remaining frames using an automatic label propagation algorithm based on non-linear registrations, as previously described [34], resulting in 25 segmented images per subject. Series in the test dataset were segmented by both readers, and reader 1 repeated blinded segmentations 6 weeks later.
Network Architecture
Two different fully convolutional networks (FCNs) were investigated: U-Net [35] with 48 filters for the first layer, and the FCN developed by Bai et al. [36] with 48 filters for the first layer, later referenced as FCNB. These networks are based on an encoder-decoder structure but differ in their decoder structure. The encoder processes an image of arbitrary size as input and applies convolutional layers to extract image features, while the decoder upsamples and combines low-resolution feature maps back to the original input resolution. The absence of a dense layer allows these networks to process images of various sizes.
The U-Net [35] has been the most popular 2D segmentation network for biomedical images and a fundamental component of many state-of-the-art cardiac image segmentation approaches [37][38][39]. The specificity of the U-Net is to employ skip connections between encoder and decoder to recover spatial information lost in downsampling layers as shown in Figure 2.
Figure 2. Networks' optimized architecture. The two networks evaluated in this study, U-Net and fully-convolutional network (FCNB), included a first 3D convolution layer to allow multiple cardiac frames as input. Subsequent 2D convolution layers encoded images from 48 up to 768 features. The decoder targeted three labels for segmentation in the central input frame: epicardial adipose tissue (EAT), paracardial adipose tissue (PAT), and heart ventricles (HV).
The second network investigated is the FCN developed by Bai et al. [36], later referred to as FCNB. FCNB has demonstrated excellent segmentation performance on the largest available cardiac MR dataset (UK-Biobank [40]). Its specificity is based on the decoder, which only consists of the concatenation of all feature maps, upsampled to the original resolution, as shown in Figure 2. In their original papers, the cross-entropy loss was used to train these networks. However, this loss has shown limits in addressing class imbalance: in our study, regions of interest (ROI) were sparsely represented compared to the background, which cross-entropy loss is inadequate to handle. Thus, the loss function was defined as the mean dice between the probabilistic label map without background and the manually annotated label map.
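As an illustration of the loss just described, a minimal TensorFlow sketch of a mean dice loss excluding the background channel might read as follows (the tensor layout, channel ordering, and smoothing constant are assumptions, not the authors' exact implementation):

```python
import tensorflow as tf

def mean_dice_loss(y_true, y_pred, eps=1e-6):
    # y_true, y_pred: (batch, H, W, n_classes) one-hot / softmax label maps;
    # channel 0 is assumed to be background and is excluded from the loss.
    y_true_fg = y_true[..., 1:]
    y_pred_fg = y_pred[..., 1:]
    axes = (1, 2)  # spatial dimensions
    intersection = tf.reduce_sum(y_true_fg * y_pred_fg, axis=axes)
    denom = tf.reduce_sum(y_true_fg, axis=axes) + tf.reduce_sum(y_pred_fg, axis=axes)
    dice = (2.0 * intersection + eps) / (denom + eps)
    return 1.0 - tf.reduce_mean(dice)  # mean over foreground classes and batch
```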
Training
Specifically, the optimized FCNB and U-Net were trained on three consecutive cine frames for segmentation of the central frame, providing crucial temporal information often necessary for the experts to segment EAT. Input images were normalized to the range [0,1] with a fixed size (256 × 192 × 3); mask zero-padding or cropping was applied when needed.
For each batch (N = 30), on-the-fly data augmentation was performed using rotational transformation and/or image scaling before feeding the images to the network. Both augmentations were drawn from clipped normal distributions spanning from −30°/0.4 up to 30°/1.6 for rotation and scaling, respectively. Adam optimization [41] was used for minimizing the dice loss function with a constant learning rate of 1e-3. It took approximately 35 min to train either the U-Net or FCNB on a Graphics Processing Unit (GPU) (NVidia Tesla K80).
The networks investigated were implemented using Python within the TensorFlow 2 framework. The FCNB model was adapted from the original implementation [42], whereas U-Net was custom-designed. To adapt to the proposed multi-frame approach, both 2D networks were modified to accept 2D+t inputs, considering the cardiac time dimension as a third dimension with limited horizon. Thus, the first convolution layer of each network was replaced with a 3D convolution layer with valid padding. The following layers were kept identical, processing extracted features independently of the input dimensions.
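A minimal Keras sketch of this first-layer modification, assuming the fixed 256 × 192 input size and three input frames stated above (all other layer choices are illustrative, not the paper's full architecture):

```python
import tensorflow as tf
from tensorflow.keras import layers

# Input: three consecutive cine frames stacked on a short "time" axis.
inp = tf.keras.Input(shape=(256, 192, 3, 1))        # (H, W, frames, channels)

# Pad space only (not the frame axis), then apply a 'valid' 3D convolution:
# the 3-frame axis collapses to a singleton, yielding 2D feature maps.
x = layers.ZeroPadding3D(padding=(1, 1, 0))(inp)    # -> (258, 194, 3, 1)
x = layers.Conv3D(48, kernel_size=3, padding='valid', activation='relu')(x)  # -> (256, 192, 1, 48)
x = layers.Reshape((256, 192, 48))(x)               # drop the singleton axis; 2D layers follow

stem = tf.keras.Model(inp, x)
stem.summary()
```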
To perform a robust evaluation, the networks were trained using cross-validation and evaluated on an independent dataset: the database was split into five subsets (500 images/20 subjects each, reflecting the database population distribution: 4 healthy controls, 13 type-2 diabetics, 3 non-diabetic obese patients). One subset (500 images) was used as a test set, whereas the 4 other subsets were used for stratified cross-validation training, resulting in a 4-fold cross-validation. Thus, a single subset was retained as validation (500 images) whereas the 3 others (1,500 images) were used for training, ensuring that the validation and training datasets reflect the database population distribution.
Evaluation Metrics
Segmentation performances were evaluated for accuracy, propinquity, and surface estimation error. The Dice similarity coefficient (DSC) measured segmentation accuracy from the overlap between the manually and automatically segmented surfaces ($S_M$ and $S_A$), defined as

$$DSC = \frac{2\,|S_M \cap S_A|}{|S_M| + |S_A|}$$

The mean surface distance (MSD) quantified the propinquity between segmentations as the mean distance (in mm) between the segmented contours $\partial S_M$ and $\partial S_A$:

$$MSD = \frac{1}{2}\left(\frac{1}{|\partial S_M|}\sum_{p \in \partial S_M} d(p, \partial S_A) + \frac{1}{|\partial S_A|}\sum_{q \in \partial S_A} d(q, \partial S_M)\right)$$

To evaluate the final clinical purpose, which is the quantitative measurement of EAT area, the absolute relative surface error (RSE) was utilized:

$$RSE = \frac{\big|\,|S_A| - |S_M|\,\big|}{|S_M|} \times 100\%$$

To further assess accuracy, the positive predictive value (PPV), an indicator of over-segmentation (PPV << 1), was calculated on the entire database:

$$PPV = \frac{|S_M \cap S_A|}{|S_A|}$$
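A NumPy sketch of the area-based metrics above (binary masks of equal shape assumed; the symmetric MSD would additionally require contour extraction and is omitted here):

```python
import numpy as np

def dsc(manual, auto):
    """Dice similarity coefficient between two binary masks."""
    intersection = np.logical_and(manual, auto).sum()
    return 2.0 * intersection / (manual.sum() + auto.sum())

def rse(manual, auto):
    """Absolute relative surface error, in percent of the manual area."""
    return 100.0 * abs(int(auto.sum()) - int(manual.sum())) / manual.sum()

def ppv(manual, auto):
    """Positive predictive value; values well below 1 flag over-segmentation."""
    intersection = np.logical_and(manual, auto).sum()
    return intersection / auto.sum()
```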
Statistical Analysis
Statistical analysis was conducted using R (version 3.6.3) [43]. Linear regression analysis was used to study the correlation between the manually evaluated EAT volume and the 4Ch area. The normality of the metrics' distributions was assessed using the Shapiro-Wilk test. Wilcoxon signed-rank and Wilcoxon rank-sum tests were used to investigate significant differences in each metric between intra-/inter-observers and FCNs. To account for segmentation difficulty and clinical relevance [44], which scale with the quantity of EAT, the networks' performances were assessed per quartile of manually segmented EAT area (Q1 < 8.22 cm² ≤ Q2 < 12.70 cm² ≤ Q3 < 15.55 cm² ≤ Q4).
Results

Corresponding EAT areas as measured on 4Ch views correlated well with total EAT volume measured from the stack of short-axis cine (Figure 3), with a slightly higher correlation in systole (Pearson r = 0.77) than in diastole (Pearson r = 0.74). Thus, a wide range of EAT 4Ch areas was available, from 1.2 cm² to 37.2 cm²: from 2.5 to 13.7 cm² for healthy subjects, from 1.2 cm² to 23.2 cm² for non-diabetic obese subjects, and from 5.3 cm² to 37.2 cm² for type 2 diabetic patients.
Figure 3. Comparison of reference total epicardial fat volume and proposed EAT area measured on four-chamber cine. EAT area was measured in the end-systolic or end-diastolic frame across the 100-subject database. The three cohorts merged for the database are identified by marker colour.

As shown in Table 2, intra- and inter-observer DSC confirmed excellent reproducibility for HV segmentation (DSCIntra = 0.98 and DSCInter = 0.96, respectively). EAT and PAT differed between the two observers (DSCInter = 0.76 and 0.78 for EAT and PAT, respectively), although segmentations performed twice by the same observer proved to be more reproducible (DSCIntra = 0.83 and 0.85 for EAT and PAT, respectively). Intra-observer DSC and MSD were significantly lower (p < 0.05) for EAT segmentation in the diastolic frame compared to the systolic frame. For inter-observer bias, differences in DSC, MSD, or RSE metrics were not statistically significant between diastolic and systolic frames.

FCNB and U-Net segmentation performances measured by DSC were significantly lower (p < 0.05) than the intra-observer bias for all labels (for EAT: DSCIntra = 0.83, DSCU-Net = 0.77, DSCFCNB = 0.76). Both networks provided DSC, MSD, and RSE performances equivalent to the inter-observer bias for all labels (for instance PAT: DSCInter = 0.78, DSCU-Net = 0.80, DSCFCNB = 0.78).
Across the four quartiles of data defined by equally populated ranges of EAT areas, both networks provided reliable segmentation of the heart ventricles (HV, FCNB: DSCQ1-Q4 = 0.97-0.96, U-Net: DSCQ1-Q4 = 0.97), as shown in Table 3. Interestingly, the networks' performance in segmenting EAT strongly depended on the population quartile. Indeed, U-Net DSC was significantly higher (p < 0.001) for the upper quartiles (U-Net: DSCQ4 = 0.83 > DSCQ3 = 0.80 > DSCQ2 = 0.76 > DSCQ1 = 0.69), as illustrated in Figure 4. DSC and RSE metrics demonstrated a gap in segmentation quality between the lower two quartiles and the upper two quartiles for both PAT and EAT segmentation (for EAT, FCN: RSEQ4 = 15.60, RSEQ3 = 15.87 < RSEQ2 = 21.91 < RSEQ1 = 27.98). Across all quartiles, both networks had more difficulty separating PAT from EAT than identifying total pericardial fat (EAT + PAT) in the image (with U-Net, RSEEAT+PAT << RSEEAT or RSEPAT for all quartiles). Over the database and for all labels, U-Net outperformed (p < 0.0001) FCNB, segmenting more accurately (DSC) and nearer to the ground truth (MSD), thus providing a more reliable measurement (RSE).
FCNB and U-Net performed significantly better (p < 0.05) when segmenting the EAT area on the systolic frame compared to the diastolic frame (DSCU-Net-diastole = 0.76 vs. DSCU-Net-systole = 0.80). These differences were not significant for PAT (see Appendix A Figure A1).
Classification of the database split by quartile of EAT burden was assessed using confusion matrices. In Figure 5, the diagonal of the confusion matrices (in green) gives a measure of correct classification (66% for FCNB and 71% for U-Net), the subdiagonal and superdiagonal (in yellow) measure misclassification by one quartile (32% for FCNB and 27% for U-Net), and the second subdiagonal and superdiagonal (in red) give an estimate of misclassification by two quartiles (2% for FCNB and 2% for U-Net). As shown by the subdiagonals of the confusion matrices and confirmed by PPV, FCNB significantly over-estimated the EAT area compared to U-Net (PPVFCNB = 0.73 < PPVU-Net = 0.75, p < 0.0001).
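A sketch of how such quartile-level confusion matrices can be assembled, reusing the quartile boundaries reported earlier (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def quartile_confusion(manual_areas, predicted_areas, edges=(8.22, 12.70, 15.55)):
    """4x4 confusion matrix of EAT-area quartiles; edges (cm^2) are the
    quartile boundaries reported in the text."""
    q_true = np.digitize(manual_areas, edges)
    q_pred = np.digitize(predicted_areas, edges)
    cm = np.zeros((4, 4), dtype=int)
    for t, p in zip(q_true, q_pred):
        cm[t, p] += 1
    exact = np.trace(cm) / cm.sum()                              # correct classification rate
    off_by_one = (np.trace(cm, 1) + np.trace(cm, -1)) / cm.sum() # one-quartile misclassification
    return cm, exact, off_by_one
```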
Discussion
This study aimed at providing a rapid and fully integrable evaluation of epicardial fat burden. To achieve this evaluation, automated segmentation of the EAT layer was performed on four-chamber cine MRI series using Deep Learning approaches.
Four-Chamber-View Intrapericardial Fat Area Is a Relevant Measure of EAT
Confirming previous literature [24,25], the correlation found in this work between EAT area and volume across a wide range of EAT volumes (from 29 to 376 cm³) supported the use of four-chamber EAT area as a rapid but realistic measure of EAT burden. In past studies, the 2D EAT area has already been linked to left ventricular diastolic dysfunction [22,26], hypertension and severity of insulin resistance [25], and non-alcoholic fatty liver disease [27]. Thus, the four-chamber view holds potential as a surrogate to quantify EAT in routine clinical practice. Moreover, in the four-chamber view, the pericardium beyond the apex of the heart can be visualized more reliably. However, our database gathered retrospective studies in which EAT volume segmentation had been measured in short-axis views by different investigators over the years, which could lead to unaccounted volume imprecision. Ideally, the gold-standard CCT EAT volume quantification would have been preferred, but this examination is not commonly indicated for metabolic patients.
A Specific Database with Possible Extensions
This work leverages a unique database that combines a population spanning a large range of EAT quantities with manual segmentation of EAT on cine series. The strength of our dedicated database lies in its diversity in BMI, sex, age, and health condition across many subjects (n = 100) (Table 1). Despite this diversity, a disparity in age remains between younger healthy subjects and diabetic and/or obese patients. The addition of data from older healthy subjects, as well as elderly subjects (>65 years), would benefit the current database and reinforce network training, as the elderly have been shown to carry significantly more EAT than younger individuals [45]. Our database could also be extended by including image sets from different MRI scanner types. Currently, this is a monocentric study and database; as a result, the trained models might not adapt well to datasets from scanners of different vendors and field strengths. Nevertheless, the database was made up of multiple protocols acquired over a decade, which already featured a variety of acquisition parameters and image quality levels. To further leverage the number of annotated data (2,500 ground-truth masks; 25 segmented images per subject), generative adversarial networks could be explored to extend beyond the proposed data augmentation [46]. Another challenge is the recurrent artifacts (aliasing, dark bands, flux artifacts) commonly observed in 3T bSSFP cine-MRI images, particularly pronounced in obese patients. These might preclude EAT segmentation and disturb network accuracy. Training networks on artifacted images is another important addition to strengthen models so that they are ready for the clinic.
Challenge of EAT Segmentation
Experts and networks provided excellent results on large structures such as heart ventricles (DSC ≥ 0.96) and pericardial fat (DSC ≥ 0.88). However, one major challenge for the segmentation of EAT on cine MRI is to distinguish between burdening EAT and its extrapericardial neighbour PAT. The pericardial fascia that separates these two fat compartments is about 2 mm thick [47,48], which is of the same order of magnitude as the image resolution (1.3-1.8 mm). This explains why both networks were able to segment the combined EAT + PAT pericardial fat with appreciable precision, while the identification of each individual fat was less satisfying. Nevertheless, the FCN networks provided segmentation results on par with the experts' precision. Additionally, since cardiac contraction pulls on the pericardium, its visualization improves at peak systole [22], making this frame more suitable than diastole for the measurement of EAT (p_intra(DSCdia vs. DSCsys) = 0.0282).
One novelty has been to input multiple cardiac frames from the cardiac cycle to networks using a 3D first convolutional layer. It could be interesting in future work to enhance temporal information which is essential to detect the pericardial fascia. A map of cardiac deformations could enhance input images to be supplied to the network. It would be also interesting to investigate other network architectures, such as recurrent neural network, that could memorize information from adjacent slices to improve inter-slices coherence [49], but these extensions fall outside the scope of this work.
Comparing FCNs Performances
Specific complementary metrics (DSC, MSD, and RSE) were chosen to evaluate EAT area segmentation and quantification. Alternatively, the Hausdorff distance is a common choice to evaluate segmentation performance [50], measuring the maximal pixel distance error between segmentations. However, the EAT region is sparsely distributed around the heart, so the Hausdorff distance was not considered in this work, since it can rapidly grow large even when comparing two segmentations with similar areas.
From the chosen metrics, U-Net outperformed FCNB for all labels, thus appearing preferable for quantifying the EAT 4Ch area. Alternative semi- and fully automatic methods have been proposed for EAT quantification on cine MRI. Cristobal-Huerta et al. [51] developed an automatic pipeline composed of Laws texture filters, snakes, and K-cosine curvature analysis to partially quantify EAT volume, albeit on 10 subjects only. In a semi-automatic processing, Fulton et al. [52] applied landmarks on short-axis images from 12 subjects to unroll images into polar coordinates before employing a neural network for detection of epicardial fat contours. We were unable to compare our results with those previous works, as segmentation metrics (e.g., DSC or the Jaccard similarity index) were not provided. Recently, automatic total pericardial fat quantification has been developed in 4Ch cine MRI. Bard, Raisi-Estabragh et al. [23] obtained segmentation performances (DSCEAT+PAT = 0.8) very similar to ours (DSCEAT+PAT = 0.88) on their respective test set. In their study, only the end-diastolic frame was segmented, while we segmented the full 4Ch cine MRI and trained on three consecutive cine frames to leverage the cine temporal information. Finally, the optimized multi-frame U-Net was integrated in a FSLeyes plugin made available to the community [53], allowing comparison with further work and providing clinicians with a rapid EAT area segmentation (see Appendix A Figure A2).
Performances across Quartiles
Splitting the database into quartiles of EAT enabled segmentation performance to be differentiated as a function of EAT area. Indeed, segmentation quality from the FCNs proved to be degraded in group Q1, in which EAT (as well as PAT) was thin and sparse, as illustrated in Figure 4. However, EAT segmentations were on a par with inter-observer manual segmentation for the three upper quartiles and remained relevant for identifying patients at risk (Q2, Q3, Q4 ≥ 8.22 cm²) by measuring their EAT burden within 14% and 18% precision for U-Net and FCNB, respectively.
Conclusions
This study provides a methodology for fully automated segmentation of epicardial fat on multi-frame cardiac cine MRI, demonstrated across 100 subjects exhibiting low to high EAT quantities. EAT is often overlooked in diagnosis but has received increasing attention as a relevant biomarker of cardiac risk. Automatic EAT evaluation could help to identify patients at risk, especially diabetic patients. The comparison with EAT volume supports the potential of four-chamber cine EAT area as a surrogate for clinical evaluation, with higher segmentation robustness in the systolic frame. Between the two FCNs investigated, the optimized U-Net was better suited to provide EAT area estimation, with a 14.2% precision for the clinically relevant upper three quarters of the targeted EAT range. EAT evaluation on cine, leveraging multi-frame information, could be further integrated to explore both retrospective and prospective cardiac studies without the need for a specific acquisition, thanks to the publicly provided automatic EAT area segmentation.
Controlled Remote State Preparation via General Pure Three-Qubit State
Protocols for controlled remote state preparation of a single-qubit and a general two-qubit state are presented in this paper. General pure three-qubit states, which are not LOCC-equivalent to the widely used GHZ state, are chosen as the shared quantum channel. To our knowledge, this is the first time general pure three-qubit states have been used to complete remote state preparation. The probability of successful preparation is presented. Moreover, in some special cases, the success probability reaches unity.
Introduction
Quantum teleportation (QT for short) is the first quantum information processing protocol, presented by Bennett et al. [1], to achieve deterministic transmission of the information contained in a quantum state. Many theoretical schemes have been proposed since [2,3,4,5,6], and it has also been realized experimentally [7,8,9,10,11,12,13,14]. Later, to save the resources needed in the process of information transmission, Lo put forward a scheme for remote state preparation (RSP for short) [15]. Compared with QT, in RSP the sender does not own the particle itself but owns all the classical information of the state he or she wants to prepare for the receiver, who is located separately from the sender. The resource consumption is greatly reduced in RSP, as the sender does not need to prepare the state beforehand. RSP has attracted much attention, and a number of RSP protocols have been presented, such as RSP with or without oblivious conditions, optimal RSP, RSP using noisy channels, low-entanglement RSP, continuous-variable RSP and so on [16,17,18,19,20,21,22,23,24,25,26]. Experimental realizations have also been reported [27,28].
In RSP protocols, all the classical information is distributed to one sender, which may lead to information leakage if the sender is not honest. In order to improve the security of remote state preparation, controllers are introduced, giving the so-called controlled remote state preparation (CRSP for short), which has drawn the attention of many researchers. In contrast to the usual RSP, CRSP incorporates a controller: the information can be transmitted if and only if both the sender and receiver cooperate with the controller or supervisor. CRSP for an arbitrary qubit has been presented in a network via many agents [29]. A two-qubit CRSP with multiple controllers using two non-maximally entangled GHZ states as the shared channel is shown in [30]. CRSP with two receivers via an asymmetric channel [31] and CRSP using POVMs [32,33] have also been presented. The use of the five-qubit Brown state as a quantum channel to realize CRSP of a three-qubit state is elaborated in [34]. Most existing schemes chose the GHZ-type state, W-type state, Bell state, or composites of these states as the shared quantum channel. In this paper, however, we choose the general pure three-qubit state as the quantum channel, which is not LOCC-equivalent to the GHZ state. For some special cases, the probability of successful CRSP can reach unity.
In [35], the authors proved that for any pure three-qubit state there exist local bases that allow one to express the state in a unique form using a set of five orthogonal product states. This is called the generalised Schmidt decomposition for three-qubit states. Using the generalised Schmidt decomposition, Gao et al. [36] proposed a controlled teleportation protocol for an unknown qubit and gave analytic expressions for the maximal success probabilities. They also gave an explicit expression for the pure three-qubit states that yield unit probability of controlled teleportation [36]. Motivated by the ideas of these two papers, we investigate controlled remote state preparation using general pure three-qubit states and their generalised Schmidt decomposition.
The paper is arranged as follows. In Sec. 2, the CRSP for an arbitrary qubit is elucidated in detail; we find that for qubits with real coefficients the success probability is the same as that of controlled teleportation. In Sec. 3, the CRSP for a general two-qubit state is expounded; for a two-qubit state with four real coefficients, the corresponding success probability is the same as that of controlled teleportation of a qubit. In Sec. 4, we conclude the paper.
CRSP for an arbitrary qubit
Suppose that three separated parties Alice, Bob and Charlie share a general pure three-qubit state $|\Phi\rangle_{cab}$, where particle a belongs to Alice, b to Bob and c to Charlie. The distribution of the three particles is sketched in Fig. 1, in which the small circles represent the particles and a solid line between two circles means that the corresponding two particles are related by quantum correlation. According to [35], a general pure three-qubit state has a unique generalised Schmidt decomposition of the form

$$|\Phi\rangle_{cab} = a_0|000\rangle + a_1 e^{i\mu}|100\rangle + a_2|101\rangle + a_3|110\rangle + a_4|111\rangle, \qquad (1)$$

where $a_i \geq 0$ for $i = 0, \cdots, 4$, $0 \leq \mu \leq \pi$, and $\sum_{i=0}^{4} a_i^2 = 1$. The $a_i$ and $\mu$ in Eq. (1) are decided uniquely with respect to a chosen general pure three-qubit state. Now Alice wants to send the information of a general qubit $|\varphi\rangle = \alpha|0\rangle + \beta|1\rangle$ (with $|\alpha|^2 + |\beta|^2 = 1$) to the remote receiver Bob under the control of Charlie. Alice possesses the classical information of this qubit, i.e., the information of $\alpha$ and $\beta$, but does not have the particle itself. Next, we take three steps to complete the CRSP for $|\varphi\rangle$.
Step 1. The controller Charlie first makes a single-qubit measurement in the basis

$$|\varepsilon_0\rangle_c = \cos\tfrac{\theta}{2}|0\rangle + e^{i\eta}\sin\tfrac{\theta}{2}|1\rangle, \qquad |\varepsilon_1\rangle_c = \sin\tfrac{\theta}{2}|0\rangle - e^{i\eta}\cos\tfrac{\theta}{2}|1\rangle, \qquad (2)$$

where $\theta \in [0, \pi]$, $\eta \in [0, 2\pi]$. The choice of $\theta$ and $\eta$ can be made flexibly according to the needs of the controller; if $\theta = \pi$ and $\eta = 0$, $|\varepsilon_0\rangle_c$ and $|\varepsilon_1\rangle_c$ reduce to the $|\pm\rangle$ basis. Charlie then broadcasts his measurement outcome publicly to Alice and Bob using one classical bit. Using Eq. (2), the quantum channel can be rewritten as

$$|\Phi\rangle_{cab} = \sqrt{p_0}\,|\varepsilon_0\rangle_c|\Omega_0\rangle_{ab} + \sqrt{p_1}\,|\varepsilon_1\rangle_c|\Omega_1\rangle_{ab}, \qquad (3)$$

where

$$p_0 = \sin^2\tfrac{\theta}{2} + a_0^2\cos\theta + a_0 a_1\cos(\mu-\eta)\sin\theta, \qquad p_1 = \cos^2\tfrac{\theta}{2} - a_0^2\cos\theta - a_0 a_1\cos(\mu-\eta)\sin\theta.$$

If the result of Charlie's measurement is 0, the whole system collapses to $|\Omega_0\rangle_{ab}$ with probability $p_0$, while it collapses to $|\Omega_1\rangle_{ab}$ with probability $p_1$ for the result 1. To ensure that particle c is entangled with the rest of the system, we assume that $a_0 > 0$ and that $a_2, a_3, a_4$ are not all equal to 0 at the same time; this is equivalent to $p_0 > 0$ and $p_1 > 0$ simultaneously. Note that Step 1 is essentially the same as that of the controlled teleportation in [36]; we present it here to keep the paper self-contained, and more detailed calculations can be found in [36].
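As a numerical sanity check on the reconstructed Eqs. (1)-(3), the closed form of $p_0$ can be compared with a direct projection of Charlie's qubit onto $|\varepsilon_0\rangle_c$; the NumPy sketch below (with arbitrary example parameters) performs this check:

```python
import numpy as np

def channel_state(a, mu):
    # |Phi>_cab = a0|000> + a1 e^{i mu}|100> + a2|101> + a3|110> + a4|111>,
    # with qubit ordering (c, a, b) in the computational-basis index.
    psi = np.zeros(8, dtype=complex)
    psi[0] = a[0]                       # |000>
    psi[4] = a[1] * np.exp(1j * mu)     # |100>
    psi[5] = a[2]                       # |101>
    psi[6] = a[3]                       # |110>
    psi[7] = a[4]                       # |111>
    return psi

def p0_numeric(a, mu, theta, eta):
    psi = channel_state(a, mu).reshape(2, 4)                  # split qubit c from (a, b)
    eps0 = np.array([np.cos(theta / 2), np.exp(1j * eta) * np.sin(theta / 2)])
    amp = eps0.conj() @ psi                                   # unnormalised state of (a, b)
    return float(np.vdot(amp, amp).real)

a = np.array([0.6, 0.4, 0.3, 0.5, np.sqrt(1 - 0.86)])         # squares sum to 1
theta, eta, mu = 1.1, 0.7, 0.4
p0_closed = (np.sin(theta / 2) ** 2 + a[0] ** 2 * np.cos(theta)
             + a[0] * a[1] * np.cos(mu - eta) * np.sin(theta))
assert np.isclose(p0_numeric(a, mu, theta, eta), p0_closed)
```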
Step 2. Without loss of generality, we assume that the result of Charlie's measurement is 0, so the whole system collapses to $|\Omega_0\rangle_{ab}$. Using the Schmidt decomposition of a two-qubit system, there exist bases $\{|0\rangle, |1\rangle\}_a$ and $\{|0\rangle, |1\rangle\}_b$ for particles a and b, respectively, such that $|\Omega_0\rangle_{ab}$ can be expressed as

$$|\Omega_0\rangle_{ab} = \sqrt{\lambda_{00}}|00\rangle + \sqrt{\lambda_{01}}|11\rangle, \qquad (4)$$

where $\lambda_{00} + \lambda_{01} = 1$ [36]. On receiving the result of Charlie's measurement, the sender Alice prepares a projective measurement utilizing the classical information of $|\varphi\rangle$ in the following form:

$$|\mu_0\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\mu_1\rangle = \beta^{*}|0\rangle - \alpha^{*}|1\rangle. \qquad (5)$$

Then $|\Omega_0\rangle_{ab}$ can be re-expressed in the basis $\{|\mu_0\rangle, |\mu_1\rangle\}_a$ as

$$|\Omega_0\rangle_{ab} = |\mu_0\rangle_a\big(\alpha^{*}\sqrt{\lambda_{00}}|0\rangle + \beta^{*}\sqrt{\lambda_{01}}|1\rangle\big)_b + |\mu_1\rangle_a\big(\beta\sqrt{\lambda_{00}}|0\rangle - \alpha\sqrt{\lambda_{01}}|1\rangle\big)_b. \qquad (6)$$

We first discuss the case of real coefficients, i.e., $\alpha, \beta$ real. Then Eq. (6) becomes

$$|\Omega_0\rangle_{ab} = |\mu_0\rangle_a\big(\alpha\sqrt{\lambda_{00}}|0\rangle + \beta\sqrt{\lambda_{01}}|1\rangle\big)_b + |\mu_1\rangle_a\big(\beta\sqrt{\lambda_{00}}|0\rangle - \alpha\sqrt{\lambda_{01}}|1\rangle\big)_b. \qquad (7)$$

Alice measures her qubit in the basis $\{|\mu_0\rangle, |\mu_1\rangle\}_a$ and obtains the outcomes 0 and 1 with probabilities $\lambda_{00}\alpha^2 + \lambda_{01}\beta^2$ and $\lambda_{00}\beta^2 + \lambda_{01}\alpha^2$, respectively, and sends her measurement result to Bob using 1 classical bit. The receiver Bob's system correspondingly collapses to

$$\frac{\alpha\sqrt{\lambda_{00}}|0\rangle + \beta\sqrt{\lambda_{01}}|1\rangle}{\sqrt{\lambda_{00}\alpha^2 + \lambda_{01}\beta^2}} \qquad \text{or} \qquad \frac{\beta\sqrt{\lambda_{00}}|0\rangle - \alpha\sqrt{\lambda_{01}}|1\rangle}{\sqrt{\lambda_{00}\beta^2 + \lambda_{01}\alpha^2}},$$

respectively.
Step 3. We assume that Alice's measurement result is 0. According to Charlie's and Alice's results, Bob now wants to recover the state $|\varphi\rangle$ on his side. Bob introduces an auxiliary particle in the initial state $|0\rangle_{b'}$, then makes a unitary operation $U^0_{bb'}$ on his particle b and the auxiliary particle $b'$, and his state changes to $|\omega_0\rangle_{bb'}$, where (assuming $\lambda_{00} \leq \lambda_{01}$)

$$|\omega_0\rangle_{bb'} = \frac{1}{\sqrt{\lambda_{00}\alpha^2+\lambda_{01}\beta^2}}\Big[\sqrt{\lambda_{00}}\big(\alpha|0\rangle+\beta|1\rangle\big)_b|0\rangle_{b'} + \beta\sqrt{\lambda_{01}-\lambda_{00}}\,|1\rangle_b|1\rangle_{b'}\Big].$$

After the unitary operation, Bob makes a measurement on his auxiliary particle $b'$ in the basis $\{|0\rangle, |1\rangle\}_{b'}$. The probability for Bob to get the measurement result 0 is $\lambda_{00}/(\lambda_{00}\alpha^2 + \lambda_{01}\beta^2)$, in which case he recovers the state $|\varphi\rangle$ successfully. If the result is 1, the scheme fails.
Similarly, if Alice's measurement result is 1, Bob also introduces an auxiliary particle in the initial state $|0\rangle_{b'}$, but applies the unitary operation $U^1_{bb'}$; the system after the unitary operation is $|\omega_1\rangle_{bb'}$, where

$$|\omega_1\rangle_{bb'} = \frac{1}{\sqrt{\lambda_{00}\beta^2+\lambda_{01}\alpha^2}}\Big[\sqrt{\lambda_{00}}\big(\alpha|0\rangle+\beta|1\rangle\big)_b|0\rangle_{b'} + \alpha\sqrt{\lambda_{01}-\lambda_{00}}\,|0\rangle_b|1\rangle_{b'}\Big].$$

The probability for Bob to successfully reconstruct the state $|\varphi\rangle$ is $\lambda_{00}/(\lambda_{00}\beta^2 + \lambda_{01}\alpha^2)$.
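A small numerical sketch of Bob's recovery step for Alice's outcome 0, assuming $\lambda_{00} \leq \lambda_{01}$ as above; the explicit 4 × 4 matrix below is one valid realization of the filtering unitary, not necessarily the authors' form:

```python
import numpy as np

lam00, lam01 = 0.3, 0.7          # Schmidt coefficients of |Omega_0>, lam00 <= lam01
alpha, beta = 0.8, 0.6           # real target qubit, alpha^2 + beta^2 = 1

# Bob's normalised state after Alice's outcome 0, with the ancilla in |0>;
# basis ordering |b b'>: 00, 01, 10, 11.
n2 = lam00 * alpha ** 2 + lam01 * beta ** 2
bob = np.array([np.sqrt(lam00) * alpha, np.sqrt(lam01) * beta]) / np.sqrt(n2)
state = np.kron(bob, [1.0, 0.0])

r = np.sqrt(lam00 / lam01)
U0 = np.array([[1.0, 0.0, 0.0, 0.0],
               [0.0, 1.0, 0.0, 0.0],
               [0.0, 0.0, r, np.sqrt(1 - r ** 2)],
               [0.0, 0.0, np.sqrt(1 - r ** 2), -r]])
out = U0 @ state

p_success = out[0] ** 2 + out[2] ** 2          # ancilla measured in |0>
assert np.isclose(p_success, lam00 / n2)       # matches lam00/(lam00 a^2 + lam01 b^2)

cond = np.array([out[0], out[2]])              # Bob's conditional state
assert np.allclose(cond / np.linalg.norm(cond), [alpha, beta])
```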
Continuing with the same two steps as in the case where Charlie's measurement result is 0, we find that the success probability for Bob to produce the desired state is $2p_1\lambda_{10}$. As a result, for the real case, Alice can prepare the qubit $|\varphi\rangle$ at Bob's location under the control of Charlie with total probability $2(p_0\lambda_{00} + p_1\lambda_{10})$, which is the same as that of controlled teleportation in [36], while the consumption of classical bits is reduced to 2 cbits for the whole process.
Next we discuss the case of complex coefficients. According to the discussion in [36], the maximal probability of controlled teleportation reaches unity if and only if the shared channel takes the special form given there. For controlled remote state preparation of a qubit using such a channel, the success probability can also reach one in the real case, and 1/2 in the complex case.
CRSP for a two-qubit state
In the CRSP for a two-qubit state, there are also three parties: Alice, Bob and Charlie. They share a quantum channel that is the composite of $|\Phi\rangle_{cab}$ and a Bell state $|\phi^{+}\rangle_{a'b'}$. The distribution of the particles in the shared quantum channel is displayed in Fig. 2, where the meaning of the symbols is the same as in Fig. 1.
Particle c belongs to Charlie, particles a, a' to Alice and b, b' to Bob. The sender Alice possesses the classical information of a general two-qubit state $|\varphi\rangle = \alpha_0|00\rangle + \alpha_1|01\rangle + \alpha_2|10\rangle + \alpha_3|11\rangle$, with $\sum_{i=0}^{3}|\alpha_i|^2 = 1$.

Step 1. This step is the same as Step 1 in Section 2. Charlie makes a projective measurement $\{|\varepsilon_0\rangle_c, |\varepsilon_1\rangle_c\}$ on his particle c, and gets the measurement results 0 and 1 with probabilities $p_0$ and $p_1$, respectively. The whole system collapses to $|\Omega_0\rangle_{ab}|\phi^{+}\rangle_{a'b'}$ or $|\Omega_1\rangle_{ab}|\phi^{+}\rangle_{a'b'}$, respectively. He broadcasts his measurement result using 1 cbit.
Step 3. Assume that the measurement result of Alice in Step 2 is 0. Then, according to the result, Bob introduces an auxiliary particle $b_a$ in the initial state $|0\rangle_{b_a}$ and makes a unitary operation on his particles together with the auxiliary particle, analogous to the operation in Step 3 of Section 2.
Conclusions
In this paper, protocols for controlled remote state preparation are presented for both a single-qubit and a two-qubit state. We utilize general pure three-qubit states as the shared quantum channels, which are not LOCC-equivalent to the GHZ state. We discuss protocols for states with both real and complex coefficients, and find that general pure three-qubit states can be used to complete CRSP probabilistically. Moreover, in some special cases the CRSP can be achieved with unit probability, yielding deterministic CRSP protocols. This overcomes the limitation that most existing quantum communication protocols are completed with GHZ, W, or Bell states, or compositions of these states. Furthermore, due to the involvement of a controller and multiple parties, this work may have potential applications in controlled quantum communication, quantum network communication and distributed computation.
Measuring Ground Reaction Force and Quantifying Variability in Jumping and Bobbing Actions
This paper investigates variability in bobbing and jumping actions, including variations within a population of eight test subjects (intersubject variability) and variability on a cycle-by-cycle basis for each individual (intrasubject variability). A motion-capture system and a force plate were employed to characterize the peak ground reaction force, the frequency of the activity, the range of body movement, and the dynamic loading factors for at least the first three harmonics. In addition, contact ratios were also measured for the jumping activity. It is confirmed that most parameters are frequency dependent and vary significantly between individuals. Moreover, the study provides a rare insight into intrasubject variations, revealing that it is more difficult to perform bobbing in a consistent way. The paper demonstrates that the vibration response of a structure is sensitive to cycle-by-cycle variations in the forcing parameters, with the highest sensitivity to variations in the activity frequency. In addition, this paper investigates whether accurate monitoring of the ground reaction force is possible by recording the kinematics of a single point on the human body. It is concluded that monitoring the C7th vertebrae at the base of the neck is appropriate for recording frequency content of up to 4 Hz for bobbing and 5 Hz for jumping. The results from this study are expected to contribute to the development of stochastic models of human actions on assembly structures. The proposed simplified measurements of the forcing function have potential to be used for monitoring groups and crowds of people on structures that host sports and music events and for characterizing human-structure and human-human interaction effects. DOI: 10.1061/(ASCE)ST.1943-541X.0001649. This work is made available under the terms of the Creative Commons Attribution 4.0 International license, http://creativecommons.org/licenses/by/4.0/.
Introduction
Vibration serviceability assessment under human-induced dynamic loading is one of the most challenging aspects of structural engineering design due to inherent randomness in frequency, duration, and type of human actions to which the structure could be exposed. Walking, running, rising up from sitting, sitting down from standing, swaying, jumping, and bobbing are frequently observed human activities that generate three-dimensional dynamic forces (Bachmann and Ammann 1987). This paper is concerned with two activities, rhythmic jumping and bobbing, which are of most interest in the design of stadia and concert venues (Jones et al. 2011b). Only the largest (i.e., vertical) component of the force, whose peak value could be up to seven times larger than the body weight (Bachmann and Ammann 1987), will be considered. The vertical force is often responsible for excessive vibrations that could lead to damage of nonstructural components, unwanted noise, discomfort, or panic among occupants, overstressing the structure, and, in rare cases, compromising the structural integrity.
The key difference between jumping and bobbing is that a jumping cycle consists of a contact and a flight phase, while bobbing can be seen as akin to an attempt to jump while maintaining continuous contact with the ground (Jones et al. 2011b). Distinctions are often made between two different styles of bobbing: bouncing and jouncing. Bouncing is a more controlled action during which heel contact is maintained with the ground at all times (Sim et al. 2005). In this case, the majority of the movement is caused by the subject bending their knees. Jouncing is a more energetic alternative in which the subject rises on to their toes, breaking contact between their heels and the floor (Jones et al. 2011b). The action of jouncing is more complex, as maintaining balance while rising on to the toes requires the engagement of additional muscles and joints. While jumping is more severe in terms of loading amplitude (Ellis and Ji 1994), bobbing is more common at concerts and other events, as it requires less energy (and less space) to be maintained (Racic et al. 2013a; Dougill et al. 2006). In addition, bobbing is an important loading case, as the person is more likely to feel the structural movement and potentially react, consciously or unconsciously, by synchronizing with it, which could lead to a prolonged and excessive vibration response.
Body kinematics while jumping and bobbing, as well as the resulting dynamic force, varies significantly within a human population. Moreover, a single individual produces different kinematic and kinetic outputs when performing nominally the same activity on different occasions, or even from one jumping/bobbing cycle to another (Sim et al. 2008;Racic and Pavic 2010a). Tackling limited understanding of dynamic loads with unnecessarily conservative and uneconomical designs is no longer acceptable in the age of performance-based design and expectation of minimum use of natural resources. Hence, modern design is somewhat contradictory: it favors structures that are lightweight and slender (and inherently vibration sensitive) while expecting vibration serviceable design solutions. Anticipating and preventing vibration serviceability failures requires detailed characterization of human actions and the intrinsic randomness in both the population and an individual's dynamic loading, neither of which is currently readily available to structural designers.
Typically, jumping can be performed at frequencies between 1 and 4 Hz (Rainer et al. 1988; Pernica 1990). The peak value of the generated ground reaction force (GRF) is usually 2.0-4.5 times larger than the weight of the person jumping (Sim et al. 2008). This GRF results from trunk motion characterized by peak-to-peak displacements of around 10-30 cm and accelerations of around 15-35 m/s² (McDonald 2015). Bobbing can be performed at frequencies up to 6 Hz, albeit an upper limit of 4.0-5.0 Hz is more frequently encountered (Yao et al. 2004). Typical peak-to-peak trunk displacements while bobbing are about two times lower than while jumping, as body movement is limited by the continuous contact with the ground. As a consequence, the GRF generated while bobbing is lower compared to jumping.
In design applications, the force is traditionally decomposed into its main harmonics using the Fourier transform. The harmonics are then normalized by the body weight to calculate a dimensionless quantity called the dynamic loading factor (DLF). For jumping, the upper limit of the DLF is typically 1.8 for the first, 1.0 for the second, and 0.3 for the third harmonic (Rainer et al. 1988), while the bobbing DLFs are typically about two times lower. The strength of the forcing harmonics normally reduces with an increase in harmonic frequency. As a result, the structures considered to be at risk from excessive vibrations are those with a natural frequency of up to 6 Hz (IStructE 2008). Accurate modeling of this low-frequency content of the force is considered to be most important for vibration serviceability assessment. Use of traditional models for this purpose has been found to be inadequate in vibration-prone structures, as they cannot model the narrow-band nature (i.e., randomness) of human-induced dynamic loading (Brownjohn et al. 2004). Sim et al. (2008) were among the first researchers to include cycle-by-cycle variations when modeling the force amplitude, contact time, and the frequency of people jumping. They proposed a cosine-squared function to represent each jumping cycle and therefore could not describe asymmetry and local irregularities in the forcing signal regularly encountered in measured data. Racic and Pavic (2010a) overcame these limitations by first modeling asymmetry in the force waveform as a sum of two Gaussian functions. They then extended their approach to a detailed modeling of local irregularities by utilizing a sufficiently large number of Gaussian functions and achieved excellent agreement (in both the time domain and the frequency domain) between simulated force waveforms and those seen in experiments (Racic and Pavic 2010b). Similar successful detailed modeling has recently been achieved for dynamic loading during bobbing (Racic and Chen 2015). These advanced studies, however, rarely report the degree of variability in parameters describing the jumping and bobbing forces and its influence on the structural response, information that would be of direct interest to structural designers. Some modeling studies also rely on the use of publicly unavailable databases of measured force time histories to inform the choice of the modeling parameters. Providing direct insight into the variability of the individual parameters would contribute to bridging the gap between traditional deterministic modeling, featuring parameters with clear physical meaning, and more sophisticated and accurate, but also more complex, stochastic modeling approaches. This study aims to contribute towards characterizing variability in both jumping and bobbing actions and evaluating the importance of parameter variability for design. To achieve these aims, GRFs measured using a force plate will be utilized.
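To illustrate how DLFs are extracted in practice, a simplified NumPy sketch follows (no windowing or cycle averaging, which published procedures typically add; the toy signal is hypothetical):

```python
import numpy as np

def dynamic_loading_factors(force, fs, body_weight, f_activity, n_harmonics=3):
    """DLFs as single-sided Fourier amplitudes at multiples of the activity
    frequency, normalised by body weight (force and weight in newtons)."""
    n = len(force)
    spectrum = np.fft.rfft(force - np.mean(force)) * 2.0 / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    dlfs = []
    for k in range(1, n_harmonics + 1):
        idx = np.argmin(np.abs(freqs - k * f_activity))   # bin nearest the k-th harmonic
        dlfs.append(np.abs(spectrum[idx]) / body_weight)
    return dlfs

# Toy example: a 2 Hz jumping-like record sampled at 1,000 Hz for 20 s.
t = np.arange(0, 20, 1e-3)
force = 700.0 * (1.0 + 1.5 * np.sin(2 * np.pi * 2.0 * t))
print(dynamic_loading_factors(force, 1000.0, 700.0, 2.0))   # first DLF ~= 1.5
```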
Another aim of this paper is to investigate the possibility of measuring the force in such a way as to overcome some limitations associated with other methods. Force plates are usually small (i.e., 400 × 600 or 600 × 600 mm) and normally limited to laboratory environments. The former makes targeting the landing area while jumping challenging, potentially leading to an unnatural jumping action that, in turn, affects the GRF waveform. Racic et al. (2013b) overcame these issues by utilizing a motion capture system. This system consists of a number of cameras that monitor either active or passive markers systematically attached to anatomical landmarks of the test subject's body, with the aim of recording the body kinematics during the observed activity. The GRF is then indirectly found by summing up the inertia forces of individual body segments (Thorton-Trump and Daher 1975;Racic et al. 2013b). This procedure, however, requires the use of a large number of markers, which may be logistically challenging, especially when monitoring several individuals simultaneously. Instead, it would be more convenient if the kinematics of the body center of mass (BCoM) could be directly measured. In this case, the acceleration of BCoM could be multiplied by the test subject's mass to derive the GRF. The challenge with this approach is that the BCoM is not directly accessible for monitoring as it is normally located within the test subject's trunk during jumping and bobbing. A potential solution is to identify a point on the human body whose kinematics closely resembles that of the BCoM. Identifying this point is the aim of the second part of this study.
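A minimal sketch of this single-point approach, assuming a vertical marker trajectory sampled at a constant rate (in practice, the noise amplified by double differentiation would require the low-pass filtering described in the next section; strictly, the static weight term must also be added to the inertia force):

```python
import numpy as np

def grf_from_marker(z, fs, mass, g=9.81):
    """Vertical GRF (N) estimated from the vertical displacement z (m) of a
    single marker taken as a proxy for the body centre of mass: F = m(g + a)."""
    dt = 1.0 / fs
    acc = np.gradient(np.gradient(z, dt), dt)   # double numerical differentiation
    return mass * (g + acc)
```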
The paper outline is as follows. The experimental procedure is described first. Then the variability in the jumping and bobbing actions is characterized. This is followed by the identification of a single point on the human body suitable for force measurement. Finally, the discussion and conclusions are presented.
Experimental Procedure
Experiments were conducted in the Gait Laboratory at the University of Warwick, Coventry, U.K. The laboratory is equipped with a Vicon (Oxford Metrics, Oxford, U.K.) motion capture system consisting of 12 infrared cameras (Nexus), two digital video cameras, and an OR6-7-2000 force plate (AMTI 2007). Eight test subjects (TSs), four males and four females, volunteered to take part in the experiments. Their basic anthropometric properties are presented in Table 1. The TSs were instrumented with 17 reflective markers, positioned at locations with the potential to represent the movement of the BCoM well (Fig. 1). The markers were attached using double-sided tape. The TSs wore tight clothes as well as a muscle wrap around the lower trunk [highlighted in Fig. 1(d)] to minimize soft tissue artefacts, i.e., relative displacement between the markers and underlying bones (Racic et al. 2013b).
Six markers were placed on a TS's back: B1, B2, and B3 on the L5, L3, and L1 vertebrae on the lower back, respectively, B4 on the T11 vertebra, B5 on the T6 vertebra on the middle back, i.e., between the shoulder blades, and B6 on the C7 vertebra at the base of the neck. Four markers were positioned on the hips: RH1 and LH1 on the anterior superior iliac spine on the right and left hip, respectively, while RH2 and LH2 were attached at the position of the greater trochanter on the right and left hip, respectively (Fig. 1). Seven markers were placed on the front of the TS (F1-F7 in Fig. 1) and spaced evenly up the torso. The majority of the markers were placed on the trunk as the BCoM locations for jumping and bobbing postures are known to be within this part of the human body. The TSs were first asked to jump on the force plate at metronome-controlled frequencies of 1, 2, and 3 Hz. The frequency range was chosen to expose TSs to a wide range of jumping styles: extremely slow jumping (1 Hz) and relatively fast jumping (3 Hz) as well as comfortable jumping at 2 Hz (Yao et al. 2006). The frequency of 2 Hz was also chosen as it is often embedded in pop music (Ginty et al. 2001), acting as an aural stimulus for crowd actions at concert venues. After completion of these experiments, TSs performed bobbing at 1, 2, 3, and 4 Hz. The frequencies of 1-3 Hz were chosen to enable a comparison between bobbing and jumping activities at nominally the same frequencies, while the frequency of 4 Hz was added to reflect the ability of TSs to bob at frequencies above 3 Hz. Before each trial, the TSs were given enough time to familiarize themselves with the metronome beat and the test procedure, which was followed by 20 s of data collection. Between successive trials the TSs were given a few minutes to rest. During this break, the attachment of the markers was checked, as were the force time histories, to ensure the TS did not miss the target force plate area. In rare cases when either of the two issues occurred, the previous trial was repeated. A test session with one TS, including preparation and briefing, lasted about 1 h.
Each trial consisted of simultaneous recording of the vertical component of the GRF (using the force plate) and the three-dimensional displacements of the markers (using the motion capture system). The force plate signal was sampled at 1,000 Hz while the marker displacements were recorded at 200 Hz. The high sampling rates (which were the maximum sampling rates available in the measurement systems) were chosen to ensure that the time step was sufficiently small, and therefore the numerical errors negligible, in subsequent numerical time-domain simulations (Chopra 1995). All signals were filtered using a low-pass fifth-order Butterworth filter in MATLAB. The cut-off frequency of the filter was 1 Hz above the frequency of the third forcing harmonic or 7 Hz, whichever was larger. This cut-off frequency was chosen to allow analysis of the first three harmonics that contain the most excitation energy, as well as all harmonics up to 6 Hz that have the potential to strongly excite grandstand structures (IStructE 2008). The force plate signal was decimated to 200 Hz in those analysis cases requiring comparison with the marker signals. The 20-s-long signal consisted of 20-80 cycles, depending on the frequency of the activity. In total 24 trials of jumping and 32 trials of bobbing were recorded.
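A minimal sketch of this preprocessing chain is given below, using SciPy's filter design routines. Zero-phase filtering is an assumption (the text does not state whether filtering was applied forwards only or forwards and backwards).

```python
import numpy as np
from scipy.signal import butter, filtfilt, decimate

def preprocess_force(force, fs, f_activity):
    """Low-pass filter a force plate record as described in the text:
    fifth-order Butterworth, cut-off 1 Hz above the third harmonic or
    7 Hz, whichever is larger, then decimate to 200 Hz. Zero-phase
    filtering here is an assumption, not stated in the paper."""
    fc = max(3.0 * f_activity + 1.0, 7.0)
    b, a = butter(5, fc / (fs / 2.0), btype="low")
    filtered = filtfilt(b, a, np.asarray(force, dtype=float))
    # Decimate from 1,000 Hz to 200 Hz for comparison with marker data.
    return decimate(filtered, int(fs // 200), zero_phase=True)
```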
The experiments were approved by the Biomedical and Scientific Research Ethics Committee at the University of Warwick. The test protocol and health and safety details were explained to TSs initially in the recruitment phase, and then repeated just before the testing took place. Before commencing the tests, TSs completed a physical readiness questionnaire and signed a consent form. Only TSs with no known relevant health problems at the time of testing were allowed to take part in the experiments.
Characterizing Jumping and Bobbing
A typical time history of the low-pass-filtered force induced by jumping is shown in Fig. 2(a). The force waveforms while bobbing are more diverse than those induced while jumping due to two distinct bobbing styles: bouncing and jouncing. Video footage of the TSs' heels was used to determine which style was preferred by each TS at each frequency. Table 2 shows that three TSs (TS2, TS3, and TS8) preferred the jouncing style while TS4 preferred to bounce at all four frequencies. The other four TSs made use of both styles, as a means of adapting to the imposed bobbing frequency. Interestingly, there is no correlation between the chosen style and the bobbing frequency across these four TSs. Overall, jouncing was encountered more frequently, i.e., in 22 out of 32 trials (69%). Fig. 2(b) shows bobbing force profiles at the nominal frequency of 3 Hz: the solid line represents TS1 bouncing while the dashed line depicts TS3 jouncing. The latter activity is usually more energetic, resulting in a larger force compared with bouncing. In addition, the jouncing waveform can be similar to that of jumping, with troughs approaching zero (despite the fact that jouncing does not include a flight phase), as can be seen in the figure. However, this observation cannot be generalized as low-energy jouncing and high-energy bouncing can also occur.
In the remainder of this section, the intersubject and/or intrasubject variations in the frequency, BCoM displacement, peak force, and DLFs for jumping and bobbing are shown and compared. Then the contact time, a parameter typical of jumping activity, is presented. At the end of the section, the vibration response is calculated and its sensitivity to randomness in the force investigated.
Parameters were extracted on a cycle-by-cycle basis to calculate the average value and the coefficient of variation (CoV) for each time history. To describe variability in the two statistical parameters (average and CoV) across the population of eight test subjects, their mean and standard deviation (SD) were also determined.
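For concreteness, the two levels of statistics described above can be computed as in the following sketch; the function names are illustrative only.

```python
import numpy as np

def cycle_statistics(values_per_cycle):
    """Average and coefficient of variation (CoV = SD / mean) of a
    cycle-by-cycle parameter (e.g., frequency, peak force) in one trial."""
    v = np.asarray(values_per_cycle, dtype=float)
    avg = v.mean()
    cov = v.std(ddof=1) / avg
    return avg, cov

def population_statistics(per_subject_values):
    """Mean and SD of a statistic (average or CoV) across the eight TSs."""
    v = np.asarray(per_subject_values, dtype=float)
    return v.mean(), v.std(ddof=1)
```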
Activity Frequency
The frequency of each jump/bob cycle is calculated as the reciprocal value of the period T denoted in Fig. 2. The (actual) average frequency of each TS is plotted against the target frequency in Fig. 3(a), while the CoV is shown against the (actual) average frequency in Fig. 3(b). Similar information is shown in Figs. 3(c and d) for bobbing. The overall mean and the mean ±1 SD from the population of eight TSs are presented as solid lines and dashed lines, respectively. Star and triangular symbols are used to distinguish between male and female TSs, while solid and hollow symbols [in Figs. 3(c and d)] denote jouncing and bouncing styles of bobbing, respectively.
Figs. 3(a and c) demonstrate that, when the average frequency value of each trial is considered, TSs were most successful in matching the slow frequency of 1 Hz for both jumping and bobbing actions. For all other beat frequencies, the difference between the target and the actual values was larger, with both overestimations (up to 21%) and underestimations (up to 14%) of the target frequency possible. While test subjects seem best able to target the slowest beat, Figs. 3(b and d) show that the cycle-by-cycle variability in the frequency ranges mainly between 2 and 6% for both activities, and that both are larger than the maximum CoV of 3% observed when walking at a normal speed (Dang and Živanović 2015). For consistency, the template for data presentation used in Fig. 3 will also be utilized for the remaining parameters, whenever possible.
Body Center of Mass Kinematics
The peak-to-peak vertical displacement of each jumping/bobbing cycle was found using the measured displacement trajectory for the B6 marker, which is a good representation of BCoM kinematics (as shown later in the "Ground Reaction Force" section). The average peak-to-peak displacements decrease with an increase in the jumping frequency [Fig. 4(a)]. The CoV values exhibit a global minimum at the most comfortable activity rate around 2 Hz [Fig. 4(b)]. Similar conclusions can be drawn for bobbing [Figs. 4(c and d)]. The range of movement while jumping can be more than two times larger than during bobbing. Bobbing on a cycle-by-cycle basis is less consistent than jumping (i.e., the CoV for bobbing is consistently larger, with the maximum value being 30% compared with 16% for jumping). It can also be seen that the range of movement is larger when jouncing, compared with bouncing, due to the breaking of contact between the heels and the ground [Fig. 4(c)]. Displacement also varies more while jouncing [Fig. 4(d)]. In addition, the female TSs seem to be more energetic than male TSs while bobbing, resulting in a wider peak-to-peak body displacement [Fig. 4(c)] and, at the same time, achieving a better consistency [Fig. 4(d)]. This pattern is not present for jumping.
Peak Force
The average peak forces, normalized by the TSs' weight W, and their CoV for both jumping and bobbing are shown in Fig. 5. On average, the lowest forces occur at 1 Hz in both cases, as this is a slow, relatively mild action with respect to the dynamic force generated [Figs. 5(a and c)]. The force increases with an increase in activity frequency. It reaches a maximum value at the most comfortable frequencies (2 Hz for jumping, and 3 Hz for bobbing), and then it starts to decrease slowly at faster, less-comfortable, activity rates. It can be seen that the mean value of the peak force for jumping [Fig. 5(a)], which is a more vigorous activity, is about 1.6 times larger than that for bobbing [Fig. 5(c)] at the same frequency (average bobbing peak forces range between 1.3 and 2.5 times the body weight, compared with 2.4-4.0 while jumping). In general, the peak forces were larger from jouncing than from bouncing [Fig. 5(c)].
As in the case of the previous two parameters, the peak force on a cycle-by-cycle basis varies more while bobbing [Fig. 5(d)] than while jumping [Fig. 5(b)]. The only exception is jumping at 2 Hz, producing not only the largest force but also the largest jump-by-jump variation, the latter due to the ability of test subjects to vary jumping style at this comfortable frequency.
Dynamic Load Factors
The DLFs were calculated for the first three harmonics for activities performed at 2, 3, and 4 Hz. For jumping and bobbing at 1 Hz, the first six harmonics were analyzed as they all can cause resonance for structures with a natural frequency below 6 Hz (IStructE 2008). A fifth-order band-pass Butterworth filter was used to extract each harmonic on a bandwidth of the average frequency ±3 SD for the harmonic considered. The cycle-by-cycle amplitudes of the filtered force time histories were then found and their average value reported as a representative DLF value. Fig. 6 shows DLFs for each activity frequency. Large stars and triangles denote bobbing, while the small symbols denote jumping. The DLFs for jumping at 1 Hz follow an unusual pattern [Fig. 6(a)], namely the first harmonic was smaller than the second, i.e., odd harmonics were lower than the even harmonics. This behavior is caused by time separation of the landing and launching actions, not present when jumping at quicker rates. It results in a double-peak force profile within a jumping cycle, causing the second harmonic (at 2 Hz), and more generally all even harmonics, to dominate. Consequently, separate relationships for even and odd harmonics are presented in Fig. 6(a). Bobbing at 1 Hz results in numerical values of DLFs that are generally lower than those values recorded for weaker (odd) harmonics caused by jumping at the same frequency. The DLF values for all three harmonics with respect to jumping at 2 Hz were similar to those due to jumping at 3 Hz and they decreased with increasing harmonic number [Figs. 6(b and c)]. In both cases the DLF values for bobbing were around two times lower. DLFs for bobbing at 4 Hz [Fig. 6(d)] are similar to those recorded while bobbing at 3 Hz.
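A sketch of this band-pass extraction is given below. Two details are my reading of the description rather than stated facts: the band edges are taken as k times (average frequency ±3 SD), and the cycle-by-cycle amplitude is approximated by the per-period peak of the filtered signal.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def harmonic_dlf(force, fs, k, f_avg, f_sd, weight):
    """DLF of the kth harmonic via the band-pass approach in the text:
    fifth-order Butterworth band-pass on the average frequency +/-3 SD
    band, then average the cycle-by-cycle amplitudes. A sketch only."""
    lo = k * (f_avg - 3.0 * f_sd)
    hi = k * (f_avg + 3.0 * f_sd)
    b, a = butter(5, [lo / (fs / 2.0), hi / (fs / 2.0)], btype="band")
    h = filtfilt(b, a, np.asarray(force, dtype=float) - np.mean(force))
    # Approximate each cycle's amplitude by the peak within one period.
    period = int(round(fs / (k * f_avg)))
    peaks = [h[i:i + period].max() for i in range(0, len(h) - period, period)]
    return np.mean(peaks) / weight
```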
Contact Ratio for Jumping
The contact ratio (i.e., contact time divided by the period of the jumping cycle) is an important parameter that indicates the severity of the dynamic action. A shorter contact ratio implies a sharper force impact and therefore a larger dynamic force (Bachmann and Ammann 1987; BRE 2004). This inverse relationship between peak force and contact ratio for each jump is shown in Fig. 7(a).
The figure also shows that jumping at a frequency of 3 Hz (black symbols), which is the largest frequency used in this study, does not necessarily result in the shortest contact ratio, which is sometimes assumed in analytical force models (Bachmann and Ammann 1987). Fig. 7(b) reinforces this finding. In addition, it shows that the average contact ratio ranges from approximately 0.5 (at 2 Hz) to a large value of 0.8 (at the extremely slow frequency of 1 Hz). There is a large variation in the contact ratio, especially for the frequency of 2 Hz. Fig. 7(c) shows that the jump-by-jump variability in the contact ratio, as well as the contact ratio scatter within the population of TSs, increases with an increase in frequency.
All the contact ratios observed in the tests were greater than 0.4, and 95.2% were above 0.5, similar to the findings of Sim et al. (2008) and Yao et al. (2002). A full 97.5% of contact ratios were greater than 0.72 for 1 Hz jumping, 0.46 for 2 Hz, and 0.51 for 3 Hz.
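The contact ratio can be extracted from the force record by thresholding, as in the sketch below; the 10 N contact threshold is an illustrative assumption rather than a value from the paper.

```python
import numpy as np

def contact_ratios(force, threshold=10.0):
    """Contact ratio per jumping cycle: samples in contact divided by
    samples in the cycle. Contact is taken as force above a small
    threshold (in N); the threshold value is an assumption."""
    force = np.asarray(force, dtype=float)
    in_contact = force > threshold
    # Landing instants: transitions from flight (False) to contact (True).
    landings = np.flatnonzero(~in_contact[:-1] & in_contact[1:]) + 1
    ratios = []
    for start, nxt in zip(landings[:-1], landings[1:]):
        ratios.append(np.count_nonzero(in_contact[start:nxt]) / (nxt - start))
    return np.asarray(ratios)
```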
Vibration Response
To get an insight into the actual vibration response potentially generated by an individual jumping or bobbing, the response of the structure, modeled as a single-degree-of-freedom (SDOF) system representing a relevant mode of vibration, to the measured force is presented in this section. In addition, the sensitivity of the vibration response to the randomness in the force is evaluated. The frequency of the SDOF system f_n is varied in steps of 0.1 Hz from 0.5 to 10 Hz. The lowest damping ratio that is realistically encountered in grandstand structures is around 1%, and it occurs in both steel-frame stands (e.g., Salyards and Hanagan 2007) and reinforced concrete structures [e.g., Manchester City stadium described in detail by Jones et al. (2011a)]. The lowest modal mass reported in the literature is about 10,000 kg (Parkhouse and Ward 2010). These low, but feasible, values of damping ratio and modal mass were therefore adopted to represent a structure particularly susceptible to vibrations. In addition, the chosen value of the modal mass allows for quick calculation of the vibration response for structures having an arbitrary modal mass by simply multiplying the calculated response by a scaling factor (i.e., 10,000 divided by the actual modal mass).
Actual Acceleration of the Structure
The measured force was cut to an integer number of cycles and then extended by repeating itself to reach a duration of about 40 s. The force was then applied to the SDOF model of the structure to calculate the vibration response. The 40-s signal duration provides sufficient time to achieve a representative response time history. The Newmark numerical procedure (Chopra 1995) was used to solve the equation of motion. An envelope of the peak response to the measured force while jumping is shown as a solid line in Fig. 8. The responses are dominated by the first forcing harmonics in the 2-3 Hz range and are about two times larger than the responses due to the second harmonic (4-6 Hz frequency range). For bobbing (dashed line), the responses to the first harmonic (2-4 Hz) are occasionally more than three times larger than those from the second harmonic of the force (4-8 Hz). For the first two harmonics, the response to jumping is consistently larger than that from the bobbing action at the same frequency, in line with the DLF relationships observed in Fig. 6. An exception is at 1 Hz, where similar response values for both jumping and bobbing occur. The responses above 7 Hz (mainly due to the third harmonics) are similar for bobbing and jumping actions.
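A compact implementation of the Newmark average-acceleration scheme for this SDOF calculation is sketched below, using the damping ratio and modal mass quoted in the text. It returns the structural acceleration history whose absolute peak forms one point of the envelopes in Fig. 8; the code is a standard textbook formulation, not the authors' own script.

```python
import numpy as np

def newmark_sdof_accel(force, fs, f_n, zeta=0.01, m=10_000.0,
                       gamma=0.5, beta=0.25):
    """Acceleration response of an SDOF system to a sampled force using
    the average-acceleration Newmark method (Chopra 1995). Defaults
    follow the text: 1% damping ratio, 10,000 kg modal mass."""
    force = np.asarray(force, dtype=float)
    dt = 1.0 / fs
    wn = 2.0 * np.pi * f_n
    k, c = m * wn**2, 2.0 * zeta * m * wn
    u, v = 0.0, 0.0
    a = (force[0] - c * v - k * u) / m
    keff = k + gamma * c / (beta * dt) + m / (beta * dt**2)
    acc = np.empty_like(force)
    acc[0] = a
    for i in range(1, len(force)):
        p_eff = (force[i]
                 + m * (u / (beta * dt**2) + v / (beta * dt)
                        + (0.5 / beta - 1.0) * a)
                 + c * (gamma * u / (beta * dt) + (gamma / beta - 1.0) * v
                        + dt * (0.5 * gamma / beta - 1.0) * a))
        u_new = p_eff / keff
        v_new = (gamma * (u_new - u) / (beta * dt)
                 + (1.0 - gamma / beta) * v
                 + dt * (1.0 - 0.5 * gamma / beta) * a)
        a_new = ((u_new - u) / (beta * dt**2)
                 - v / (beta * dt) - (0.5 / beta - 1.0) * a)
        u, v, a = u_new, v_new, a_new
        acc[i] = a
    return acc  # np.abs(acc).max() gives the peak response
```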
In addition to the extremely low modal mass of 10,000 kg (hereafter referred to as Case 1), the results will be discussed for a modal mass of about 35,000 kg (Case 2), which is more frequently encountered in practice. Some examples of the latter are two vibration modes of the Manchester City stadium (Jones et al. 2011a) and two grandstands described by Parkhouse and Ward (2010). Fig. 8 shows that the largest acceleration response to jumping is just above 6.0 m/s² for Case 1 and is about 1.7 m/s² (i.e., 3.5 times lower) for Case 2, while for bobbing it is 4.2 and 1.2 m/s², respectively. According to ISO (2007), humans are most sensitive to vibrations in the frequency range 4-8 Hz. In this frequency region the limiting (equivalent peak) vibration level for comfort on stadia is 1.4 m/s², while for safety (i.e., preventing panic) it is 2.8 m/s². The limits are less strict for frequencies below 4 Hz and above 8 Hz. For example, at 2 Hz, they reach 2.2 and 4.5 m/s², respectively. Fig. 8, therefore, demonstrates that a single person could cause vibrations that are mainly within the stated comfort level on a stand having a modal mass of 35,000 kg, while the same activity on a stand of 10,000 kg could approach or even exceed the limit set for panic events. These calculations serve to illustrate the excitation potential of a single person on an empty stand (a loading scenario that is more likely to occur before or after, rather than during, a sports or music event), and therefore the acceleration limit for panic events is used here to illustrate the severity of the vibrations, rather than to imply actual panic happening. The antinode of the vibration mode was assumed to be both the excitation and the response point in the simulations.
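The modal-mass scaling and the comparison against the quoted ISO (2007) levels amount to simple arithmetic, reproduced in the sketch below for the 4-8 Hz limits; the printed checks mirror the Case 2 values stated above.

```python
def scale_response(a_case1, modal_mass):
    """Peak response scales inversely with modal mass (Case 1 = 10,000 kg)."""
    return a_case1 * 10_000.0 / modal_mass

a_jump = scale_response(6.0, 35_000.0)   # about 1.7 m/s^2 (Case 2, jumping)
a_bob = scale_response(4.2, 35_000.0)    # about 1.2 m/s^2 (Case 2, bobbing)
COMFORT, PANIC = 1.4, 2.8                # m/s^2, ISO (2007), 4-8 Hz range
print(a_jump < PANIC, a_bob < COMFORT)   # True True
```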
Influence of Cycle-by-Cycle Randomness on Vibration Response
Having characterized variability in the dynamic force and having an insight into the possible structural response level, it is interesting to determine how sensitive the vibration response is to the variability in the forcing parameters. The sensitivity is investigated for jumping activity in relation to the three parameters: peak force, frequency, and contact ratio.
The actual acceleration response to the measured forces has already been calculated in the previous section. To determine the significance of cycle-by-cycle variability in the force, three artificial force profiles have been created for each measured force. In each case, variability in one of the three parameters considered was removed from the force record:
1. Constant peak force: the average peak force was calculated for each time history, as explained in the "Peak Force" section, and the forcing profile for each individual cycle was then scaled to enforce this value of the peak force;
2. Constant period (i.e., frequency): the average period (corresponding to the average frequency reported in the "Activity Frequency" section) was enforced on each cycle by applying an appropriate scaling factor along the time axis, therefore stretching or compressing the force profile along this axis; and
3. Constant contact ratio: the average contact ratio from the "Contact Ratio for Jumping" section was enforced on each cycle by applying an appropriate scaling factor, while keeping the period unaltered.
In the latter two cases, the scaling along the time axis altered the original time step in the scaled sections of the force time history. To ensure equidistant time steps and preserve specific features of the newly formed time histories, the last two cases were resampled at 5,000 Hz using linear interpolation. The measured and the three artificially created force signals were then applied to all SDOFs analyzed in the previous section. The peak acceleration responses to the artificial forces were divided by the peak response to the measured forces to calculate the ratios shown in Fig. 9. Fig. 9 reveals that the response is least sensitive to the randomness in the peak force [Fig. 9(a)], with almost all responses being within ±20% error boundaries. The sensitivity to the variability in the contact ratio is slightly higher [Fig. 9(c)], but still the majority of the responses exhibit an error up to ±20%. Finally, Fig. 9(b) shows that the response is most sensitive to variations in the frequency and that neglecting this type of variability could result in both under- and overestimation of the actual response, with a factor of 2 being readily possible.
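The construction of the constant-period signal, including the 5,000 Hz resampling by linear interpolation, can be sketched as follows; detection of the cycle start indices is assumed to have been done beforehand (e.g., by the thresholding shown earlier), and the implementation details beyond the stated resampling are assumptions.

```python
import numpy as np

def enforce_constant_period(force, fs, cycle_starts, fs_out=5000.0):
    """Build the 'constant period' artificial force: each measured cycle
    is stretched/compressed along the time axis to the average period,
    then resampled on an equidistant grid by linear interpolation."""
    force = np.asarray(force, dtype=float)
    periods = np.diff(cycle_starts) / fs
    t_avg = periods.mean()
    t_out = np.arange(0.0, t_avg, 1.0 / fs_out)   # common output grid
    cycles = []
    for s, e in zip(cycle_starts[:-1], cycle_starts[1:]):
        # Map this cycle's samples onto a time axis of length t_avg.
        t_cycle = np.linspace(0.0, t_avg, e - s, endpoint=False)
        cycles.append(np.interp(t_out, t_cycle, force[s:e]))
    return np.concatenate(cycles)
```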
Results of this study suggest that the cycle-by-cycle variability in the activity frequency should be included in the force modeling in order to improve the accuracy of the response prediction. Neglecting this type of variability could result in an overestimation of the vibration response when the average pacing frequency coincides with the structural frequency, as well as underestimation of the out-of-resonance response, similar to findings concerning the walking activity (Brownjohn et al. 2004; Van Nimmen et al. 2014).
The variations in the peak force and contact ratio are less influential. However, they still should be considered in response simulations to avoid an accumulation of errors in the response prediction, especially when the structure under analysis is on the verge of failing vibration serviceability requirements.
Ground Reaction Force
The vertical component of the time-domain acceleration signal for the ith marker, a_i(t), was calculated by differentiating the measured displacement twice. The corresponding vertical component of the GRF, F_i(t), hereafter referred to as the indirect force, was then calculated by multiplying the test subject's mass m by the net vertical acceleration: F_i(t) = m[a_i(t) + g], where g = acceleration of gravity. The accuracy in measuring this force was then evaluated by comparing it with the benchmark force recorded using the force plate, hereafter referred to as the direct force.
The coefficient of determination R² (Draper and Smith 1985) was used to quantify how well the indirect force correlated with the direct force in the time domain. Then the influence of the measurement error on the structural response was evaluated.
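A minimal sketch of the indirect force calculation and the R² comparison is given below. Central-difference differentiation via numpy.gradient is an assumption, as the paper does not state the differentiation scheme used.

```python
import numpy as np

def indirect_force(disp, fs, mass, g=9.81):
    """Indirect GRF from the vertical displacement of a single marker:
    differentiate twice to get acceleration, then F(t) = m[a(t) + g]."""
    accel = np.gradient(np.gradient(disp, 1.0 / fs), 1.0 / fs)
    return mass * (accel + g)

def r_squared(direct, indirect):
    """Coefficient of determination between direct and indirect forces."""
    direct = np.asarray(direct, dtype=float)
    ss_res = np.sum((direct - indirect) ** 2)
    ss_tot = np.sum((direct - direct.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```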
Measuring Force Using the Kinematics of a Single Point
The average values of the R² coefficients, across the population of eight test subjects, are calculated for all 17 markers and for each activity frequency. The mean and SD values for each marker across all tests are also found. The results are presented in Table 3 for both jumping and bobbing activities.
It can be seen that the markers in the hip and back groups perform noticeably better than the front markers. This is expected given that the front markers are most prone to soft tissue artefacts. Table 3 reveals that marker B6 performed best across all frequencies and for both activities. Higher force accuracy was achieved for jumping. The same conclusion was reached when comparing the forces in the frequency domain (McDonald 2015).
The quality of the force measured using the B6 marker is shown in Fig. 10 for two trials. Figs. 10(a and b) demonstrate an excellent agreement, in both the time and frequency domains, between the indirectly and directly measured forces for a 2-Hz jumping trial. Figs. 10(c and d) show an example of poorer agreement for a 4-Hz bobbing trial, where overestimation of both the second and third harmonics can be seen.
Error in Indirectly Measured Force
The indirectly measured force contains errors from several sources: the measurement error of the motion capture system, the error due to the assumption that a single point on the body surface has the same kinematics as the body center of mass, and the error due to soft tissue artefacts. While it is known that the first error is small (up to 1 mm in measured displacement trajectories), the other two errors are not possible to evaluate individually. What is of interest in this study is evaluation of the success of the indirect force measurement by gaining an insight into the accumulated (total) error. To achieve this, the acceleration response to the force measured using the B6 marker is calculated for the SDOF systems using a damping ratio of 1% and natural frequency values in the 0.5-7.0 Hz range. The ratio r_A = A_indirect / A_direct between the peak vibration response to the indirectly measured force, A_indirect, and the peak response to the directly measured force, A_direct, is calculated, where r_A > 1 indicates that the indirect force results in an overestimation of the structural response, whereas r_A < 1 relates to an underestimation of the actual response. The response ratio as a function of the natural frequency of the structure for jumping at 1, 2, and 3 Hz is shown in Fig. 11. The response ratio is between 0.8 and 1.2 for natural frequencies f_n ≤ 3 Hz for jumping at 1 Hz [Fig. 11(a)]. Therefore, the structural response to the first three harmonics of the force is within a ±20% error band. In fact, the error is outside this band only occasionally even in the higher frequency range of 3-5 Hz. At and around the natural frequency of 6 Hz, the error in the indirect force is much larger, with the response ratio ranging from 0.4 to 1.8. For jumping at 2 Hz [Fig. 11(b)], the response is largely within the ±20% error band for f_n ≤ 4 Hz, suggesting that the first two harmonics of the indirect force are reasonably good representations of the actual dynamic excitation. At and around 6 Hz, the error increases significantly, with the ratio ranging from 0.6 to 2.5. Finally, the response ratio for jumping at 3 Hz is largely within the ±20% error band for f_n ≤ 6 Hz [Fig. 11(c)].
It can be concluded that the accuracy of the force measurement using the B6 marker is excellent for force components that excite structures with f_n ≤ 3 Hz. This includes the first three harmonics for jumping at 1 Hz and the first harmonic for jumping at 2 and 3 Hz. For structures with a natural frequency between 3 and 5 Hz, the accuracy decreases; however, the response prediction is still largely within the ±20% error. Finally, for structures with a natural frequency between 5 and 7 Hz, the response error becomes substantial, for all but jumping at the fastest rate of 3 Hz.
Therefore, using the force derived from the kinematics of the B6 marker results in a very good estimate of the first harmonic of the dynamic force for all frequencies, and an adequate estimate of the second harmonic of the force. The error becomes large for the third harmonic (apart from jumping/bobbing at 1 Hz). These results suggest that using the B6 marker to capture the frequency content of the force up to 5 Hz is acceptable. Fig. 12 shows the response ratio information for bobbing. A greater spread of r_A values in Fig. 12 suggests that the error in force measurement while bobbing is greater than while jumping at nominally the same frequencies. However, the error in the response for structures with a natural frequency up to 4 Hz (which normally exhibit the largest response to bobbing, Fig. 8) is still contained within the 20% error band in 92% of trials (McDonald 2015). For structural frequencies above 4 Hz, the response ratio can be as low as 0.6, underestimating the actual structural response, and as large as 3, significantly overestimating the actual response. Therefore, using the kinematics of the B6 marker to derive the bobbing force results in good estimates of the first four harmonics for bobbing at 1 Hz, the first two harmonics for bobbing at 2 Hz, and the first harmonic when bobbing at higher frequencies.
Summary of Experiments
The GRFs for eight test subjects jumping and bobbing in the Gait Laboratory at the University of Warwick were recorded using a force plate. In parallel, kinematics of the body was monitored using a motion capture system. Jumping was performed with the aid of a metronome at three frequencies: 1, 2, and 3 Hz, while bobbing included an additional frequency of 4 Hz. The peak force and the frequency of the GRF profiles, as well as the peak-to-peak displacement of the human body, were extracted on a cycle-by-cycle basis for the two activities. In addition, the contact ratios were extracted for the jumping trials. The DLFs were calculated for at least three harmonics of the measured forces.
Displacement of Trunk
The peak-to-peak displacement of the trunk while jumping was 15-25 cm at the slowest frequency of 1 Hz and around 8 cm at the fastest frequency of 3 Hz. For bobbing the displacements were 2-15 cm at 1 Hz and 2-3 cm at 4 Hz. Larger jumping displacements are expected due to the more energetic nature of this activity. The intersubject variability in the displacement while bobbing is much larger, especially for lower frequencies [Fig. 4(c)]. The cycle-by-cycle variations, on the other hand, range between 4 and 16% for jumping and between 5 and 30% for bobbing, suggesting that consistent bobbing is more difficult than jumping.
Ground Reaction Force
The movement of the human body generates the dynamic GRF, and it is expected that the more energetic activity of jumping results in the largest forces. This is confirmed in the present study: the peak force ranges between 2.5 and 4.0 times the body weight for jumping, compared with 1.3-2.5 times the body weight while bobbing. In both cases, the intersubject variations are significant, especially in the midfrequency range [Figs. 5(a and c)]. The cycle-by-cycle variations are again larger for bobbing (3-11%) compared with jumping (3-7%, with one trial reaching an atypical value of 10%).
DLFs are functions of the frequency of the activity (Fig. 6); however, only the maximum values achieved in the experiments will be quoted here, and only for frequencies that are more likely in practice (i.e., frequencies above 1 Hz). For jumping, these values are up to 1.7 for the first harmonic, up to 1.0 for the second, and up to 0.35 for the third. The equivalent values for bobbing are 1.1, 0.4, and 0.1, respectively. While larger DLFs for jumping are in line with previous findings on the body kinematics and peak forces, it is interesting that the differences in DLFs are more pronounced at higher harmonics. Overall, bobbing DLFs are around two times lower than those induced by jumping, but greater than the DLFs for walking (Dang and Živanović 2015).
Variability in Activity Frequency
The study shows that cycle-by-cycle variation in the activity frequency ranges mainly between 2 and 6% for both jumping and bobbing. Only one trial exceeded this range for jumping (8%), and three trials (6.5, 9, and 10.5%) exceeded it for bobbing. However, on average the variability is larger for bobbing, which is in line with the findings for the other parameters analyzed. The variability in jumping and bobbing is therefore more than two times greater than when walking at a normal speed (Dang and Živanović 2015). Despite this fact, modeling the randomness in the walking force is more frequently studied and better developed than for jumping and bobbing activities.
Variability in Contact Ratio
The contact ratios for jumping activity were found to vary between values of 0.45 and 0.82, suggesting that the old BS 6399 specification of 0.25-0.67 underestimated the actual range, resulting in a shorter duration of the force and a potentially overconservative force model. These recommendations have since been removed from the standard (BSI 2010), emphasizing the need for further research and development of force models. Finally, the jump-by-jump variations in the contact ratio range from low values of 1-3% at 1 Hz to larger variations of 4-9% at 3 Hz.
Sensitivity of Vibration Response to Force Variability
Stated variations in the force parameters can be used in the structural design phase to investigate the sensitivity of the vibration response to human actions. Such a sensitivity study has been performed for jumping in this paper. It has been shown that the response is most sensitive to cycle-by-cycle variations in the activity frequency. Neglecting this type of variation could lead to both underestimations (by up to 70%) and overestimations (by up to 140%) of the actual response [Fig. 9(b)]. Given that the bobbing forces were found to generally exhibit larger cycle-by-cycle variation compared with jumping, the importance of modeling the force variability is even greater in this case. The variations in the peak force and contact ratio influenced the vibration response to a lesser degree, but they still should be considered in modeling to avoid an accumulation of errors in the response prediction.

To monitor body kinematics, 17 points on the human body were tracked using a motion capture system; 13 markers were systematically attached to the trunk of the test subject, while the remaining four markers were attached to the hips. The ultimate aim was to identify the points that best represent the BCoM kinematics and the resulting GRF. It was found that monitoring the C7 vertebra at the base of the neck was suitable for accurate measurement of the frequency content of the force up to 5 Hz for jumping and up to 4 Hz for bobbing activity.
Utilizing C7 in Future Research
The proposed simplified measurement method has potential to be used for monitoring of groups and crowds during sports and music events. However, the use of the motion capture system currently limits the method to mostly laboratory environments. Further research may facilitate the monitoring of the C7 vertebra in situ by using markerless video recording, a methodology that has already been successfully explored for monitoring body kinematics by tracking head movement (Hoath et al. 2007). Alternatively, wireless technologies, such as those consisting of inertial (Van Nimmen et al. 2014) and magnetometer sensors, could be employed. The latter application would be limited to studies with prerecruited participants due to the need to instrument them with wireless devices. Regardless of this limitation, both (video and wireless) approaches could prove to be extremely valuable for data collection on as-built structures.
Monitoring of the C7 vertebra could be used not only in future measurements of the low-frequency content of the force and the intrasubject variations in the key parameters, but also for observing human-human interaction (e.g., potential for synchronization). These measurements are of particular interest for jumping and bobbing on slender, perceptibly oscillating (i.e., nonrigid) structures. It is known that humans jumping and bobbing in groups interact with each other as well as with oscillating structures and that there is a need for event-based evaluation of the structural vibration response that could accurately account for these interactions (Yao et al. 2004, 2006; Parkhouse and Ward 2010; Jones et al. 2011a; Catbas et al. 2010; Salyards and Hua 2015). However, detailed insights into the influence of structural vibration on the actual kinematics of humans bobbing and jumping and into human-human interactions within groups and crowds are not available. This paper identifies a body location suitable for monitoring and characterizes the variability of jumping and bobbing activities on rigid surfaces. These outputs are expected to inform measurements on in situ structures, to be used as a benchmark for evaluating the potential influence of structural vibration on human jumping and bobbing actions, and to facilitate development of stochastic loading models in future research.
Data Availability
Electronic format of the data collected in this research can be downloaded free of charge from the University of Warwick webpage http://wrap.warwick.ac.uk/80470/.
|
v3-fos-license
|
2023-04-08T15:09:54.132Z
|
2023-04-06T00:00:00.000
|
258023060
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/smsc.202300006",
"pdf_hash": "06343c817d08e5387dd9e3e482662b13cc960e0a",
"pdf_src": "Wiley",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45951",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"sha1": "f7331fcf80b884f081d05a4c7f252bb6ccb81b2d",
"year": 2023
}
|
pes2o/s2orc
|
A Honeycomb‐Structured CoF2‐Modified Separator Enabling High‐Performance Lithium−Sulfur Batteries
Sulfur cathode materials in lithium–sulfur chemistry suffer from poor electronic conductivity and shuttle of lithium polysulfides during charging and discharging. Serious shuttle effects and the sluggish redox reaction kinetics of polysulfides severely limit the development of lithium–sulfur batteries with high sulfur loading, impeding the practical process of lithium–sulfur batteries. Herein, a honeycomb‐structured CoF2@C is introduced as a functional layer adhered to the separator, achieving rapid lithium‐ion transport, high catalytic activity, and suppressed shuttle effect simultaneously. As a result, the cell with CoF2‐modified separator presents satisfactory cycle stability with a capacity decay of 0.076% per cycle within 300 cycles at 1 C rate with the sulfur loading of 2.0 mg cm−2. A low‐capacity decay of 0.088% per cycle for 200 cycles at 0.2 C is also achieved with sulfur loading of 3.0 mg cm−2. In addition, a high capacity retention of 697.5 mAh g−1 is achieved with sulfur loading of 4.0 mg cm−2 and the electrolyte volume/sulfur mass (E/S) ratio of 8 μL mg−1.
Introduction
Traditional lithium-ion batteries (LIBs) are reaching their bottlenecks due to theoretical capacity density limitations, which cannot meet the growing demands of electric vehicles and mobile power devices. [1][2][3][4] Lithium-sulfur batteries have been prioritized as the alternative for the development of next-generation high-energy energy-storage systems, attributed to the low cost, abundant supply, and environmental friendliness of sulfur, as well as its extremely high theoretical specific capacity of 1675 mAh g−1 and theoretical energy density of 2800 Wh kg−1. [5][6][7][8][9][10] However, a series of issues remain to be solved, such as the poor conductivity of sulfur materials, the shuttle effect of dissolved lithium polysulfides, and the structural destruction of the cathode material caused by volume changes during cycling, which result in dramatic capacity decay, low Coulombic efficiency, and poor rate performance of lithium-sulfur batteries. [11][12][13][14][15] During the past decades, numerous efforts have been made to design and modify functional sulfur cathode materials. Typically, carbon-based materials, metal oxides, metal sulfides, and metal nitrides are introduced as sulfur host materials to enable high electrochemical performance of Li-S batteries. [16][17][18][19][20][21][22] The sulfur-carbon composite can effectively inhibit the shuttle effect via the physical adsorption of lithium polysulfide and greatly reduce the electrochemical impedance of the battery, ascribed to the good conductivity and high porosity of the carbon material itself. [23][24][25] Additionally, polar compounds, such as metal oxides, metal fluorides, and metal sulfides, have been developed to enhance the chemical anchoring of lithium polysulfides and promote their redox kinetics during cycling, hence improving the usage of active ingredients. [26][27][28][29][30][31][32] However, the strategy of using host materials usually leads to limited sulfur loading, excessive electrolyte usage, and high manufacturing cost. Recently, the functional separator and the carbon-interlayer separator have been considered highly effective methods to suppress the diffusion of soluble lithium polysulfides. [33,34] First, the functional layer is used as a physical barrier to hinder the shuttle effect of polysulfides. Furthermore, the designed functional layer can adsorb the soluble polysulfides during the discharge and charge process so that their capacity can be reused, and some functional layers can also catalyze the conversion of polysulfides and improve their redox kinetics. For example, Su et al. reported the insertion of an electrolyte-permeable microporous carbon paper (MCP) between the separator and the cathode. [35] This cell design can effectively decrease the resistance of cathodes, resulting in an enhancement of active material utilization. Liu et al. reported a nano-SiO2-blended polyetherimide separator modified with an acetylene black/poly(vinylpyrrolidone) coating layer. [36] The produced coating layer demonstrated excellent adsorption capacity for polysulfides and accelerated redox reactions among polysulfides. Xiao et al. proposed that coating the surface of a C-S cathode with a graphene/TiO2 film traps and suppresses the dissolution of polysulfides, alleviating the undesirable shuttle effect. [37] Ma et al. reported that separators modified with polypyrrole nanotubes, polypyrrole nanowires, and reduced graphene oxide, respectively, were used for Li-S batteries. [38] The results showed that all the conductive materials used for separator surface decoration inhibited the migration of lithium polysulfides in the electrolyte and decreased the polarization of sulfur cathodes. For this reason, designing a functional layer between the separator and the cathode is promising for achieving polysulfide adsorption and catalysis.
Herein, we develop a functional separator modified with honeycomb-structured CoF2@C for lithium-sulfur batteries. The advantages of the functional separator are as follows.
1) The 3D channels of the conductive honeycomb-structured carbon substrate facilitate more uniform and faster lithium-ion transport during charging and discharging. 2) CoF2 embedded on the surface can chemically trap soluble lithium polysulfides via Lewis acid-base interaction, thus confining lithium polysulfides to the cathode side. Moreover, the CoF2 nanoparticles simultaneously exhibit an electrocatalytic effect and enhance the conversion kinetics of lithium polysulfides. 3) Only a small amount of CoF2 is needed to modify the separator, which greatly reduces the impact on the gravimetric energy density of the battery. In addition, using graphite and sulfur composites as working electrodes reduces costs and meets the demands of high-surface-loading standards for industrial production. As a result, the honeycomb-structured CoF2@C functional layer allows for a high initial capacity of 899.5 mAh g−1 at 0.2 C with a sulfur loading of 3.0 mg cm−2 and excellent cycle performance with a capacity fading of 0.076% per cycle for 300 cycles at 1 C. In addition, a high capacity retention of 697.5 mAh g−1 is also achieved with a sulfur loading of 4.0 mg cm−2 and an electrolyte volume/sulfur mass (E/S) ratio of 8 μL mg−1.
Results and Discussion
Figure 1 reveals the synthesis route of the honeycomb-structured CoF2@C composite. Polyvinylpyrrolidone (PVP) and cobalt nitrate with a mass ratio of 1:1 were mixed to form a gel after the solution was completely dried at 90 °C. Subsequently, carbonization and fluorination were carried out. As a result, CoF2 nanoparticles homogeneously embedded in the conductive honeycomb-structured carbon matrix (CoF2@C) were obtained. Finally, the honeycomb-structured CoF2@C was dispersed onto the separator by a coating process.
The morphology of the honeycomb-structured CoF2@C was observed by scanning electron microscopy (SEM). Figure 2a-g shows the morphology and element distribution of CoF2@C; the CoF2 nanoparticles are uniformly distributed across the honeycomb-structured carbon surface with a side length of about 2 μm. Figure 2h demonstrates the X-ray diffraction (XRD) pattern of the final CoF2@C product. The CoF2 phase is verified by the four typical peaks at 26.7°, 34.0°, 39.1°, and 52.0°, which agree with the PDF card of CoF2 (no. 33-0417). In the Brunauer-Emmett-Teller (BET) specific surface area test (as shown in Figure 2i), the honeycomb-structured CoF2@C powder provides a high specific surface area of 144 m² g−1. The multiple honeycomb channels are favorable for lithium polysulfide adsorption in the Li-S battery. The powder was weighed after carbonization and after fluorination, and the weight difference was used to calculate the fluorine content, demonstrating 82.5 wt% of CoF2 in the composite material.
In order to explore more details of the CoF2@C composite, high-resolution transmission electron microscopy (HRTEM) studies were carried out on the sample to study the morphology, size, shape, and distribution of the nanoparticles. The bright-field (BF)- and high-angle annular dark-field (HAADF)-scanning transmission electron microscopy (STEM) images exhibit a projection of the honeycomb structure. The BF-STEM, HAADF-STEM, and HRTEM images reveal that the honeycomb wall consists of carbon nanosheets embedding isolated, sphere-like CoF2 nanoparticles with sizes of 10-20 nm (Figure 3a,b,d). The HAADF-STEM image combined with energy-dispersive spectrometry (EDS) elemental maps (Figure 3e-h) shows the Co, C, and F distribution within the CoF2@C composite, further proving that CoF2 nanoparticles are uniformly embedded in the carbon matrix.
Polysulfide adsorption experiments were conducted to investigate the interaction between honeycomb-structured CoF2@C and lithium polysulfides. Honeycomb-structured CoF2@C and G (graphite) were physically mixed with Li2S6 in DOL/DME solution for 24 h, after which the liquid supernatant was collected for further examination. The optical images (as shown in Figure 4b) demonstrate that the color of the Li2S6 solution obviously changed from brown to colorless, which indicates that lithium polysulfides are sufficiently absorbed by honeycomb-structured CoF2@C after 24 h. By UV spectral analysis, the intensity of the characteristic absorption peak of Li2S6 is the weakest after lithium polysulfide interacts with honeycomb-structured CoF2@C. This means that honeycomb-structured CoF2@C has a strong adsorption effect on lithium polysulfide. The binding geometric models and binding energies between Li2Sx (1 ≤ x ≤ 6) and CoF2@C or G were studied by density functional theory (DFT) calculations. As shown in Figure 4c-j, CoF2 provides a stronger adsorption capacity for LiPSs, with binding energies of −2.41, −2.18, −1.92, and −2.33 eV, than graphite, with binding energies of −0.48, −0.29, −0.34, and −0.17 eV, respectively. This result is consistent with the conclusion of the visual adsorption experiment in Figure 4b and the corresponding UV/vis adsorption spectra in Figure 4a. As shown in Figure 5, a visual experiment of polysulfide shuttling was designed to verify whether the CoF2@C-modified separator has a blocking effect on lithium polysulfide. The photograph in Figure 5a shows the change in the U-tube solution after 24 h. The analysis of the X-ray photoelectron spectroscopy (XPS) spectra shows the interaction of Li2S6 and CoF2. The C 1s spectrum is shown in Figure 5b, presenting two distinct characteristic peaks located at 288.5 and 290.7 eV, corresponding to C-F and C-F2 in CoF2@C. After the adsorption of lithium polysulfide by CoF2, the characteristic peaks at 288.5 and 290.7 eV are significantly reduced. The high-resolution F 1s spectrum of the modified separator exhibits two distinct characteristic peaks located at 684.9 and 687.9 eV. After the adsorption of lithium polysulfide by CoF2, the characteristic peak at 684.9 eV is distinctly enhanced. This phenomenon indicates the formation of more Li-F bonds, which also explains the decrease in the percentage of the peaks represented by the C-F bond in the C 1s spectrum. Figure 5d shows that the Co 2p spectrum of CoF2 exhibits three centered doublets, including the characteristic Co 2p3/2 and Co 2p1/2 peaks of CoF2. Adsorption peaks representative of the Co-S bond appear at 779.43 and 794.58 eV for CoF2-LiPS after the incorporation of lithium polysulfides. At the same time, the percentage of the peak area of the characteristic Co-F peaks decreases obviously, which means that Co-S bonds are formed in the reaction and indicates the strong interaction between CoF2 and Li2S6. In the S 2p core level of CoF2-LiPS, there are two S 2p3/2 core levels with a ratio of 1:2 at 162.3 and 163.8 eV, corresponding to the terminal sulfur (S_T) and bridging sulfur (S_B), respectively. It can be found that, after adsorption, Li2Sx (1 ≤ x ≤ 4) compounds are detected on the surface of CoF2 (binding energies of 164 and 162-160 eV). [39] Meanwhile, two additional peaks corresponding to the Co-S interaction are observed at 162.7 and 163.9 eV for CoF2-LiPS.
All the above results indicate that lithium polysulfide is adsorbed on the honeycomb-structured CoF2@C side and the shuttle effect is suppressed. Sulfur-based composite electrodes with various sulfur loadings (2.0-4.0 mg cm−2) were prepared to evaluate electrochemical performance. The first-cycle discharge capacity of the cell with the modified separator reaches 899.5 mAh g−1. In comparison, the first-cycle discharge capacity of the cell with the PP separator at the same loading was only 393.2 mAh g−1. The capacity of the cell with the PP separator gradually increases during the first few cycles, indicating that the active material is not fully utilized. This demonstrates that the separator modified with honeycomb-structured CoF2@C facilitates the diffusive transport of lithium ions. A complete charge and discharge cycle (as shown in Figure 6b) shows that the higher capacity utilization of the cell with the modified separator is primarily reflected in the second plateau of the lithium-sulfur charge and discharge profile. This is due to the adsorption of lithium polysulfides on honeycomb-structured CoF2@C, which inhibits the shuttle effect in lithium-sulfur cells, allowing the active substance to be fully utilized. Meanwhile, the activation energy required for Li2S oxidation during the charging process is lower than that in the cell with the PP separator. Furthermore, Figure 6c,d shows that the polarization of the cell with the modified separator is significantly lower than that of the cell with the PP separator. The difference in capacity is stark. In conjunction with the analysis in Figure 6e, the honeycomb-structured CoF2@C material contributes almost no capacity in the same discharge-charge voltage range of lithium-sulfur chemistry, demonstrating that it only serves as an auxiliary in lithium-sulfur batteries and does not participate in the capacity contribution. As a result, after 200 discharge-charge cycles, the cell with the modified separator still provides a capacity of 740.4 mAh g−1, while the cell with the PP separator demonstrates a specific capacity of 441 mAh g−1. The capacity retention of the cell with the modified separator reaches 82.3%, while that of the cell with the PP separator is only 76.2%. Galvanostatic charge-discharge tests at 1 C were further investigated. At a sulfur loading of 2.0 mg cm−2 and an E/S ratio of 15 μL mg−1, the first-cycle discharge capacity is 687.8 mAh g−1 at 1 C (as shown in Figure 7a). Meanwhile, the capacity of the cell with the PP separator is only 274.3 mAh g−1. Such a contrast is striking. After 300 cycles, the Li-S cell with the CoF2@C-modified separator still delivers a discharge capacity of 542.4 mAh g−1 with a capacity retention of 77.2%. As shown in Figure 7b, with a higher sulfur loading of 4 mg cm−2 and an E/S ratio of 10 μL mg−1, the cell continues to function and maintains a capacity of about 818.1 mAh g−1 at 0.2 C and 690 mAh g−1 at 0.5 C. Concerning the rate performance of the cells, the CoF2@C-modified separator promotes discharge capacities of 1096, 819.5, 672.2, 547.2, 386.3, and 737.1 mAh g−1 at various rates from 0.1 to 2 C, which are superior to those of the cell with the PP separator. The second discharge plateau capacity of the cell is significantly higher than that of the control group, according to the charging and discharging curves.
Additionally, the cell with the modified separator can continue to operate steadily at a discharge rate of 2 C, whereas the capacity of the cell with the PP separator is virtually nonexistent. When the discharge rate is switched back to 0.2 C, the capacity recovers to 718.5 mAh g−1, suggesting the good stability and reversibility of the cell with the CoF2@C-modified separator in the discharge-charge process. All the above results show that the separator modified with honeycomb-structured CoF2@C improves all aspects of Li-S cell performance.
In the cyclic voltammetry (CV) tests, Figure 8a shows that the cell with the modified separator has a higher response current during the charging and discharging process. It can also be seen that the three peaks of the charging and discharging process of the cell with the modified separator are at 1.972, 2.304, and 2.464 V, respectively, a lower redox potential than that of the cell with the PP separator, implying that the honeycomb-structured CoF2@C can reduce the activation energy of the reaction and facilitate the transformation of lithium polysulfides. We then performed an electrochemical analysis of the catalytic effect of honeycomb-structured CoF2@C on lithium polysulfide. [40] A certain amount of honeycomb-structured CoF2@C or graphite was coated on carbon cloth, and the produced electrodes were assembled into a symmetrical cell with Li2S6 solution. Cell charge and discharge tests were conducted in the voltage range of −1 to 1 V. CoF2@C demonstrates a larger response current, as shown in Figure 8b. The clear distinction suggests that the honeycomb-structured CoF2@C has a far stronger catalytic impact on lithium polysulfide than graphite. This is in line with the CV results. According to electrochemical impedance spectroscopy (EIS) tests, the electrical resistance of the cell with the modified separator is significantly lower than that of the cell with the plain PP separator. The small impedance values indicate rapid lithium-ion transport in the cell with the CoF2@C-modified separator. Meanwhile, CV tests under different sweep rates were carried out. As presented in Figure 8d,e, the lithium-ion diffusion coefficients (D_Li+) of the lithium-sulfur battery cathodes were studied. [41] The relationship between the peak current density and the scanning rate reflects the value of D_Li+. The calculation is based on the Randles-Sevcik equation given below: I_p = 2.69 × 10^5 n^{3/2} A D^{1/2} C v^{1/2}, where I_p corresponds to the peak current (A), n is the number of electrons transferred in the reaction, A is the electrode area (cm²), D is the diffusion coefficient of lithium ions, C represents the lithium-ion concentration (mol L−1), and v is the scan rate (V s−1). First, each current value of the cell with the modified separator (peak A) is significantly larger than that of the cell with the PP separator (peak B) at the same sulfur loading. The linear fitting curves of the normalized peak current against the square root of the scanning rate and the calculated D values are shown in Figure 8f. Obviously, the slope for peak A is steeper than that for peak B, indicating a higher lithium-ion diffusion coefficient. In summary, the honeycomb-structured CoF2@C-modified separator significantly improves the kinetics of the sulfur redox reaction. The oxidation and reduction of Li2S were investigated through potentiostatic charging experiments (as shown in Figure 9). The decomposition capacity, measured by the quantity of electric charge, is much higher for the CoF2@C-modified separator than for the regular separator, suggesting the effective oxidation of Li2S on the carbon paper surface with the modified separator.
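To illustrate how D_Li+ follows from the slope of the peak current versus square-root-of-scan-rate fit, a short sketch is given below. The electron number, electrode area, and concentration values are illustrative assumptions, not values from the paper, and the 2.69 × 10^5 constant corresponds to the 25 °C form of the Randles-Sevcik equation with C expressed in mol cm−3.

```python
import numpy as np

def d_li_from_cv(peak_currents, scan_rates, n=2, area_cm2=1.13,
                 c_mol_cm3=1e-3):
    """Estimate the Li-ion diffusion coefficient (cm^2 s^-1) from the
    slope of I_p = 2.69e5 * n^1.5 * A * C * D^0.5 * v^0.5.
    I_p in A, v in V s^-1; all default parameters are assumptions."""
    slope, _ = np.polyfit(np.sqrt(scan_rates), peak_currents, 1)
    d_sqrt = slope / (2.69e5 * n**1.5 * area_cm2 * c_mol_cm3)
    return d_sqrt**2
```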
Meanwhile, SEM images show that precipitated Li2S remains on the carbon paper surface with the regular separator after the oxidation reaction (Figure 9e), while it almost disappears on the carbon paper surface with the modified separator (Figure 9h), suggesting that the modified separator facilitates the decomposition of Li2S during the charging process. These results clearly demonstrate that CoF2@C promotes both the precipitation and the decomposition of Li2S.
The morphologies of the cycled electrodes and the modified separator were also studied. We analyzed SEM images of the cycled modified separator (Figure 10). As shown in Figure 10a-d, the CoF2@C on the modified separator retains its morphology well after 300 cycles. More lithium polysulfides are deposited, especially on the surface of the carbon walls, owing to the adsorption of polysulfides on CoF2. Meanwhile, we analyzed the XRD pattern of the cycled modified separator. The XRD result in Figure 10k shows that CoF2 remains stable after 200 cycles. The CoF2 phase is verified by the four typical peaks at 26.7°, 34.0°, 39.1°, and 52.0°, which agree with the PDF card of CoF2 (No. 33-0417); the first three peaks in the pattern are from the cell separator. [42] These results show that CoF2 retains good morphological and phase stability after long-term cycling. Figure 11 shows the surfaces of the cathodes in the cells after 200 cycles. As shown in Figure 11a-c, the sulfur cathode in the modified cell has a clean surface, indicating that polysulfide shuttling is alleviated during cycling, which helps reduce detrimental precipitation of lithium polysulfides. This is attributed to the insertion of the honeycomb-structured CoF2@C-modified separator. In contrast, visible precipitation of lithium polysulfide species is found in the conventional cells (Figure 11d-f). SEM images of the front and side views of the cycled lithium foils are shown in Figure 12, revealing the extent of lithium metal corrosion. The front of the lithium foil cycled with the modified separator (Figure 12a,b) is more compact and smoother, whereas lithium foils cycled without the CoF2@C-modified separator (Figure 12e,f) show uneven lithium deposition with many patches and pores on the surface. This indicates that the modified separator effectively reduces the polysulfide shuttle in the battery reaction, limiting the reduction of lithium polysulfides on the Li surface and yielding a smooth, dense surface on the cycled Li metal. This conclusion is consistent with the SEM images of the cycled cathodes above. From the side view of the lithium foil after cycling, the lithium dendrites formed with the modified separator (Figure 12c,d) are much smaller than those in Figure 12g,h. Severe lithium dendrite growth can short-circuit the battery by puncturing the separator during charging and discharging. This apparent difference indicates that the modified separator is very effective in suppressing lithium dendritic growth. The separator modified with the honeycomb-structured CoF2@C thus protects the anode while catalytically accelerating the kinetics of the Li-S conversion reactions.
Conclusion
We have developed a honeycomb-structured CoF2@C with a 3D conductive network to modify the separator in Li-S batteries. The honeycomb-structured CoF2@C-modified separator not only reduces the electrochemical impedance of the cell but also provides excellent adsorption and electrocatalytic activity toward soluble intermediate polysulfide species during battery storage and cycling. In the meantime, the inner pore spaces of the honeycomb-structured CoF2@C-modified separator enable rapid and uniform lithium-ion transport. Owing to these combined effects, Li-S cells with the modified separator demonstrate higher rate capability and cycling stability. Furthermore, the honeycomb-structured CoF2@C-modified separator sustains high electrochemical performance under high sulfur loading and low E/S ratios. In contrast to the complex synthesis of nanocomposite host materials, we use a widely available and inexpensive metal fluoride produced by a simple method. Prospective evaluation in Li-S pouch cells with high S loading and low E/S ratios will be important for assessing practical application.
Experimental Section
Preparation of Graphite/Sulfur Composites: Graphite and sulfur were added to an agate mortar at a mass ratio of 1:3, and absolute ethyl alcohol was added for homogeneous mixing. After thorough grinding and mixing, the graphite/sulfur composites were washed three times with deionized water under ultrasonication. Finally, the powder was dried in a drying oven at 45 °C for 24 h.
Preparation of Honeycomb CoF2@C: Cobalt nitrate (0.5 g) and poly(vinylpyrrolidone) (PVP, 0.5 g) were added to a beaker with 100 mL of deionized water and mixed homogeneously. The solution was completely dried to form a PVP-cobalt nitrate composite cluster. The cluster was carbonized for 6 h at 750 °C under an argon atmosphere and then fluorinated for 3 h at 280 °C under an argon/NF3 atmosphere. Finally, the honeycomb-structured CoF2@C powder, PVDF, and Super P (mass ratio 8:1:1) were dispersed and applied to the separator by a coating process.
Adsorption Experiment of Lithium Polysulfide: Li2S6 solution (50 × 10−3 M) was prepared by mixing Li2S and sulfur powder (molar ratio, 1:5) into DOL/DME solvent (v/v, 1:1) with magnetic stirring at 65 °C for 72 h until completely dissolved. A 50 mg amount of graphite powder or CoF2@C powder was added to Li2S6 solution (5 × 10−3 M) diluted with DOL/DME solvent. Adsorption was performed for 24 h, followed by UV-vis testing. All operations were completed in an Ar-filled glovebox.
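The 1:5 Li2S:S molar ratio follows from the stoichiometry Li2S + 5S → Li2S6. As a quick worked check, the sketch below computes the masses needed for a chosen batch; the 10 mL volume is an assumed example, not a quantity given in the text.

```python
# Masses needed to prepare a Li2S6 solution via Li2S + 5 S -> Li2S6.
M_LI2S = 45.95   # g/mol, molar mass of Li2S (approximate)
M_S = 32.06      # g/mol, molar mass of atomic sulfur

def li2s6_masses(volume_l, conc_mol_per_l):
    n = volume_l * conc_mol_per_l       # moles of Li2S6 targeted
    return n * M_LI2S, 5 * n * M_S      # grams of Li2S and of S

m_li2s, m_s = li2s6_masses(0.010, 0.050)   # assumed 10 mL of 50 mM
print(f"Li2S: {m_li2s * 1000:.1f} mg, S: {m_s * 1000:.1f} mg")
```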
Electrochemical Measurement: The working electrodes consisted of 80 wt% as-prepared composite, 10 wt% Super P, and 10 wt% polyvinylidene difluoride (PVDF). The mixed powder was dispersed in N-methyl-pyrrolidone (NMP). The slurry was stirred in an agate mortar, coated onto Al foil, and then dried at 40 °C overnight. The modified Celgard 2400 was coated with a layer of slurry consisting of 80 wt% honeycomb CoF2@C, 10 wt% Super P, and 10 wt% PVDF. Finally, the working electrodes were punched into disks with a diameter of 12 mm, and the Celgard 2400@CoF2@C was punched into disks with a diameter of 19 mm. The sulfur loading of the low-loading electrodes was 1.5-2.5 mg cm−2, and high-areal-loading electrodes of 3.0-5.0 mg cm−2 were also produced. The electrolyte/sulfur ratio was about 10-20 μL mg−1 for the tests. The electrolyte consisted of 1.0 M LiTFSI with 2 wt% LiNO3 in 1,2-dimethoxyethane (DME) and 1,3-dioxolane (DOL) (v/v, 1:1). Galvanostatic discharge/charge tests were carried out with LAND-CT3001A instruments in the potential range of 1.7-2.8 V. CV, data management, and electrochemical impedance analysis were performed using a Gamry workstation (Reference 600+, Gamry Instruments, USA).
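Because rate and cycling results are reported per gram of sulfur, the conversion from the raw tester reading is worth making explicit. The sketch below assumes the 12 mm cathode disk and a 2.0 mg cm−2 loading from the ranges stated above; the 2.3 mAh capacity is an invented example, not a measured value.

```python
import math

# Convert a measured cell capacity (mAh) into gravimetric and areal values.
def capacities(capacity_mah, loading_mg_cm2, area_cm2):
    sulfur_g = loading_mg_cm2 * area_cm2 / 1000.0   # grams of sulfur on disk
    specific = capacity_mah / sulfur_g              # mAh per g of sulfur
    areal = capacity_mah / area_cm2                 # mAh per cm^2
    return specific, areal

area = math.pi * (1.2 / 2) ** 2                     # 12 mm diameter disk, cm^2
spec, areal = capacities(2.3, 2.0, area)            # assumed example numbers
print(f"{spec:.0f} mAh g^-1, {areal:.2f} mAh cm^-2")
```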
DFT: First-principles calculations based on the DFT method were performed using the Vienna Ab initio Simulation Package (VASP). The cutoff energy for the plane-wave expansion of the PAW basis set was 450 eV, and 3 × 3 × 1 Γ-centered k-point grids were used for Brillouin zone integrations. The exchange-correlation functional was employed together with a Gaussian smearing width of 0.05 eV.
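Binding energies like those reported for graphite and CoF2 with Li2Sx are conventionally obtained from three single-point totals: E_bind = E(substrate) + E(Li2Sx) − E(substrate+Li2Sx). A hedged sketch of this workflow using ASE's VASP interface is below, with the stated settings (450 eV cutoff, 3 × 3 × 1 Γ-centered k-points, 0.05 eV Gaussian smearing); the structure file names are hypothetical, a configured VASP installation is required, and the sign convention (positive = favorable adsorption) is an assumption, not taken from the paper.

```python
from ase.io import read
from ase.calculators.vasp import Vasp

# Settings mirroring those stated above; ismear=0 selects Gaussian smearing.
def make_calc(label):
    return Vasp(directory=label, encut=450, kpts=(3, 3, 1), gamma=True,
                ismear=0, sigma=0.05)

def total_energy(structure_file, label):
    atoms = read(structure_file)           # relaxed model (POSCAR/CIF)
    atoms.calc = make_calc(label)
    return atoms.get_potential_energy()    # runs VASP, returns energy in eV

# Hypothetical inputs for one adsorption case: Li2S4 on a CoF2 slab
e_slab = total_energy("CoF2_slab.vasp", "slab")
e_mol = total_energy("Li2S4.vasp", "molecule")
e_complex = total_energy("CoF2_Li2S4.vasp", "complex")

e_bind = e_slab + e_mol - e_complex        # positive = favorable adsorption
print(f"E_bind(Li2S4 on CoF2) = {e_bind:.3f} eV")
```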
Structure Characterization: Micrographs were acquired with an SEM (TESCAN MIRA) at an accelerating voltage of 5 kV. BF- and HAADF-TEM images were acquired with a transmission electron microscope (Talos-S) at an accelerating voltage of 20-200 kV. The crystal structure of the prepared materials was studied using an X-ray diffractometer (Empyrean 2) within a 2θ range of 10°-80°. The surface composition of the CoF2@C was investigated using XPS analysis recorded on a Thermo Scientific ESCALAB250Xi with Al Kα radiation (hν = 1486.6 eV). The remaining polysulfides in the supernatant after adsorption were measured using UV-vis spectroscopy (UV2600).
|
v3-fos-license
|
2017-06-18T02:56:08.602Z
|
2014-08-13T00:00:00.000
|
3180113
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0104961&type=printable",
"pdf_hash": "7261ee82e6d23a5113298d52fe6a4113fac77a2c",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45955",
"s2fieldsofstudy": [
"Psychology"
],
"sha1": "19e1963ce2816aa87453dbefa8a1bda481b0264a",
"year": 2014
}
|
pes2o/s2orc
|
Multi-Level Factors Affecting Entry into and Engagement in the HIV Continuum of Care in Iringa, Tanzania
Progression through the HIV continuum of care, from HIV testing to lifelong retention in antiretroviral therapy (ART) care and treatment programs, is critical to the success of HIV treatment and prevention efforts. However, significant losses occur at each stage of the continuum and little is known about contextual factors contributing to disengagement at these stages. This study sought to explore multi-level barriers and facilitators influencing entry into and engagement in the continuum of care in Iringa, Tanzania. We used a mixed-methods study design including facility-based assessments and interviews with providers and clients of HIV testing and treatment services; interviews, focus group discussions and observations with community-based providers and clients of HIV care and support services; and longitudinal interviews with men and women living with HIV to understand their trajectories in care. Data were analyzed using narrative analysis to identify key themes across levels and stages in the continuum of care. Participants identified multiple compounding barriers to progression through the continuum of care at the individual, facility, community and structural levels. Key barriers included the reluctance to engage in HIV services while healthy, rigid clinic policies, disrespectful treatment from service providers, stock-outs of supplies, stigma and discrimination, alternate healing systems, distance to health facilities and poverty. Social support from family, friends or support groups, home-based care providers, income generating opportunities and community mobilization activities facilitated engagement throughout the HIV continuum. Findings highlight the complex, multi-dimensional dynamics that individuals experience throughout the continuum of care and underscore the importance of a holistic and multi-level perspective to understand this process. Addressing barriers at each level is important to promoting increased engagement throughout the continuum.
Introduction
In the past decade, the scale-up of antiretroviral therapy (ART) has led to an unprecedented number of people living with HIV (PLHIV) on treatment, resulting in decreased HIV-related morbidity, mortality and onward transmission [1,2]. Despite these considerable gains, ART coverage remains low. In 2012, treatment coverage in low- and middle-income countries was estimated at 61% of all eligible individuals under the 2010 World Health Organization (WHO) HIV treatment guidelines. However, under new WHO treatment guidelines, which recommend ART initiation at ≤500 cells/mm3, this ART coverage represents only 34% of the people eligible in 2013 [3,4], and considerable obstacles to universal treatment access remain.
Successful ART programs depend on the progression of PLHIV through a number of stages. This continuum of care, also referred to as a cascade of care or HIV care pathway, includes HIV testing and counseling (HTC), linkage to care (defined here as the initial engagement with the health system to receive HIV care and treatment services following an HIV diagnosis), eligibility assessment and clinical staging or CD4 testing, pre-ART care, ART initiation and lifelong ART adherence and retention in care. Considerable losses along each stage of this continuum have been well documented, especially in the pre-ART initiation [5][6][7] and retention stages [8]. However, factors contributing to suboptimal progression at each stage of the continuum are poorly understood. A systematic review of factors affecting linkages to ART in sub-Saharan Africa found that key barriers to engagement included transport costs and distance to health facilities, stigma and fear of disclosure and limited human resources [9]. MacPherson et al. reported that socio-cultural norms, support networks and limited human resources and laboratory capacity affected progression throughout the continuum in Malawi [10].
The Iringa region of Tanzania is located 500 kilometers (km) southwest of Dar es Salaam and has a population of approximately 900,000 people. The Tanzam Highway, a conduit between Dar es Salaam and Northern Zambia, cuts through Iringa, which is also home to large tea and timber plantations, leading to high rates of mobility which potentially contribute to HIV risk. Most people in Iringa are involved in small-scale agriculture [11]. A majority of the population lives more than 2 km from the nearest health facility, while over one third of rural residents live greater than 5 km away [12]. Iringa has the second highest HIV prevalence (9.1%) in the country [12]. It is estimated that only 68.6% of women and 52.7% of men in Iringa have ever been tested for HIV and received their results (26.0% and 28.2% in the past year, respectively) [12]. Thus, many individuals are unaware of their HIV serostatus and miss the opportunity for linkage to care and treatment services. Consistent with findings across sub-Saharan Africa, findings from other regions of Tanzania indicate that a substantial portion of individuals who receive a positive HIV diagnosis are not referred for subsequent care, and among those who are referred, many fail to register for services and clinical staging [13,14]. Country-level estimates suggest that less than one third of those eligible for ART based on WHO 2013 guidelines are receiving it [3].
While there is considerable need to improve access to and retention in HIV services in Iringa and elsewhere, little is known about barriers or facilitators to linkages throughout the HIV continuum of care. In this study, we examined multi-level barriers and facilitators influencing entry into and engagement in the continuum of care in Iringa, Tanzania.
Methods
Between March and November 2013, we used a mixed-methods approach to examine individual-, facility- and community-level factors affecting linkages to HIV care and treatment.
Facility-based data collection
At the facility level, we conducted assessments of a sample of HIV testing and treatment services, including HTC sites (n = 4), care and treatment centers (CTC) (n = 4), prevention of mother-to-child transmission (PMTCT) services (n = 4) and voluntary medical male circumcision (VMMC) outreach sites (n = 1). Facilities were purposively sampled throughout Iringa region to ensure diversity in urban/rural location and facility size. At each facility, we completed a structured facility checklist to gather basic operational information and conducted a direct observation to collect information on the flow of clientele through each facility, provider-client interactions, client-client interactions, wait times, and the general ambience. In addition, convenience sampling was used to identify providers (n = 26) and HIV-infected clients (n = 75) at facilities who participated in qualitative in-depth interviews (IDIs). Facility-based data collection is summarized in Table 1.
Community-based data collection
In addition to facility-based data, we visited community-based providers of HIV care and support services, including support groups (n = 5), traditional healers (n = 12) and spiritual/religious healers (n = 2) sampled from urban and rural areas throughout Iringa region. In-depth qualitative interviews were conducted with providers (n = 52) and clients (n = 42) of these services who were conveniently sampled at these locations. IDIs were conducted with government representatives who had worked with traditional healers in the region (n = 4). We also conducted direct observations at community-based facilities. Finally, focus group discussions (FGD) were organized with existing support groups for PLHIV (n = 5) ( Table 1).
Cohort of PLHIV
To further explore the social context and dynamics of barriers and facilitators of engagement in care, we conducted longitudinal interviews with 48 PLHIV followed prospectively at three time points over the course of six months. Participants were stratified by gender, ART status and urban/rural location (Table 1).
Data analysis
All data were collected by trained Tanzanian research assistants. Interviews and focus groups were recorded, transcribed, and translated into English. Analysis of qualitative data was conducted through a multi-stage process using a narrative and case study approach. First, each individual interview with providers and clients was summarized and re-written in brief narrative form, with particular attention to structured categories of interest related to linkages within the continuum of care. Second, for each facility, these individual narrative summaries from both client and provider interviews were brought together with data from the facility checklist and observations to develop a case summary report for that facility. In this way, the analysis brought together a variety of perspectives on individual-level, provider-level, and facility-level barriers and facilitators to engagement in HIV services for a particular setting. Similar methodology was employed for the longitudinal interviews that also sought to document the story or trajectory of each participant through the continuum of care and key factors linked to continuity or breakages in progression. Key themes were identified across these narratives and developed into a conceptual framework summarizing findings across levels and steps in the continuum of care.
Ethics Statement
All participants provided oral informed consent prior to data collection. Written consent was not obtained because the authors felt that asking individuals to disclose their full names by providing written consent would decrease participants' anonymity. Participants received 10,000 Tanzanian Shillings (approximately USD 6) at the end of each interview or FGD for their time and transport. Ethical approval, including the decision to obtain oral informed consent, was obtained from Muhimbili University of Health and Allied Sciences, the Tanzania National Institute for Medical Research and Johns Hopkins Bloomberg School of Public Health.
Results
We present factors influencing engagement at each stage of the continuum of HIV care and treatment, including HTC, linkage to care, clinical staging, pre-ART care, ART and cross-cutting issues. At each stage in the HIV care continuum, we identified barriers and facilitators at the individual, facility, community, and structural levels, presented in a multi-level continuum of care framework (Figure 1). Representative quotes from key themes are summarized in Table 2. In the cohort of PLHIV, data collectors made multiple attempts to contact cohort participants for follow-up interviews. If a participant could not be reached after three separate attempts (including phone calls and visits to the participant's home of record), the participant was designated as lost to follow-up. Seven participants (3 females, 4 males) were lost to follow-up during the second round of data collection and two participants (both male) were lost to follow-up during the third round of data collection, for an overall retention rate of 81%.
HIV testing and counseling
HTC is available in Iringa through a mix of client-initiated and provider-initiated services. Client-initiated voluntary counseling and testing services are offered through both static and mobile services. Provider-initiated testing and counseling (PITC) should be routinely practiced in all health facilities. In addition, PITC services are offered routinely with VMMC and antenatal care (ANC) services.
Study participants described a perception that HIV is characterized by severe illness and that infected individuals would be visibly sick. Participants described a strong reluctance to test while healthy and said that individuals often distrust positive test results if they are not sick and therefore often seek multiple HIV tests to confirm positive results. Fear of the stress of an HIV-positive test result and internalized stigma were additional individual-level barriers to HTC, especially among asymptomatic individuals (Table 2, quote 1.1).
A majority of our participants received HTC only after being sick for extended periods of time. Medical care was sought at multiple health facilities for recurring illnesses, but HTC was rarely recommended during these visits. Despite the government policy of PITC, this service was not routinely offered to participants in our study, and providers missed critical cases among these clients (Table 2, quote 1.2.1).
Persistent stock-outs of HIV test kits, which were common throughout Iringa region for the duration of this study, prevented access to HTC services. Service providers noted that they routinely turned clients away, while clients explained that the stock-out had caused many people to give up on HIV testing completely (Table 2, quote 1.3.1).
In addition, women in our study were often denied antenatal care (ANC) services unless they brought their male partners to the first ANC visit for couples HIV testing. During a direct observation of a PMTCT facility, a data collector noted a poster on the wall that stated (in Swahili), ''It is required for a pregnant woman to come with her husband/partner on her first [ANC] visit. You will not be served without following this.'' Multiple women in our study reported being denied ANC services because they were not accompanied by a male partner, and participants shared stories of friends who delayed ANC services or avoided them altogether (Table 2, quote 1.4.1 and 1.4.2). While Tanzania national guidelines do not support this practice, they do encourage increased male participation and recommend that ANC providers promote ''couple/partner HIV/STI testing and counseling for all young women'' [15]. Service providers interviewed for this study explained that they required women to bring their partners in order to increase male involvement, which appears to be a misrepresentation of national guidelines, with the unintended negative consequence of discouraging single women from seeking care.
Traditional healers offered an alternative to the formal health system and some individuals attended these services after multiple failed attempts to determine the cause of their illness. Clients of traditional healers were rarely advised to test for HIV and noted they were always told that they were bewitched and in need of traditional medicine, preventing further engagement with the health system (Table 2, quote 1.5.1).
Several factors facilitated HTC, including the perception that ART is highly efficacious, often as a result of witnessing health improvements when a friend or relative initiated ART; increased perceived risk of infection resulting from an AIDS-related death of a spouse or family member; or having multiple sexual partners.
Access to and linkage to care
Following HTC, all additional HIV services in Iringa are provided at CTCs. The linkage to care phase includes the process of a newly diagnosed HIV-infected individual successfully progressing from an HTC facility to a CTC for assessment of ART eligibility.
One key facilitator prompting linkage to care was severe illness at the time of HIV diagnosis. These clients expressed relief in determining the cause of their illness and were happy to immediately link to a CTC in order to initiate treatment. In contrast, participants widely acknowledged that asymptomatic individuals were much less likely to link to a CTC because they either did not believe the HIV test results or did not see the point in receiving care (Table 2, quote 2.1.1). Several clients of spiritual healing services also delayed linking to care because they had faith that God would either cure them or stop disease progression.
Co-located HTC and CTC services facilitated linkages to care. Participants agreed that linking a person to a CTC on the same day increased that person's chance of continuing to receive HIV services, especially when a service provider personally escorted the client. However, many HTC facilities were stand-alone, so clients were given a referral card and told to travel to the nearest CTC on their own, which was noted as a key point at which individuals drop out of services (Table 2, quote 2.2.1). In addition, providers had no way to follow up with patients to ensure successful linkage to treatment and care services because no active tracking systems existed in the facilities we assessed.
Clients often encountered challenges during their initial visit and were told to leave and return on another day due to restricted opening hours, limited capacity for enrollment and shortages of providers (Table 2, quote 2.3.1). Some participants also expressed frustration at the way they were treated by CTC providers during their initial visit.
Clinical Staging and CD4 testing
After successfully linking to HIV care and treatment services, clients undergo clinical and/or laboratory staging to determine whether they are eligible for either pre-ART or ART care. In Iringa, this process most commonly involved a medical evaluation and CD4 testing. According to national guidelines, ART initiation is recommended for individuals with WHO stage 3 and 4 clinical criteria, regardless of CD4 cell count, and for adults and adolescents with a CD4 count <350 cells/mm3, regardless of clinical symptoms [15].
While government guidelines allow for ART initiation in individuals based on clinical evaluation, all participants in our study reported being required to receive CD4 testing before ART initiation. This process was a major challenge for most CTC clients in this study. From the results of the facility checklist, we found that of the four CTC facilities visited, only one had a working CD4 machine, while two had machines that were either broken or lacked reagents. Clients were required to get a referral letter from their original CTC and travel long distances; they were often turned away, and test results were commonly lost, forcing clients to repeat the process multiple times (Table 2, quote 3.1.1). Because of these inconveniences, participants said that many PLHIV lose hope and drop out of HIV services at this stage.
Pre-ART care
Following staging, PLHIV who are not yet eligible for ART engage in pre-ART care services until they are eligible to initiate ART. This stage in the continuum of care should include regular clinical assessments for ART eligibility and consistent HIV care. According to Tanzanian guidelines, individuals in this stage of the continuum of care should receive cotrimoxazole prophylaxis, a combination of antibiotics used to treat a range of opportunistic infections associated with HIV [15]. Clients are expected to return to the CTC monthly for monitoring and to receive cotrimoxazole, free of charge, and should receive CD4 testing every six months until eligible for ART initiation.
Many ART-ineligible clients who were enrolled in pre-ART care viewed receiving cotrimoxazole as the main benefit of attending monthly appointments. In fact, several clients attributed dramatic improvements in their health to this medication. Unfortunately, chronic stock-outs were common; six out of 11 facilities reported cotrimoxazole stock-outs at the time of the facility checklist, and study participants were often told to purchase the medication from a private pharmacy. Participants listed this as a significant factor in dropout from pre-ART care services.
ART initiation, adherence and retention
The next stage in the continuum of care is ART initiation. Current Tanzanian national guidelines recommend ART for individuals with CD4 counts of ≤350 cells/mm3 or those with severe or advanced clinical disease. After initiating ART, PLHIV are counseled to adhere to daily ART regimens and should return for regular clinic appointments for assessment, counseling and medication refills.
Many ART clients reported high levels of ART adherence due to health improvements experienced after ART initiation. Additionally, clients were motivated to continue attending CTC services because seeing other PLHIV made them feel that they were ''not alone'' and helped them to see HIV as a ''normal'' disease ( Table 2, quote 5.1.1). There were potential provider-level factors that acted as barriers to ART initiation and adherence. Several cohort participants reported that they were told by their doctors to discontinue ART when they became healthy. Participants also noted that they were not initiated on ART, even after their clinical symptoms indicated ART initiation or when their CD4 count was well below 200. However, these situations were reported by only a few study participants and were not supported by observations or interviews with providers, so the generalizability of these findings is not known.
One of the most significant barriers to retention in CTC services was disrespectful and abusive treatment by service providers. Almost all participants in this study encountered negative experiences where they were shouted at, ''scolded'' or ''punished'' by one or more providers. Often, negative interactions occurred when a client disobeyed rules set by providers, most commonly arriving late or missing an appointment (Table 2, quote 5.2.1). When clients returned to the CTC on a day other than their assigned clinic day, they were often either denied services completely or forced to wait until the end of the day as punishment or ''correction'' for their behavior. Harsh and disrespectful treatment was the most common reason for CTC clients to disengage from care.
Visiting traditional and spiritual healers were common alternate pathways to care. While most traditional healers said that they were unable to treat HIV, several noted that they routinely diagnosed, treated and cured PLHIV. In addition, all spiritual healers expressed faith in God's ability to cure HIV and encouraged their clients to trust in God's healing powers, and one spiritual healer explained that prayers helped PLHIV more than ART (Table 2, quote 5.3.1). A majority of alternate healers said that while they did not discourage clients from attending HIV services, they generally did not actively encourage attendance either, and several participants knew of people who were encouraged to stop taking ART while participating in spiritual healing services. Most clients of traditional and spiritual healers in this study attended alternate healing services in parallel with CTC services. These individuals were aware of the importance of ART for their survival but hoped that they might be cured through alternate healing systems, so they continued to attend both services. Clients of spiritual healers also discussed the support they received from other PLHIV during healing services as a reason for continued attendance.
Support groups for PLHIV were identified throughout Iringa region. These groups were generally initiated and managed by PLHIV in the community, and members attended regular meetings. Each group visited in this study also had an economic strengthening component through income generating activities or savings programs. Members of PLHIV support groups benefitted in many ways which facilitated their involvement in CTC services, including through social support and income generating opportunities. Most group members said they were no longer ''embarrassed'' of being HIV-positive and did not feel ashamed to attend CTC services because they knew that they were not alone (Table 2, quote 5.4.1). Clients of HIV services who were not members of support groups expressed their desire for more opportunities for support and income generating activities but were not aware of organized groups in the area.
Home-based care providers (HBCs), community volunteers who serve as a link between the health system and the community by visiting PLHIV in their homes and following up after missed appointments, were present in two CTC facilities in this study. In these facilities, HBCs were mentioned as a very effective system for following up with patients who did not attend their CTC visits, since they lived and worked at the village level. Service providers at one CTC facility with HBCs said that they rarely lost clients to follow-up due to the strong support provided by HBCs (Table 2, quote 5.5.1).
Cross-cutting themes
In addition to the themes presented above which correspond to a specific stage within the continuum of care, several cross-cutting themes were identified which influenced engagement in HIV services at all stages.
Participants described frustration after traveling long distances for care only to be turned away if they arrived outside of clinic hours, if providers were too busy to serve them, if there were stock-outs of HIV test kits and cotrimoxazole, or when they encountered challenges accessing CD4 testing services (Table 2, quote 6.1.1). Poor communication between providers and clients limited clients' abilities to fully understand their situation, which led to misunderstanding of important concepts such as what CD4 levels mean, the difference between cotrimoxazole and ART, and the importance of lifelong ART adherence. Finally, long wait times and severe crowding, especially at CTC facilities, caused frustration and led some PLHIV to disengage from care.
Stigma, discrimination and internalized stigma were very common themes throughout this study which impeded individuals' ability to access and participate in HIV services. Participants commonly reported that people avoided HTC due to fear of discrimination they would face if others discovered their HIV status, and many CTCs had waiting areas or queues that were outside, making the patients visible to all other hospital attendees or even people passing by. Some CTC clients traveled long distances to avoid seeing people that they knew at CTC facilities in their communities, while others discussed the ''humiliation'' they were forced to endure every time they attended CTC services. Many clients of HIV services also shared experiences of being discriminated against in their communities (Table 2, quote 6.2.1). Because of the real and perceived social stigma, many participants had not disclosed their HIV status to anyone outside of their immediate family, and many had not disclosed to their spouses due to fear of violence or abandonment.
Service providers discussed challenges they faced which ultimately led to burnout and lower quality care, including severe shortages of staff, lack of incentives and inadequate training. Doctors were often assigned to multiple departments, and providers mentioned that they turned clients away from HTC or CTC services if they came towards the end of the day, since they were too exhausted to see more clients (Table 2, quote 6.3.1).
Individual trajectories
While the results presented above highlight specific factors which affect an individual's decision to engage in HIV services, these single barriers and facilitators do not capture the complexity of participant experiences, which often involve multiple competing priorities and challenges. To illustrate this complexity, we present three case studies from participants in the longitudinal cohort to contextualize the themes within a person's life trajectory. All names are pseudonyms.
Emmanuel: Disengagement following poor quality care. In 2011, Emmanuel's wife was pregnant. She told him that it was mandatory for men to accompany their wives to ANC services for HIV testing; if he did not attend, his wife would not receive services. Emmanuel hesitantly agreed to accompany his wife. At this visit, he was diagnosed with HIV even though he felt healthy and had no visible signs of illness.
His wife encouraged him to pursue further HIV care services, so he immediately travelled to the nearest CTC which was more than two hours away via public transportation. However, he became frustrated with the poor quality of care and long distances and disengaged from HIV services after only four visits.
Emmanuel's decision to disengage from care was a culmination of several factors, including inadequate services, being told repeatedly to return on a different day, and the cost associated with each trip. During his visits, he noted that he was not provided with any education or medication. He explained, ''I gave them my [CTC] card and they would tell me to come back the next month. I would go the next month, again they would write for me [to come back the next month], without getting any medication.'' Similarly, he tried four times to receive his CD4 test results but was denied every time and told to return on another day. Emmanuel explained that he could not justify wasting his time and money on CTC visits when the quality of services was so low and he was in good health, so he disengaged from care and never returned.
Linda: Internalized stigma. In 2005, Linda was diagnosed with HIV after being chronically ill for several years. Before her HIV diagnosis, Linda had frequent fevers, chest pain and tuberculosis. Even after completing tuberculosis treatment, she continued to experience recurrent illness but had never been advised to test for HIV by health providers. After ruling out all other possible problems, Linda decided to receive HTC.
After receiving her HIV-positive diagnosis, Linda was shocked and contemplated suicide. She recalled, ''I wished to take ten tablets at once so that I would die. That's what I was thinking.'' However, several service providers supported her and convinced her to accept her diagnosis and engage in HIV care and treatment services.
Linda did not disclose her HIV status to her husband or her children for one year due to fear of abandonment and discrimination. She believed that people perceived HIV to be a disease brought about by casual sex, and thought that PLHIV were viewed as ''hooligans'' or ''prostitutes.'' Linda found this particularly challenging because she was an older woman and felt that she was being harshly judged by people in her community.
Linda's decision to receive treatment was challenging and complex. When she arrived for services, extremely long lines, poor ventilation, and waiting for long periods in public where others could identify her as HIV-positive made her feel humiliated. Despite these challenges, Linda continued to engage in HIV services, which is due in large part to her participation in a support group for PLHIV. The group helped her to cope with the challenges she faced while waiting for CTC services and made her understand the importance of ART adherence for her survival.
Furaha: Verbal abuse from service providers. Furaha was diagnosed with HIV in 2007 after suffering from recurrent illnesses for more than one year. She was relieved when she received her HIV diagnosis because she finally discovered what was bothering her and felt happy that she could get relief. She said that her health improved dramatically after ART initiation, and she attended CTC services regularly for five years. In 2012, however, she disengaged from care after missing one CTC appointment because she was working away from home. When she returned on another date, she was scolded and yelled at by the providers, who refused to give her ART. She tried to return several weeks later but was still denied services as a punishment for missing one appointment, and she did not want to continue to face service providers who treated her so poorly.
During the period that she was disengaged from care, Furaha purchased cotrimoxazole from the pharmacy and ''borrowed'' ART from her friends who were engaged in CTC services. She expressed concern and anxiety about not being able to adhere to ART and said, ''I have to go back because I don't want to die. I have to take the medications to stay alive …I pray that they accept me without scolding''.
During the course of this study, Furaha was successfully able to re-engage in care. The doctor ''warned'' her not to miss appointments again and agreed to allow her to resume ART, but only after she repeated three weeks of ART training.
Discussion
Understanding factors which motivate and prevent PLHIV from engaging in and adhering to care at each step along the continuum is critical to successful HIV treatment and prevention efforts. Our findings are consistent with previous studies assessing barriers and facilitators throughout various stages of the HIV care continuum [9,10,[16][17][18][19]. While many of the barriers presented here have been looked at independently as factors associated with negative care outcomes, few if any studies have looked holistically at all of these multiple levels and types of factors influencing the full continuum of care in a given setting. This study highlights the complex interplay of these factors and the need to provide comprehensive solutions which address the multiple challenges to providing HIV treatment and care services.
An individual's health was a strong influencing factor in progression through the continuum of care. Those who were visibly sick and had ruled out other causes of disease were most likely to seek HTC services, accept their diagnosis and immediately link to care and treatment. Further, these individuals were also the most likely to see dramatic improvements in their health after initiating cotrimoxazole and/or ART, and therefore viewed these medications as important for sustained health. In contrast, healthy participants in our study expressed reluctance to receive HTC and were more likely to delay linking to care or disengage from care and treatment services, which is consistent with other findings throughout sub-Saharan Africa [20][21][22][23]. When faced with additional barriers to care such as long distances, high transport costs, stigma and risk of verbal abuse from providers, these individuals often chose not to engage in care because the perceived need for medical care was outweighed by multiple barriers and competing priorities. As treatment guidelines evolve to recommend ART initiation at higher CD4 counts, more people will initiate ART before a noticeable decline in health. Identifying these individuals and ensuring successful progression through the continuum of care is critical, but will be challenging. Fully implementing the policy of universal and routine PITC services in all health facilities to normalize HTC, and behavior change communication strategies to promote earlier testing and engagement in care, could change the current social norms around HIV testing and promote earlier uptake of HIV services.
Persistent stock-outs of supplies, including HIV test kits, CD4 reagents and cotrimoxazole, were common throughout Iringa region during this study. In addition to causing frustration and demotivation among service providers and clients, these stock-outs decreased trust in the health system, promoted disengagement from care and led to poor health outcomes. HTC providers noted that they routinely turned clients away during frequent HIV test kit stock-outs and believed that these people would give up and not return at a later date. In addition, lack of timely CD4 counts due to broken machines or missing reagents led to an inability to determine ART eligibility and has been shown to delay ART initiation [10,24,25]. Provision of free cotrimoxazole, which was described by our study participants as the most important element of pre-ART care, has been shown to double retention in pre-ART care services [26]. Unfortunately, this benefit was undermined by chronic cotrimoxazole stock-outs, which led to deep dissatisfaction and reduced clients' perceived need to attend monthly visits. Participants in our study who did not believe they would receive required services were less likely to invest the time, money and effort needed to attend visits to health facilities. Stock-outs of HIV-related medication and supplies have been documented in other African settings [27][28][29][30] and these findings highlight the need to strengthen supply chain management systems. In addition, point-of-care CD4 testing, which provides immediate results for use in patient care, could eliminate many of the logistical and operational barriers to CD4 testing noted in this study [31][32][33].
Provider attitudes and treatment of clients were significant barriers to retention in care and the main reason for disengagement from CTC services among clients of ART care. Many study participants endured verbal abuse and disrespectful treatment by providers at CTC facilities because accepting this mistreatment was the only way to receive ART. However, several clients disengaged from care when they could no longer handle being degraded, ridiculed and punished, even though they knew their health would suffer as a consequence. These findings point to a clear need to improve provider-client interactions as a means of reducing disengagement from care. Physical and verbal abuse against patients has been reported in a variety of health settings throughout sub-Saharan Africa [34][35][36][37]. In a study in South Africa, Jewkes et al. described providers' abusive treatment as ''a complex interplay of concerns including organizational issues, professional insecurities, perceived need to assert 'control' over the environment and sanctioning of the use of coercive and punitive measures to do so, and an underpinning ideology of patient inferiority'' [38]. They explained that violence was allowed to become commonplace due to lack of accountability and limited action taken by managers against service providers who abuse patients, which appears consistent with our findings. Service providers in our study discussed burnout and demotivation as a result of staff shortages, unrealistic workloads and lack of supervision and training. Health-system-level changes to increase human resources, provide incentives, ensure adequate support systems and provide ongoing training and supervision are needed to increase service provider motivation and improve client-provider interactions.
Consistent with other studies, our findings suggest that HIV-related stigma and discrimination are key barriers to engagement in HIV services throughout the continuum, leading to suboptimal levels of HTC [39], disclosure [40], retention in care [41] and ART adherence [42,43]. Negative health outcomes resulting from HIV-related stigma have been well documented [44][45][46][47][48] and our findings highlight the need for stigma-reduction strategies to accompany HIV prevention and treatment efforts. While effective interventions to reduce community-level stigma and damaging social norms are not well-defined [49], there is general consensus that four basic approaches are effective in reducing stigmatizing attitudes among individuals and groups: information, skills-building, counselling and PLHIV testimonials [50,51].
Community-level approaches to HIV service delivery could be one strategy to reduce the barriers at multiple levels identified in our study. A growing body of evidence suggests that novel strategies to bring HIV services to the community level through task shifting, such as home-based HIV testing [52][53][54][55] and home-based ART initiation [56] and delivery [57][58][59][60][61], are effective, feasible and acceptable [62]. In addition, positive side effects of home-based ART delivery programs include increased community-level social support and decreased discrimination [61,63], and clients of these services reported saving time and money due to reduced clinic visits [64]. These novel community-based strategies would require significant political commitment and operational research to tailor programs to the local context, but may have the potential to strengthen all stages of the HIV care continuum by improving identification of HIV-infected individuals, simplifying linkages to care, improving retention in ART care programs, and reducing structural-level barriers such as distance, cost, and stigma and discrimination.
Finally, leveraging and expanding services and opportunities identified in this study which facilitate engagement in care, including PLHIV support groups, income generating opportunities, HBC providers and government engagement with alternate healing systems, could strengthen linkages to care and help to reduce the impact of additional barriers to engagement and retention in care in this setting.
Strengths of this study include the diversity of respondents and data collection methods, which provided multiple perspectives on factors affecting the continuum of care in Iringa. In addition, the longitudinal nature of the qualitative cohort allowed us to gain an in-depth understanding of the complexity of PLHIV's experiences as they moved through the continuum of care. At the facility level, our interviews with clients of HIV services and service providers, direct observations and facility checklists allowed us to triangulate findings. In addition, interviews and focus group discussions with members of support groups, traditional healers and spiritual healers add a unique dimension and understanding to community-level influence throughout the continuum of care.
Despite these strengths, the study has several limitations. First, HIV test kit stock-outs limited our ability to interview newly diagnosed clients, especially those who chose not to link to further care and treatment services, which could have provided valuable insight into barriers at this stage. Another limitation was our inability to recruit HIV-infected clients of traditional healers. Traditional healers universally noted that PLHIV do not disclose their status, and they therefore were not aware of any HIV-positive clients. While we would have liked to understand the perspective of this group, we were able to capture experiences from longitudinal cohort participants who had visited traditional healers themselves or knew of people who had.
Conclusion
This study presents a multi-level framework for understanding barriers and facilitators to linkages to care in Iringa, Tanzania. Our findings highlight the complex, multi-dimensional dynamics that individuals experience throughout the continuum of care and underscore the importance of taking a holistic and multi-level perspective to understand this process. Interventions to address single barriers identified in this study are insufficient; our findings illustrate how multiple barriers interact and influence decisions about engagement in care. Addressing barriers at multiple levels is needed to promote increased engagement and retention in care.
|
v3-fos-license
|
2023-12-02T17:29:41.905Z
|
2023-11-28T00:00:00.000
|
265537177
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2076-0817/12/12/1397/pdf?version=1701145894",
"pdf_hash": "6aaf0b74d9b326d7d757baaccc9a047e4292dfca",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45958",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "37ead3d77cd5f6b64c752ee7361f8d9738b79de0",
"year": 2023
}
|
pes2o/s2orc
|
Antibiotic Utilization in Hospitalized Children with Bronchiolitis: A Prospective Study Investigating Clinical and Epidemiological Characteristics at a Secondary Hospital in Madrid (2004–2022)
Bronchiolitis is a viral respiratory infection, with respiratory syncytial virus (RSV) being the most frequent agent, requiring hospitalization in 1% of affected children. However, there continues to be a noteworthy incidence of antibiotic prescription in this setting, further exacerbating the global issue of antibiotic resistance. This study, conducted at Severo Ochoa Hospital in Madrid, Spain, focused on antibiotic usage in children under 2 years of age who were hospitalized for bronchiolitis between 2004 and 2022. In that time, 5438 children were admitted with acute respiratory infection, and 1715 infants (31.5%) with acute bronchiolitis were included. In total, 1470 (87%) had a positive viral identification (66% RSV, 32% HRV). Initially, antibiotics were prescribed to 13.4% of infants, but this percentage decreased to 7% during the COVID-19 pandemic thanks to adherence to guidelines and the implementation of rapid and precise viral diagnostic methods in the hospital. HBoV- and HAdV-infected children and those with viral coinfections were more likely to receive antibiotics in the univariate analysis. A multivariate logistic regression analysis revealed a statistically independent association between antibiotic prescription and fever > 38 °C (p < 0.001), abnormal chest X-ray (p < 0.001), ICU admission (p = 0.015), and serum CRP (p < 0.001). In conclusion, following guidelines and the availability of rapid and reliable viral diagnostic methods dramatically reduce the unnecessary use of antibiotics in infants with severe bronchiolitis.
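As a rough illustration of the multivariate step summarized above, the sketch below fits a logistic regression of antibiotic prescription on the four covariates reported as independently associated with it. The file name and column names are hypothetical stand-ins; this is not the study's actual analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical dataset: one row per admitted infant; column names are
# illustrative stand-ins, not the study's real variable names.
df = pd.read_csv("bronchiolitis_admissions.csv")
df["fever_gt_38"] = (df["max_temp_c"] > 38).astype(int)

predictors = ["fever_gt_38", "abnormal_cxr", "icu_admission", "crp_mg_l"]
X = sm.add_constant(df[predictors])   # adds the intercept term
y = df["antibiotic_prescribed"]       # 1 = antibiotics given, 0 = not

model = sm.Logit(y, X).fit()
print(model.summary())                # coefficients and p-values

# Adjusted odds ratios with 95% confidence intervals
or_table = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI 2.5%": np.exp(model.conf_int()[0]),
    "CI 97.5%": np.exp(model.conf_int()[1]),
})
print(or_table)
```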
Introduction
Bronchiolitis is traditionally defined as the initial episode of expiratory dyspnea accompanied by catarrhal symptoms occurring within the first two years of life (McConnochie criteria) [1]. Through active surveillance, the presence of at least one virus can be identified in approximately 85% of cases, with 25% of cases exhibiting multiple viral infections [2]. Respiratory syncytial virus (RSV) accounts for 65-75% of the detected viruses, with even higher rates during traditional winter epidemics. The presence of coinfections, particularly when RSV is involved, might contribute to greater clinical severity, although consensus on this matter remains elusive within the scientific literature [3]. Among children under the age of two who develop bronchiolitis, approximately 1% necessitate hospitalization, with a notable proportion requiring respiratory support and intensive care. The peak incidence of hospitalization is observed between 3 and 6 months of age [4,5]. In Spain, the cumulative incidence of hospital admissions for bronchiolitis, as evidenced by data from a cohort of children under 24 months, ranges from 1% to 3.5% (with a rate of 5% among those under 3 months of age, 3.7% among those under 6 months, and 2.5% among those under 12 months) [6]. While certain established risk factors such as prematurity, age under 3 months, or comorbidities may increase the likelihood of a more severe outcome, the majority of hospitalized children are healthy infants.
The second most common virus is rhinovirus (HRV), responsible for approximately 15% of bronchiolitis cases [7]. HRV tends to be prevalent during the early autumn and spring seasons, although it can manifest at any time of the year. To a lesser extent, other viruses such as human bocavirus (HBoV), which typically circulates in conjunction with RSV during the winter and often presents as a co-infection, and human metapneumovirus (HMPV), which usually exhibits epidemics during the spring, also serve as causative agents of bronchiolitis. Human adenovirus (HAdV) has been frequently identified in co-infections as well [8]. In the case of SARS-CoV-2, it has been shown to induce bronchiolitis, albeit rarely, and generally results in milder symptoms in comparison to other viruses, such as RSV or even HRV [9,10].
Although bronchiolitis is a viral process, a high percentage of children with the condition receive treatment with antibiotics. Antibiotics have allowed great progress in the treatment of infectious diseases, reducing the morbidity and mortality of the population [11], but their misuse has also created one of the great problems facing humanity. According to the World Health Organization (WHO), bacterial resistance represents a significant threat to global public health because it reduces the effectiveness of medical treatments and can lead to more serious and difficult-to-treat infections. We are observing a spread of multi-resistant bacteria, and reducing antibiotic resistance is one of the main objectives of the WHO. It is estimated that antibiotic resistance is responsible for 700,000 deaths per year worldwide and 25,000 deaths per year in the EU. In addition, it could cause more mortality than cancer by the year 2050 [12,13].
Antibiotic resistance is a growing problem, and it is largely associated with excessive use of antibiotics, inadequate treatments, and a lack of awareness about the problem [13,14]. The pediatric population is one of the patient groups that receives the greatest number of antibiotic prescriptions [15] and is not immune to this problem. The fight against antibiotic resistance is today a global priority. It is essential to ensure that antibiotics are used rationally and only in those cases in which they are necessary.
This study aims to investigate the utilization of antibiotics in a cohort of children hospitalized for bronchiolitis and identify associated clinical and epidemiological characteristics.
Materials and Methods
This is a sub-study of an ongoing prospective investigation of respiratory tract infections in children, funded by grants from the Spanish Health Research Fund (FIS) under the reference numbers PI98/0310, PI06/0532, PI12/0129, PI15CIII/00028, PI18CIII/00009, PI21CIII/00019, and PI21/00377 and approved by the Medical Ethics Committee at University Hospital Severo Ochoa.
Study Population
The study population encompassed all children under the age of 2 diagnosed with acute bronchiolitis and admitted to the secondary public hospital Severo Ochoa in Leganés, Madrid, between September 2004 and August 2022. Informed consent was duly obtained from parents or legal guardians.
The Severo Ochoa Hospital is the only hospital in Leganés, a city of 186,000 inhabitants, including approximately 32,000 children under the age of 14. The hospital lacks a pediatric intensive care unit (PICU), necessitating the transfer of children requiring such specialized care to a tertiary hospital. Subsequently, patients return to Severo Ochoa Hospital following their stay in the PICU.
Bronchiolitis was defined as the first episode of lower airway respiratory distress in children under 2 years of age [1]. All other episodes of acute expiratory wheezing were considered to be recurrent wheezing and were not included. The only exclusion criterion was refusal to participate in the study.
Study Procedures
All patients were evaluated by an attending physician. Clinical and epidemiological characteristics of the patients were collected. During the hospital stay, and as part of the study, a physician filled out a study questionnaire with the following variables: age, sex, month of admission, clinical diagnosis, history of prematurity and underlying chronic diseases, need for oxygen therapy (evaluated via transcutaneous oxygen saturation), fever and maximum axillary temperature, presence of infiltrates and/or atelectasis in chest X-rays, administration of antibiotic therapy at any time during the admission, length of hospital stay, total white blood cell (WBC) count, C-reactive protein (CRP) serum levels and blood culture results (for those cases in which such tests had been performed), and the results of a virological study.
Specimens consisted of nasopharyngeal aspirates (NPAs) that were taken from each patient at admission.
Antibiotic prescription was considered to be adequate when the patient was diagnosed with a bacterial infection in addition to bronchiolitis, or when the blood culture was positive. Urinary infections confirmed via a urine culture, or concomitant acute otitis media evaluated by a pediatrician and showing redness and bulging of the eardrum or drainage, were considered bacterial infections.
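To make this adequacy rule concrete, the following is a minimal sketch of how it could be encoded; the field names on the patient record are hypothetical placeholders and do not come from the study database.

```python
# Illustrative encoding of the adequacy rule above; field names are
# hypothetical placeholders, not the study's actual variables.
def antibiotic_prescription_adequate(patient: dict) -> bool:
    """True when antibiotics were justified: a documented bacterial
    infection in addition to bronchiolitis, or a positive blood culture."""
    bacterial_infection = (
        patient.get("urine_culture_positive", False)  # confirmed urinary infection
        or patient.get("acute_otitis_media", False)   # pediatrician-confirmed AOM
    )
    return bacterial_infection or patient.get("blood_culture_positive", False)
```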
Virological Study
NPAs were sent for virological investigation to the Influenza and Respiratory Viruses Laboratory at the National Center for Microbiology (ISCIII), Madrid, Spain. Samples were stored at 4 °C in the refrigerator and processed within 24 h after collection. Upon reception, three aliquots were prepared and stored at −80 °C.
Both the reception and the NPA-sample-processing areas were separated from those defined as working areas. RNA and DNA from 200 µL aliquots of NPA were extracted using the QIAamp MinElute Virus Spin Kit in an automated extractor (QIAcube, Qiagen, Valencia, Spain). Respiratory virus detection was performed by four independent real-time multiplex PCR (RT-PCR) assays using the SuperScript III Platinum One-Step Quantitative RT-PCR System (Invitrogen®, Waltham, MA, USA). The first assay detected Influenza A, B, and C viruses; the second assay detected parainfluenza viruses 1 to 4 (PIV), HRV, and enteroviruses; the third assay detected RSV types A and B, human metapneumovirus (hMPV), human bocavirus (hBoV), and AdV. Human coronavirus (HCoV) was investigated using a generic RT-PCR that was able to detect human alpha and beta coronaviruses, HCoV 229E/HCoV NL63, and HCoV OC43/HCoV HKU1. The primers and TaqMan probes used in this study have already been reported by the study investigators [16]. In addition, the detection of SARS-CoV-2 was performed on RNA extracted from NPAs from 2020 using a real-time RT-PCR assay based on the method designed by Corman et al. [17] for the specific amplification of the E gene using the One-Step RT-PCR Kit (NZYTech, Lisbon, Portugal). This method was adapted to our laboratory, including the amplification of an internal control from the sample in a multiplex way. Assay sensitivity was regularly assessed to check for potential failures in specificity associated with viral variability. Quality controls organized by the ECDC/WHO and QCMD were conducted annually to check the sensitivity and specificity of all of the tests used. The results of the virological study were available in 5-7 working days. After the onset of the COVID-19 pandemic, in addition to the virological study conducted for this research at the ISCIII, respiratory virus PCR testing was implemented in our hospital, with results available within the day.
Statistical Analysis
Descriptive data were expressed as means and standard deviations (SDs) for continuous variables and through counts and percentages for categorical variables. Continuous variables that followed a normal distribution were compared using a one-way analysis of variance with a Bonferroni correction or through t-tests. Categorical variables were compared using the chi-squared test or Fisher's exact test, as appropriate.
Logistic regression models were constructed to evaluate a range of potential risk factors associated with antibiotic prescription. Each variable was individually introduced into univariate models, and odds ratios (ORs) with corresponding 95% confidence intervals (CIs) were computed. Explanatory factors with p-values < 0.1 in the univariate analysis were subsequently analyzed in a multiple regression model. A multivariate backward stepwise logistic regression model was employed to determine adjusted ORs along with 95% CIs, enabling the estimation of independent associations between various factors and antibiotic prescription.
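To make the two-stage modelling strategy concrete, the sketch below shows one way the univariate screening (p < 0.1) followed by backward stepwise selection could be implemented. It is an illustration using pandas/statsmodels with hypothetical variable names, not the authors' code (the study was analyzed in SPSS), and it assumes a binary 0/1 outcome and numeric or dummy-coded predictors.

```python
# Minimal sketch of univariate screening followed by backward stepwise
# logistic regression; column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm

def univariate_screen(df, outcome, candidates, p_enter=0.1):
    """Fit one univariate logistic model per candidate; keep those with p < p_enter."""
    kept = []
    for var in candidates:
        fit = sm.Logit(df[outcome], sm.add_constant(df[[var]])).fit(disp=0)
        if fit.pvalues[var] < p_enter:
            kept.append(var)
    return kept

def backward_stepwise(df, outcome, variables, p_stay=0.05):
    """Drop the least significant variable until all remaining terms have p < p_stay."""
    vars_ = list(variables)
    while vars_:
        fit = sm.Logit(df[outcome], sm.add_constant(df[vars_])).fit(disp=0)
        worst = fit.pvalues.drop("const").idxmax()
        if fit.pvalues[worst] < p_stay:
            # adjusted ORs: np.exp(fit.params); 95% CIs: np.exp(fit.conf_int())
            return fit
        vars_.remove(worst)
    return None
```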
A p-value of less than 0.05 was considered statistically significant. All analyses were two-tailed and were performed using the Statistical Package for the Social Sciences (SPSS) version 25 (SPSS Inc., Chicago, IL, USA).
Results
A total of 5438 children were admitted to Severo Ochoa Hospital with acute respiratory infections between September 2004 and August 2022. Of these, 1715 infants under the age of 2, who were diagnosed with acute bronchiolitis, consented to participate in the study. The average age of the participants was 5.5 ± 6.7 months, with males comprising 57% of the cohort. Table 1 provides a summary of the primary clinical and demographic characteristics of the study group.
Infants who were prescribed antibiotics were more likely to be older than 6 months (p < 0.001), exhibit symptoms such as fever (p < 0.001), experience hypoxia (p < 0.001), and require admission to the ICU (p < 0.001). Furthermore, a longer duration of fever (p < 0.001) and hypoxia (p = 0.001) were significantly associated with antibiotic prescription. Patients who received antibiotic treatment also had a notably prolonged hospital stay (Table 2).
Patients with abnormal chest X-rays (infiltrate/atelectasis) were six times more likely to receive antibiotic prescriptions (p < 0.001). Notably, there was a remarkable decrease in the request for chest X-rays throughout the study period. Starting in 2015, following the publication of the 2014 AAP guideline for bronchiolitis management, the percentage of chest X-ray requests decreased from 74% to 45% (p < 0.001). The factors associated with a request for chest X-rays are detailed in Table 3.
After a multivariate logistic regression analysis, the factors independently associated with the performance of a chest X-ray were age greater than 6 months (p = 0.04, OR: 2.6, 95% CI: 1.1-6.6), duration of fever and hypoxia (p = 0.07 and p = 0.03, respectively), and admission prior to the publication of the 2014 AAP guideline (p = 0.001, OR: 4.1, 95% CI: 1.7-9.6). Significantly elevated levels of both serum CRP (p < 0.001) and white blood cell count (p = 0.008) were observed in the group of patients treated with antibiotics compared to those who were not treated.
Patients with fever (N = 866) were examined, and the findings are presented in Table 4. Among them, 191 (22%) were administered antibiotics. This subgroup exhibited significantly prolonged durations of fever, hypoxia, and hospital admission. They displayed infiltrates on X-rays and required admission to the PICU more frequently. Although their CRP levels were elevated, there was no significant increase in leukocytosis.
Furthermore, an analysis was conducted on children admitted to the PICU, comparing those who received antibiotics to those who did not (Table 5). Children who received antibiotics experienced fever more frequently and had longer hospital stays. Elevated CRP levels were also observed in this group. However, there were no significant differences in radiological infiltrates or the need for mechanical ventilation between children with and without antibiotic therapy.
We conducted a comparative analysis of antibiotic usage before and after the implementation of the 2014 AAP guideline. Although there was a decrease in antibiotic use following the AAP guideline implementation, with a rate of 21.9% (396 cases) in contrast to the 23.9% (967 cases) observed prior to the guideline, this difference did not reach statistical significance (p = 0.091, OR 1.12 (95% CI: 0.98-1.29)). Similarly, a comparison of antibiotic utilization before and after the implementation of multiplex respiratory PCR in our hospital did not reveal statistically significant changes. The rates were 23.4% (1034 cases) before PCR implementation and 22.4% (229 cases) after PCR implementation (p = 0.492, OR 1.06 (95% CI: 0.9-1.24)). Additionally, we assessed the duration of hospitalization before and after the introduction of multiplex respiratory PCR in our hospital and found that the hospital stay was longer during the period preceding the rapid PCR diagnosis implementation, with a mean and standard deviation of 4.01 (2.72) days compared to 3.57 (2.32) days, p < 0.001.
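For readers who wish to reproduce this kind of before/after comparison, below is a small illustrative helper computing an odds ratio with a Wald 95% CI from a 2×2 table; the counts in the usage line are hypothetical placeholders, not the study's data.

```python
# Odds ratio with a Wald 95% CI from a 2x2 table; illustrative only.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b: exposed with/without the event; c, d: unexposed with/without."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: 120 treated / 380 untreated in period A,
# 90 treated / 410 untreated in period B.
print(odds_ratio_ci(120, 380, 90, 410))
```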
No differences regarding passive smoking, family history of atopy, or breastfeeding were found.
A multivariate logistic regression analysis revealed a statistically independent association between antibiotic prescription and the following variables: fever > 38 °C (p < 0.001), abnormal chest X-ray (p < 0.001), PICU admission (p = 0.015), and serum C-reactive protein (p < 0.001). See Table 6 for details. Patients with HBoV infections showed a tendency to receive antibiotics more frequently, although without reaching statistical significance (p = 0.07).
Discussion
In this study, conducted on a prospective cohort of more than 1700 infants hospitalized for bronchiolitis between 2004 and 2022 in Spain, we observed that 13.4% of them received antibiotic treatment despite a lack of evidence of bacterial infection. This percentage, while lower than that reported in most studies, could likely be further reduced. Antibiotic prescription was considered appropriate in our series in 22.7% of cases, in which a bacterial infection was diagnosed. The main independent risk factor associated with antibiotic prescription was the presence of infiltrates or atelectasis in the chest X-ray, leading to a fivefold increase in the probability. Additionally, other factors independently associated with antibiotic treatment included the presence of fever, admission to the PICU, elevated serum CRP, and HBoV infection.
Numerous guidelines recommend not prescribing antibiotic treatment to children diagnosed with bronchiolitis unless there is evidence of bacterial superinfection [18,19]. However, these patients, particularly those with a more severe condition, are still frequently prescribed antibiotics, potentially influenced by the prescribers' perception of severity. Some studies, including one conducted by Obolski in Israel [20], reported an antibiotic therapy rate of up to 33% among children hospitalized for bronchiolitis. Consistent with our findings, antibiotics were prescribed more frequently to children with fever and more severe symptoms, as well as to those with previous visits to the emergency room. In the USA, between 2007 and 2015, up to a quarter of children with bronchiolitis received antibiotic treatment [21], and this proportion was even higher in Italy, reaching half of all infants [22].
Nonetheless, bacterial coinfections are infrequent in children with bronchiolitis, occurring in approximately 1-2% of cases, including urinary infections. The incidence increases in patients requiring mechanical ventilation [23,24], but coinfection remained uncommon in our series, as it generally is in bronchiolitis. Among our patients treated with antibiotics, only 0.1% had a positive blood culture, and 5% had a positive urinary culture.
It is widely acknowledged that certain viruses, such as HAdV and HBoV, are associated with high fever and a significant increase in C-reactive protein levels, which may mimic bacterial infections. Both viruses are also frequently associated with infiltrates on chest X-rays [8,25,26]. The combination of these factors may justify the suspicion of bacterial infection and lead to increased antibiotic prescriptions, as observed in our series, especially in cases of HBoV infections. In a previous study by our research team involving 319 HBoV infections, we observed that 68% of the cases had fever, 47% had an infiltrate on X-ray, and CRP levels were moderately elevated [25,26]. Identifying the etiology of the episode, as is often the case with RSV infections thanks to the routine availability of rapid tests in emergency departments, can serve as a valid rationale for reducing antibiotic prescriptions.
The value of CRP is somewhat controversial in the literature [27]. Procalcitonin (PCT) and CRP are non-specific markers of the host response to tissue injury and inflammation, and their serum concentrations are usually higher in bacterial than in viral respiratory tract infections. Although PCT has shown somewhat better performance in detecting bacterial infections, especially in children with bronchiolitis in the PICU [28], neither of the two markers usually reaches high ranges in viral infections [28]. It is considered that CRP values above 80-100 mg/L are associated with bacterial superinfections, but these figures are rarely reached in children with bronchiolitis, and many viral infections moderately elevate CRP overall. Alejandre et al., upon assessing 706 Spanish infants treated in the PICU for bronchiolitis, concluded that there is no need to treat for an invasive bacterial infection as long as PCT stays below 1.0 ng/mL and CRP below 70 mg/L. In our study, the mean CRP was generally below this figure, even in patients admitted to the PICU. Unfortunately, PCT was not used throughout the entire period and could not be analyzed. Possibly, the use of markers such as PCT or CRP should be accompanied by the implementation of clinical practice guidelines to translate well into the management of antibiotics in children with bronchiolitis [29].
In our study, there was a decrease in antibiotic usage over time, possibly influenced by two factors. Firstly, there was a reduction in the number of radiographs requested between the earlier years and the more recent ones. It is recognized that viral infections, including RSV, the most common virus, give rise to radiological infiltrates, atelectasis, and viral pneumonia. Although these findings can probably increase the severity of the condition, they do not require antibiotic treatment, yet performing a chest X-ray may induce more antibiotic prescriptions. In this regard, better adherence to guidelines has been described as a factor that prevents antibiotic therapy [30]. In addition to the international guidelines, local guidelines published in Spain have possibly contributed to this reduction in chest X-ray performance [31]. The training of physicians and the implementation of antibiotic stewardship programs are also crucial factors to consider in reducing inappropriate prescriptions [32,33].
Secondly, and perhaps more importantly, in the pre-pandemic era, the virological diagnosis of respiratory infections in our hospital was conducted through the ISCIII as part of this study, with a delay in obtaining results of nearly a week. With the onset of the pandemic, however, the hospital implemented local viral diagnosis via PCR for respiratory viruses. This strategy allowed for much quicker access to virological results, enabling therapeutic decisions to be made in accordance with them and resulting in a 50% reduction in antibiotic prescriptions in children admitted with bronchiolitis. This outcome underscores the importance of implementing molecular diagnostic techniques in hospitals, which, as in our case, can substantially reduce unnecessary antibiotic prescriptions.
The inappropriate use of antibiotics fosters the development of antimicrobial resistance, can contribute to the onset of allergies, and disrupts the intestinal and respiratory microbiota during a critical period when a child's immune system is maturing [34]. For all these reasons, it is crucial to carefully select the children diagnosed with bronchiolitis who truly require this treatment.
Thorough adherence to clinical practice guidelines, the effective implementation of antimicrobial stewardship programs, and the establishment of precise virological diagnoses collectively contribute to the judicious use of antibiotic treatments. Strict adherence to bronchiolitis management protocols has shown a substantial reduction in antibiotic use and an overall enhancement in treatment quality. Therefore, these initiatives should be extended to other healthcare settings [35]. Education on the correct use of antibiotics must reach not only prescribers but also parents and caregivers, who must understand their children's illness. Pressure from family members can be a factor that determines the use of antibiotics, and educational programs for the population are also necessary. At our hospital, we are working on the development of educational materials for parents and caregivers.
Table 1. Clinical characteristics of 1715 infants hospitalized with acute viral bronchiolitis.
Table 2. Clinical characteristics of infants admitted for bronchiolitis (N = 1715), treated or not with antibiotics.
Table 3. Factors associated with a request for chest X-rays in infants hospitalized for bronchiolitis.
Table 4. Clinical characteristics of infants admitted for bronchiolitis with fever (N = 866), treated or not with antibiotics.
Table 5. Clinical characteristics of infants admitted to the Pediatric Intensive Care Unit (PICU) for bronchiolitis (N = 68) who were treated with antibiotics and those who were not.
Table 6. Multivariate analysis of factors associated with antibiotic prescription in infants admitted for bronchiolitis.
|
v3-fos-license
|
2021-09-25T15:49:38.803Z
|
2021-08-26T00:00:00.000
|
243043408
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://reproductive-health-journal.biomedcentral.com/track/pdf/10.1186/s12978-022-01338-5",
"pdf_hash": "08da73b6d80f2e77ffc53c269338f0bb6ac3b1e6",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45959",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "baa4f36ba5101bdb767fca18fd762ae14932018e",
"year": 2022
}
|
pes2o/s2orc
|
Factors affecting knowledge regarding unmet need on fertile aged women in Indonesia: evaluation of 2012 and 2017 IDHS
Background The Family Planning (FP) Program is a national method of controlling population growth rates while improving maternal and child health. Indonesia, as one of the largest countries, has abysmally low contraceptive coverage. One of its main issues is unmet contraceptive needs. This study aims to determine the factors that influence women's unmet need of childbearing age (WCA) in Indonesia. Methods We performed an unpaired comparative analytic study with a cross-sectional method was conducted on secondary data obtained from 2012 to 2017 Indonesia Demographic and Health Survey (IDHS). The subjects in this study were all women of childbearing age (15–49 years). Subjects with incomplete data were excluded from the study. Unmet need was defined as WCA who did not use contraception but decline to have more children or wanted to delay their pregnancies. Chi-square analysis was performed on categorical data and Mann–Whitney U analysis on numerical data. Result A total of 45,607 WCA in the 2012 IDHS data and 29,627 WCA in the 2017 IDHS data were included in the study. In the 2012 IDHS data, factors influencing unmet needs were age (p = 0.023) and parity (p < 0.0001). In the 2017 IDHS data, factors influencing unmet needs were the residential area (p = 0.003), level of education (p = 0.008), level of spouse’s education (p < 0.0001), employment status (p = 0.03), possession of electricity (p = 0.001), and possession of television (p = 0.01). Conclusion Factors affecting unmet needs are age, parity, residential area, level of education, level of spouse’s education, employment status, possession of television, and possession of electricity. There were no recurring factors on 2012 and 2017 IDHS data.
Background
The global population is estimated to have reached 7 billion people and, with life expectancy increasing every year, is predicted to continue growing to 8 billion by 2023 [1,2].
Over the past 20 years, the use of contraceptives in developing countries has reduced maternal mortality simply by reducing unplanned pregnancies [3]. This directly decreases the number of illegal abortions and high-risk pregnancies. Nevertheless, these successes are far from complete. Previous research states that as many as 30% of maternal deaths could be further prevented by meeting unmet needs for contraception [3]. In addition, contraception may also improve perinatal outcomes by increasing the interval between pregnancies [3,4].
Methods and forms of pregnancy planning differ widely, ranging from traditional techniques, such as periodic abstinence, interrupted intercourse, and methods derived from myths and beliefs, to modern techniques whose efficacy has been studied. The intrauterine device (IUD), condoms, hormonal pills, implants, and birth control injections are some well-known pregnancy planning approaches. There are also modern procedures, such as vasectomy and tubectomy, which are less commonly known and even feared by the public [5].
Based on data in 2017, traditional contraceptive methods were used by 4.6% of women of childbearing age (WCA) in Indonesia, while modern methods were used by 41.4% of WCA. The most widely used modern methods are injection (20.9%), pill (8.7%), IUD (3.5%), and implant (3.4%). Other methods, such as the lactational amenorrhea method (LAM) and male sterilization, were only used by 0.1% of all WCA [6,7].
Unmet need is one of the persistent problems found in every country related to the provision of contraceptive services. Unmet need is defined as WCA who decline to have more children or delay pregnancies but do not use contraception [6,8].
The level of unmet need varies from country to country, with a higher percentage in developing countries such as Uganda, Haiti, and Ghana [9]. Based on 2014 data, it was found that the amount of unmet need in Indonesia ranged from 10 to 11%, more or less the same as other Asian countries [9].
Previous studies have shown that several interventions may be utilized to increase contraceptive use rates. However, unmet need remains one of the most prevalent problems to be addressed. Currently, there are only a few studies regarding unmet contraceptive needs in Indonesia. This study aims to determine the factors influencing unmet needs in Indonesia.
Methods
An analytic observational study with a cross-sectional method was done using re-analysis of 2012 and 2017 Indonesia Demographic and Health Survey (IDHS) raw data. The study population was WCA whose data were recorded on the IDHS. Subjects with incomplete records were excluded from this study. A total of 45,607 subjects were recorded on the 2012 IDHS, while 29,267 subjects were recorded on the 2017 IDHS.
Risk factors analyzed in this study were age, parity, history of sexually transmitted disease, residential area, level of education, level of spouse's education, employment status, socioeconomic status, possession of electricity, radio, television and cellphone, smoking status, and willingness to discuss puberty with their daughters. Unmet need was defined as WCA who did not use any form of modern contraception but decided to delay or prevent birth.
Baseline characteristics were then analyzed and compared. Bivariate analysis between subjects' characteristics and contraceptive knowledge was done. Multivariable analysis was done to determine factors associated with contraceptive knowledge and unmet need. Chi-square analysis was performed on categorical data and Mann-Whitney U analysis on numerical data. Ethical clearance was issued by the health research and ethical committee of the Faculty of Medicine, University of Indonesia.
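As a rough illustration of these bivariate tests, the sketch below applies a chi-square test to a categorical factor and a Mann-Whitney U test to a numerical factor using scipy; the data frame and column names are hypothetical placeholders, not the IDHS variables.

```python
# Illustrative bivariate testing against a binary unmet-need outcome;
# column names are hypothetical placeholders.
import pandas as pd
from scipy import stats

def bivariate_tests(df: pd.DataFrame, outcome: str = "unmet_need") -> dict:
    results = {}
    # Chi-square test for a categorical factor, e.g. residential area
    table = pd.crosstab(df["residential_area"], df[outcome])
    chi2, p_chi, _, _ = stats.chi2_contingency(table)
    results["residential_area"] = p_chi
    # Mann-Whitney U test for a numerical factor, e.g. age
    with_need = df.loc[df[outcome] == 1, "age"]
    without_need = df.loc[df[outcome] == 0, "age"]
    _, p_mwu = stats.mannwhitneyu(with_need, without_need, alternative="two-sided")
    results["age"] = p_mwu
    return results
```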
Results
Using the raw data of the Indonesia Demographic and Health Survey (IDHS), 45,607 respondents from the 2012 IDHS data and 29,267 respondents from the 2017 IDHS data were analyzed. Table 1 (2012 IDHS) and Table 2 (2017 IDHS) present the relationship between the characteristics of subjects and unmet needs.
Subsequently, a multivariable analysis was done between characteristics and unmet needs. The results can be seen in Table 3 (2012 IDHS) and Table 4 (2017 IDHS).
Discussion
In this study, it is clear that numerous factors affected unmet needs in WCA. The family planning program has succeeded in increasing contraceptive use by as much as 60% among couples worldwide [10]. It is estimated that there are 225 million women in the world whose contraceptive needs are still not being met each year. This situation is unfortunate, considering that contraception in the unmet-need population could further prevent 36 million abortions, 70,000 maternal deaths, and 52 million unwanted pregnancies [10].
Age is one of the factors that determine the use of contraception. Previous research focusing on women aged 15-24 has shown that contraceptive knowledge and use among younger women tend to be lower, especially when combined with lower education and rural residence [11,12]. Previous studies have also shown that this is related to greater concern among younger women, which would translate into lower contraceptive coverage in the younger age category [13].
Education, spouse's education, and possession of various facilities (electricity, radio, television, cellphone, and internet) are linked to the availability of information flows that reach the WCA. Previous research in Bangladesh and Ghana has shown that education is a very influential factor in the use of contraception because women with higher levels of education tend to have a better understanding of the benefits and risks of using contraception [14,15]. Better education would also lead to higher levels of contraceptive use [14,16,17].
Our analysis also showed that the factors associated with unmet needs were age, parity, residential area, level of education, level of spouse's education, employment status, possession of television, and possession of electricity. The number of unmet needs is directly related to the number of unplanned and unwanted pregnancies.
Previous research has shown a 16-fold chance of developing an unwanted pregnancy in women with unmet needs [17].
Age, parity, education, spouse's education, and access to information influence the incidence of unmet needs in Indonesia. Previous research conducted in Indonesia in 2015 also showed similar results: age and parity determine the incidence of unmet need in WCA [18]. Therefore, further education is needed, not only about family planning and contraceptive programs but also about the ideal number of children for couples [19]. One of the considerations affecting the decision on contraceptive use is the characteristics of the spouse. In Indonesia, one of the countries with strong patriarchal values, WCA face difficulties ranging from accessing school and sexual education to not having the right to determine the number of children deemed appropriate [10]. In this report, women with a lower level of spouse's education were more likely to be identified as having an unmet need. Previous research has shown that women in developing countries may face opposition from their spouses, who desire more offspring. They also face many obstacles and must struggle harder to gain access to contraception [10,20].
In conclusion, the factors affecting unmet needs range from intrinsic characteristics, such as age and parity, to spousal characteristics, such as education and socioeconomic status. There were no recurring risk factors; however, the number of risk factors increased in the later survey. Comprehensive education and contraceptive provision would be beneficial to improve the rate of contraceptive use in Indonesia.
Conclusions
Factors affecting unmet needs are age, parity, residential area, level of education, level of spouse's education, employment status, possession of television, and possession of electricity. No recurring factors were affecting unmet need on 2012 and 2017 IDHS data.
|
v3-fos-license
|
2016-05-12T22:15:10.714Z
|
2014-11-01T00:00:00.000
|
987544
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00431-014-2444-x.pdf",
"pdf_hash": "21fefaf1fccf0b3ef9dcc3eb6d9a345c2e1d4699",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45962",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "21fefaf1fccf0b3ef9dcc3eb6d9a345c2e1d4699",
"year": 2014
}
|
pes2o/s2orc
|
Examining a possible association between human papilloma virus (HPV) vaccination and migraine: results of a cohort study in the Netherlands
Since the introduction of the bivalent human papilloma virus (HPV) vaccine in the Netherlands, migraine has been reported as a notable event in the passive safety surveillance system. Research on the association between HPV vaccination and migraine is needed. Therefore, potential migraine cases in 2008–2010 were selected from a group of general practitioners and linked to the vaccination registry. Data were analysed in three ways: (i) incidences of migraine postvaccination (2009/2010) were compared to pre-vaccination incidences (2008); (ii) in a cohort, incidence rates of migraine in vaccinated and unvaccinated girls were compared and (iii) in a self-controlled case series analysis, the relative incidence of migraine in potentially high-risk periods was compared to non-high-risk periods. Incidence rates of migraine for 12- to 16-year-old girls and boys postvaccination were slightly higher than pre-vaccination incidence rates. Incidence rate ratios (IRRs) for vaccinated compared to unvaccinated girls were not statistically significantly higher. Furthermore, the RR for migraine in the high-risk period of 6 weeks following each dose versus non-high-risk period was 4.3 (95% confidence interval (CI) 0.69–26.6) for certain migraine. Conclusion: Using different methods, no statistically significant association between HPV vaccination and incident migraine was found. However, the number of cases was low; to definitively exclude the risk, an increased sample size is needed.
Introduction
Vaccination against human papilloma virus (HPV) with the bivalent HPV-16/18 vaccine (Cervarix®) was introduced in the Dutch National Immunisation Programme (NIP) in 2010 and has been provided annually to 12-year-old girls born in 1997 or later. Prior to the introduction of the vaccination into the NIP, a catch-up campaign was organised in 2009 for 13- to 16-year-old girls (i.e. born in 1993-1996) [8]. This campaign was extended in 2010 for girls who were not completely vaccinated. Vaccination coverage in 2009 and 2010 ranged per birth cohort between 49 and 56 %.
In the Netherlands, a national enhanced passive safety surveillance system has been in place since 1962. Reporting criteria are broad, and adverse events following immunisation can be reported by telephone or digitally. A new and notable event in the enhanced passive safety surveillance system for adverse events since the introduction of HPV vaccination was the number of reports of migraine. In 2009 and 2010, 52 reports of headache were received following almost 800,000 doses of HPV vaccine [30]. Eight cases were diagnosed as migraine; three of the patients were born in 1994, one in 1995, two in 1996 and two in 1997. Headaches are known to be frequently reported following administration of the HPV vaccine [9,11,19]. However, migraine headaches can be severe, and attacks can occur regularly. Because background incidence rates for migraine are lacking, it is unknown whether the number of cases is higher than what can be expected for girls in this age range.
For seven out of the eight reports of migraine in the passive safety surveillance system, an expert panel assessed that it was improbable that the cause was vaccination [25,26]. This assessment was based on the following points of consideration: diagnosis with severity and duration, time interval, biological plausibility, specificity of the symptoms, indications for other causes, proof for vaccine causation, and underlying illness or concomitant health problems. In addition, no plausible pathophysiological mechanism is known to explain how migraine may be caused by HPV vaccination. Vaccination may act as a trigger for migraine, i.e. a provocative factor for an attack, although, according to many experts, the value of triggers is overestimated [14].
Migraine is characterised by recurrent moderate to severe attacks of usually unilateral, pulsating headaches with nausea and/or vomiting. The headache worsens with physical activity, and there is often increased sensitivity to light (photophobia) and sounds (phonophobia). Headaches last from 4 to 72 h in patients 15 years and older [13] and from 1 to 72 h in children younger than 15 years old [3,15]. Migraine may be preceded by an aura, a transient focal neurological phenomenon that usually is visual [13]. The cause of migraine is unknown, and the value of triggers that cause an attack is difficult to prove. Migraine can occur in any patient from approximately 3 years of age. In childhood, migraine is more common in boys than in girls, but in puberty, the incidence of migraine in girls rises above that in boys [2]. The duration of migraine attacks in children differs from that in adults; attacks can be shorter in children and often last for only 1 h [3]. Therefore, doctors do not always identify migraine in children [12] and often misdiagnose it as tension headaches. In puberty, the character of the attacks changes, leading to a diagnosis of migraine.
The estimated lifetime prevalence of migraine for 20-to 65-year-old Dutch women is 33 % [18]. The prevalence of migraine in children 6-16 years old, with a case definition of headache attacks lasting 2-8 h associated with symptoms such as photophobia or nausea or preceded by an aura, is estimated at 8 % [28]. The incidence in general practices is approximately one new migraine patient per month [17].
In a previous study in the Dutch general population [32], we analysed the long-term occurrence of headache incidence in HPV-vaccinated and unvaccinated girls more than 1 year after the HPV vaccination campaign. At 14 years of age, no higher risk was found for HPV-vaccinated girls (n=751) compared with unvaccinated girls (n=368) in the 12 months before questionnaire administration for headache attacks (odds ratio (OR) 0.67, 95 % confidence interval (CI) 0.37-1.21), self-reported migraine or severe headaches (OR 0.96, 95 % CI 0.40-2.34), or migraine symptoms with a first attack on or after 12 years of age (i.e. headache attacks mostly lasting 1 h or more and at least two of the following characteristics: a pulsating pain, unilateral, severe enough to avoid going to school or other activities; OR 0.39, 95 % CI 0.15-1.00) [27]. In the present study, we aimed to investigate whether there is an association between HPV vaccination and incident migraine in the short term following administration of the bivalent HPV vaccine, using three different study designs. Even in small numbers, reports of adverse events can easily lead to negative attention towards HPV vaccination. Therefore, research on the association between HPV vaccination and migraine is necessary to maintain trust in the NIP.
Design
First, we applied a population-based study to estimate the incidence of migraine in the population pre- and post-introduction of the HPV vaccination programme. Furthermore, we conducted a cohort study and a self-controlled case series (SCCS) analysis in girls from 20 general practitioners (GPs) who agreed to provide data for linkage with the vaccination registry to assess the association between HPV vaccination and migraine.
Setting
The department of Medical Informatics of the Erasmus University Rotterdam developed an electronic database of medical records from Dutch GPs, the Integrated Primary Care Information (IPCI) database. The database is designed for post-marketing surveillance and pharmaco-epidemiological research [31]. IPCI is a longitudinal observational database, which contains digital medical patient records from GPs in the Netherlands. At present, the database contains demographic information, medical notes, prescriptions and indications for therapy, referrals, hospitalisations and laboratory results for about 1,500,000 patients from 600 GPs, which is almost 9 % of the total Dutch population. For coding, the Dutch standard for GPs for the classification of symptoms and diseases, the International Classification of Primary Care (ICPC) is used in addition to free text.
Study population
Data from all individuals between 1 January 2007 and 31 December 2010 in the IPCI database were used for case selection. Date of entry in the study was defined as 1 January 2008 or the date at which 1 year of valid data history in the database was accumulated if this was later. The date of the end of the study was 31 December 2010 or the end of registration of the patient, death or last IPCI data delivery if this was earlier. This entire study population was used to compare age-and gender-specific incidences of migraine before and after the introduction of the HPV vaccination programme. Additionally, a small cohort from this study population was used to compare incidences of migraine in vaccinated and unvaccinated girls and for the SCCS analysis. This cohort consisted of girls born in 1993-1997 from GPs who agreed to make data available for linkage with the vaccination registry (n=20 GPs).
Case selection
Patients with newly diagnosed migraine were retrospectively identified within the study population. Potential cases were selected if the patient record contained the ICPC code N89 (migraine) or 'migrai*' in the free text during the study period (n=24,183). All potential cases were manually validated by review of the anonymous medical record. Migraine was coded according to the diagnoses of the GP as definite, unclear/possible, menstruation related, medication-overuse headache or no migraine (n=4535). The date of onset of migraine symptoms was defined, or if this was unknown, the date of first entry of migraine symptoms in the patient record was used. Based on this, cases were classified as incident if the onset of first migraine symptoms or first entry of migraine symptoms was within the study period (n=5509, of which 448 were 12-16 years of age), or as prevalent if the patient was diagnosed with migraine before the study period or first symptoms of migraine started before the study period (n=14,139). Prevalent cases were excluded from further analysis. In addition, the occurrence of aura was coded as definite, unclear/possible, typical aura without headache or absent. Finally, two categories were classified, namely 'certain migraine' and 'uncertain migraine' [7]. Certain migraine refers to patients with definite migraine and menstruation-related migraine (n=321 12- to 16-year-olds). Uncertain migraine comprised patients with unclear/possible migraine and typical aura without headache (n=127 12- to 16-year-olds). The identification process of migraine cases is described in Fig. 1.
HPV vaccination exposure
In the Netherlands, all girls aged 13-16 years were invited for HPV catch-up vaccination in 2009, and 12-year-old girls were invited from 2010 onwards. HPV vaccination consists of three successive doses of the bivalent HPV-16/18 vaccine (Cervarix®). All three doses have equal content. Vaccination rounds were organised in March/April, April/May and September/October. However, some girls started later or skipped rounds. Girls not fully vaccinated received another opportunity to complete the series in the next year. All administered doses were registered in the national vaccine registry, Praeventis. Praeventis covers the entire Dutch population in the age groups eligible for vaccination and is updated continuously, as described by Van Lier et al. [29].
Girls born between 1993 and 1997 who were part of the IPCI study population and were patients of the 20 GPs who gave permission for data linkage were linked to the national vaccination registry to determine their HPV vaccination status and the number and dates of vaccine administration (n=2363, Fig. 1). Probabilistic techniques were used for the linkage, because we were not able to use a personal identifier. With this technique, (parts of) personal identifiers, i.e. postal code, date of birth, first name, surname and citizen service number, were taken into account to assess the probability that two records from both databases were from the same person. This linkage technique was validated in an earlier study by Kemmeren et al. [16]. A trusted third party was used to exclude identifying variables from the research database.
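To give a sense of how such probabilistic matching works, below is a heavily simplified Fellegi-Sunter-style sketch; the fields and the m/u agreement probabilities are illustrative assumptions, not the parameters used by the trusted third party in this study.

```python
# Simplified probabilistic record linkage (Fellegi-Sunter style).
# The m/u probabilities below are illustrative assumptions:
#   m = P(field agrees | records belong to the same person)
#   u = P(field agrees | records belong to different persons)
import math

M_U = {
    "postal_code": (0.95, 0.01),
    "birth_date":  (0.98, 0.003),
    "surname":     (0.90, 0.02),
}

def match_weight(rec_a: dict, rec_b: dict) -> float:
    """Sum log2 likelihood-ratio weights over the identifier fields."""
    weight = 0.0
    for field, (m, u) in M_U.items():
        if rec_a.get(field) == rec_b.get(field):
            weight += math.log2(m / u)              # agreement weight (positive)
        else:
            weight += math.log2((1 - m) / (1 - u))  # disagreement weight (negative)
    return weight  # pairs scoring above a chosen threshold are accepted as links
```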
Analysis
To identify a possible association between HPV vaccination and incident migraine, we analysed the data in three different ways. First, age- and gender-specific incidence rates of migraine were obtained for the period before (pre, 2008) and after (post, 2009/2010) vaccination. Incidence rate ratios (IRRs) with 95 % confidence intervals were calculated to compare the postvaccination period with the pre-vaccination period. Males were included as a control to indicate trends over time that were independent of HPV vaccination, as they were not eligible for HPV vaccination. Second, a cohort analysis was performed to compare monthly incidence rates of migraine between vaccinated and unvaccinated girls. "Unvaccinated" person time comprised the time from entry in the study until the date of the end of the study (see above), the date of occurrence of an event or the date of administration of the first dose, whichever came first. "Vaccinated" person time included the time between administration of the first dose and the date of the end of the study or the date of occurrence of the event, whichever came first (Fig. 2). If the date of the end of the study was earlier than 31 December 2010, the patient was censored. Time after the first dose was divided in months, providing incidence in consecutive months after vaccination. IRRs of these periods compared to the reference period were estimated. Confidence intervals were estimated by the mid-P exact method [20]. Third, SCCS analysis was used to compare the incidence of migraine in the high-risk and non-high-risk periods. Only HPV-vaccinated girls with incident migraine were included in the analysis, whereby each case acts as its own control. A primary high-risk period of 6 weeks following each dose was defined, according to other studies on neurological events following immunisation [6,10,24]. Furthermore, three different high-risk periods were defined, one longer period of 13 weeks and two shorter periods of 4 and 2 weeks, to conduct sensitivity analyses. Additionally, we adjusted for school holidays (between 1 July and 31 August and between 20 December and 7 January) because we observed a lower incidence of migraine in those periods. This potentially can cause bias because vaccination was given outside these periods. Furthermore, no seasonality was observed in the incidence of migraine.
Fig. 1 Process of identification of migraine cases
Fig. 2 Calculation of the person time used in the cohort analysis
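As an illustration of a mid-P exact interval for an incidence rate ratio, the sketch below numerically inverts a mid-P binomial test using the standard conditional-binomial argument (conditional on the total number of events, the exposed count is binomial). It is a generic illustration assuming both event counts are positive, not the authors' code.

```python
# Incidence rate ratio with a mid-P exact CI via the conditional-binomial
# approach; assumes a > 0 and b > 0. Illustrative sketch only.
from scipy import stats
from scipy.optimize import brentq

def midp_irr_ci(a, pt1, b, pt0, alpha=0.05):
    """a events in pt1 person-time (exposed) vs b events in pt0 (unexposed)."""
    n = a + b

    def midp_lower(irr):  # P(X > a) + 0.5*P(X = a) under the candidate IRR
        p = irr * pt1 / (irr * pt1 + pt0)
        return stats.binom.sf(a, n, p) + 0.5 * stats.binom.pmf(a, n, p) - alpha / 2

    def midp_upper(irr):  # P(X < a) + 0.5*P(X = a) under the candidate IRR
        p = irr * pt1 / (irr * pt1 + pt0)
        return stats.binom.cdf(a - 1, n, p) + 0.5 * stats.binom.pmf(a, n, p) - alpha / 2

    irr = (a / pt1) / (b / pt0)            # point estimate
    lo = brentq(midp_lower, 1e-6, irr)     # invert the lower mid-P test
    hi = brentq(midp_upper, irr, 1e6)      # invert the upper mid-P test
    return irr, lo, hi
```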
Pre- and postvaccination period incidences of migraine
The study period comprised a total of 73,245 person years for 12- to 16-year-old boys and girls, in which 321 certain migraine cases and 127 uncertain migraine cases occurred. In 12- to 16-year-old girls, the incidence rate of migraine was slightly higher in the postvaccination period (2009/2010) compared to the pre-vaccination period (2008; Table 1). The IRR for certain migraine was 1.14 (95 % CI 0.82-1.62), and the IRR for certain + uncertain migraine was 1.10 (95 % CI 0.83-1.49). Moreover, in 12- to 16-year-old males, the postvaccination incidence rate of migraine was also slightly higher, but not significantly different, than the pre-vaccination incidence rate (Table 1), with an IRR of 1.21 (95 % CI 0.77-1.97) for certain migraine and 1.20 (95 % CI 0.83-1.78) for certain and uncertain migraine.
Cohort analysis
In the cohort analysis, we included 2005 girls eligible for HPV vaccination in IPCI who could be linked to the vaccination registry. Of these girls, 1306 (65.1 %) received at least one dose of HPV vaccination, 1275 (63.3 %) received at least two doses, and 1228 (61.3 %) received all three doses within the study period. Adherence to the dosing schedule was good: 97 % received the second dose within 2 months after the first, and 84 % of all vaccinated girls received the third dose within 7 months after the first. In 2009/2010, 22 girls had incident migraine (14 certain and eight uncertain), of which 11 (50.0 %) were vaccinated for HPV and 11 (50.0 %) were unvaccinated.
The IRRs for migraine in monthly periods following the first dose compared to migraine in unvaccinated girls or migraine occurring in the period before vaccination ranged between 0.0 and 3.0 (month 6; Fig. 3). None of these IRRs were statistically significant, and increases in IRRs were not related to the months in which vaccination occurred. Figure 4 shows a lower cumulative proportion of migraine for HPVvaccinated girls than for unvaccinated girls; however, confidence intervals were overlapping. Increases in the cumulative proportion of migraine during or following vaccination were observed in both vaccinated and unvaccinated girls.
Self-controlled case series (SCCS) analysis
The association between HPV vaccination and migraine was tested using a hypothesis-testing SCCS study including 11 incident migraine cases in vaccinated girls. Of them, six were certain migraine cases and five were uncertain cases. Ten out of the 11 cases received three doses of HPV vaccine, and one certain case received only one dose. Of the six certain migraine cases, four occurred in 2009 and two occurred in 2010, and of the five uncertain cases, two occurred in 2009 and three occurred in 2010. The mean duration of observation in the study period was 1.8 years (range 0.7-2.0).
Three events occurred in the high-risk period of 6 weeks post HPV vaccination. The events were equally divided over the three doses, and they took place on day 8, 37 and 23 following the first, second and third dose, respectively. We found no statistically significant elevated risk of migraine in the four defined high-risk periods versus non-high-risk periods; however, the risk estimates ranged between 2.1 and 6.3 (Table 2).
Discussion
In this study, we examined the association between HPV vaccination and migraine. A slightly, but not significantly, higher incidence of migraine among 12- to 16-year-old girls was found in the postvaccination years compared to the pre-vaccination period. Furthermore, we found no significantly higher risk of migraine in high-risk weeks (defined primarily as 6 weeks, and secondarily as 13 weeks, 4 weeks and 2 weeks) after each HPV dose compared to non-high-risk weeks. However, risk point estimates ranged between 2.2 and 6.3 for certain migraine and between 2.1 and 3.4 for certain and uncertain migraine. Finally, no significantly higher risk of migraine was found in the months following vaccination compared to unvaccinated girls. The number of migraine reports following HPV vaccination in the Netherlands received through the passive safety surveillance system was, with eight cases following almost 800,000 doses, higher than that in the UK. In the UK, during the first 2 years following introduction of the bivalent HPV vaccine, 17 reports of migraine out of at least 4.5 million doses of HPV vaccine were received through the Yellow Card Scheme [4]. A possible explanation for the difference in reporting rate can be the difference in age groups targeted for HPV vaccination, including the catch-up campaign: in the Netherlands, girls 12 to 16 years of age were invited, whereas in the UK, girls aged 12 up to 18 years were targeted. Furthermore, possible differences in spontaneous reporting between countries may lead to different reporting rates. Finally, the adverse media attention for the safety of the HPV vaccine in the Netherlands could also have played a role. Nevertheless, we think it is important to follow up a possible signal to keep public faith in the NIP as high as possible. To date, little is known about the occurrence of migraine following HPV vaccination, despite the fact that headaches are one of the most reported adverse events [9,11,19]. Unfortunately, trials in which adverse events were studied did not specify the characteristics of the headaches. However, the frequency of headaches in the week before vaccination was found to be comparable to or higher than that in the week following HPV vaccination [25]. Moreover, it is notable that the incidence of migraine among girls rises during puberty. The addition of pubescent girls to the NIP as a new target group for HPV vaccination could have contributed to sudden reports of migraine through the passive safety surveillance system.
The slightly higher, although not statistically significant, incidence of migraine observed among 12- to 16-year-old girls in the postvaccination years compared to the pre-vaccination year, irrespective of vaccination status, was also found for boys in the same age group. This might indicate that there are other, unknown causes for an increase in migraine incidence during this period. No changes were made in headache guidelines for GPs before or during the study period. A possible explanation might be the earlier diagnosis of migraine over the years: the incidence of migraine decreased in 30- to 45-year-olds and increased in 10- to 29-year-olds (data not shown). However, an association is difficult to detect with this method when the effect of HPV vaccination on the incidence of migraine is small or present only for a short time period after vaccination, and because only approximately half of the girls were vaccinated against HPV.
The cohort and self-controlled case series analysis also showed no significantly increased risk of migraine in girls who were vaccinated, although the estimated RRs ranged from 2.1 to 6.3. Unfortunately, the number of cases included in the analysis was small, and therefore, we had little power to detect a potential association. The results may depend on the definition of the risk periods, as we did not have an established hazard function or biological mechanism for a potential association. Consistent with studies on other neurological events (e.g. Guillain-Barré syndrome (GBS)) following immunisation, we defined a primary high-risk period of 6 weeks following each dose [6,10,24]. In studies of more insidious diseases, such as autoimmune and neurological diseases, after quadrivalent HPV vaccination, longer risk windows of 180 days were used [1,5]. We conducted a sensitivity analysis to explore the effect of a potential misclassification: we defined three additional high-risk periods, one longer period of 13 weeks and two shorter periods of 4 and 2 weeks. More or less comparable risk estimates were found by using these longer and shorter high-risk periods, which indicates that the 6-week period may not have led to underestimation.
In addition to statistical analyses that may establish an association, a plausible pathophysiological mechanism is of importance prior to determining potential causality. Although we did not find any plausible pathophysiological mechanism that explains how migraine can be caused by HPV vaccination, vaccination may possibly act as a trigger for migraine. However, according to many experts, the value of migraine triggers is overestimated [14], though it is often highly valued by patients. In many patients, there is a genetic predisposition for migraine, although this is difficult to prove in a complex polygenic disease such as migraine. It is not known why some patients with a genetic predisposition to migraine develop symptoms and some do not [22,23]. However, the SCCS method adjusts for such time-independent factors. A limitation of the study is misclassification of the outcome: underdiagnosis of migraine may have occurred because we used an observational database from GPs. Patients who did not go to a GP do not appear in the database. Furthermore, we defined the date of onset of migraine symptoms; however, if this was unknown, the date of first entry of migraine symptoms in the patient record was used. This may have led to some misclassification, as patients are only likely to consult their GP if multiple headache attacks have occurred. This misclassification could have led to overdiagnosis if the presumed date was within the high-risk period but the actual date of symptom onset was before the vaccination. On the other hand, misclassification of the onset date could have led to underdiagnosis if the presumed date was classified after the high-risk period but the actual date of onset of symptoms lay within the high-risk period.
This study could also be subject to selection bias. First, a slightly higher percentage of girls in this study was fully vaccinated (61 %) compared to the national vaccination coverage (49-56 %). Vaccinated persons are more likely to be indigenous Dutch and to live in areas with higher socioeconomic status [21]. Second, only data from a small number of GPs (n=20) were available for linkage to the vaccination registry. Because availability was based on permission of the GP, rather than of individual patients, for data linkage, the introduction of selection bias due to this was unlikely.
Finally, residual confounding may have occurred in the SCCS analysis and in the cohort analysis because there was no information available on other risk factors for migraine, such as oestrogen level [13]; therefore, we were unable to adjust for these other risk factors.
In conclusion, using different methods of analysis, no statistically significant association between HPV vaccination and incident migraine was found. Because the number of cases was limited, these results should be interpreted with caution. Larger studies are warranted to investigate this topic.
A Systematic Review of Cross-Cultural Adaptation and Psychometric Properties of Oral Health Literacy Tools
The aims of this systematic review were to critically appraise the quality of the cross-cultural adaptation and the psychometric properties of the translated versions of oral health literacy assessment tools. CINAHL (EBSCO), Medline (EBSCO), EMBASE (Ovid), and ProQuest Dissertation and Thesis were searched systematically. Studies focusing on cross-cultural adaptation and psychometric properties of oral health literacy tools were included. The methodological quality of included studies was assessed according to the COSMIN Risk of Bias checklist. Sixteen oral health literacy instruments in 11 different languages were included in this systematic review. However, only seven instruments met the criteria for an accurate cross-cultural adaptation process, while the remaining tools failed to meet at least one criterion for suitable quality of cross-cultural adaptation process. None of the studies evaluated all the aspects of psychometric properties. Most of the studies reported internal consistency, reliability, structural validity, and construct validity. Despite adequate ratings for some reported psychometric properties, the methodological quality of studies on translated versions of oral health literacy tools was mostly doubtful to inadequate. Researchers and clinicians should follow standard guidelines for cross-cultural adaptation and assess all aspects of psychometric properties for using oral health literacy tools in cross-cultural settings.
Introduction
Oral diseases pose a significant health burden for many countries and remain a major global public health challenge [1]. The World Health Organization (WHO) has reported an estimated 3.5 billion people suffering from oral diseases worldwide [2]. According to the Global Burden of Disease Study 2017, oral diseases are the most common health conditions among both males and females [3]. It is estimated that between 1990 and 2015, the number of people with untreated oral diseases increased from 2.5 to 3.5 billion, causing a 64% increase in disability-adjusted life years [4]. Oral diseases affect people throughout the life course, causing pain, discomfort, sepsis, and sleep loss [2], and may lead to social disruption and reduced employment potential [5]. Oral diseases disproportionately affect marginalized communities [6] and are associated with social determinants of health such as socioeconomic status, education, income, language, and health literacy [7][8][9][10].
Health literacy started as a concept associated with the ability of an individual to obtain and process information to support health actions [11]. The theoretical understandings of health literacy and the methods to measure it have evolved continuously since its inception. This review evaluates OHL tools according to language to reduce the inconsistencies resulting from cultural differences. Therefore, the aims of this systematic review were to synthesize evidence on the quality of the translation and cross-cultural adaptation process of instruments used to assess OHL and to critically appraise the psychometric properties of the translated versions of OHL tools in relation to validity, reliability, and responsiveness.
Materials and Methods
This review was reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) guidelines [45] (Table S1). The protocol of this systematic review has been registered and published with the PROSPERO International Prospective Register of Systematic Reviews (CRD42020188812) [46].
Eligibility Criteria
Studies were included if one of their main aims was to translate and culturally adapt an English version of an OHL tool and to evaluate the psychometric properties of the translated version(s). Additionally, only studies applying a quantitative study design were included. Both self-reported and objectively measured tools were eligible for this systematic review.
Information Source
The following electronic databases were searched using the specified search strategy: CINAHL (EBSCO), Medline (EBSCO), and EMBASE (Ovid). ProQuest Dissertations and Theses Global was also searched for unpublished studies. Additionally, a manual search of the reference lists of all identified studies and previously published systematic reviews was performed. No restriction was placed on publication date (i.e., from the time of inception to present), type, or region. The search was initially conducted on 9 March 2020 and updated on 25 July 2021.
Search Strategy
The Population, Intervention/Exposure, Comparator, Outcome, Study design (PICOS) [47] criteria were used to devise the key concepts and related search terms. A combination of specific Medical Subject Headings (MeSH) terms and keywords related to oral or dental health literacy, tools, psychometric properties, and cross-cultural adaptation was drafted in collaboration with a professional health sciences librarian. Boolean operators and truncation were used to narrow and broaden the search scope. The search strategy was pre-tested in the Medline (EBSCO) database and subsequently adapted to the syntax and subject headings of the other databases (Table S2).
Study Selection
All the studies identified from the electronic databases, theses, and manual searches were exported to the reference manager software EndNote X9 [48] for deduplication, screening, and selection. Two reviewers (SP and AA) independently screened the articles against the eligibility criteria, with manuscripts screened by title and abstract relevance. Both reviewers underwent formal training to ensure a consistent study selection process. Studies considered to potentially meet the criteria for this review were read in full text by the two reviewers (SP and AA). Study authors were contacted for additional information in case of any uncertainty about eligibility. A total of three attempts were made to contact the study authors, and if no response was received, studies were screened for eligibility based on the information available. Disagreements were resolved through consensus in discussion with a third co-author (JP/NC/AD). The reasons for excluding studies are reported in Table S3.
Data Extraction Process
Two independent reviewers (SP and AA) extracted data on the characteristics of the included OHL tools and their measurement properties (reliability, validity, and responsiveness) based on the COSMIN recommendations [39]. Data regarding translation and cross-cultural adaptation procedures of each included study were also extracted. For each instrument, data extracted included information on the publication year, country of origin, authors, type of tool, purpose, expertise of developers, development method, mode of administration, scoring categories, language, cross-cultural adaptation process, and psychometric properties. For missing data and/or uncertainties, study authors were contacted for further information with a maximum of three attempts. Where no response was received, data extraction was completed using the information available.
Assessment of the Methodological Quality
The methodological quality of each included study was evaluated according to two checklists:
1. The Guidelines for the Process of Cross-Cultural Adaptation of Self-Report Measures [31], which state that a cross-cultural adaptation must include an initial translation, a synthesis of translations, back-translation, review by an expert committee, and pre-testing of the instrument. To assess the quality of the cross-cultural adaptation process, the tools were rated as positive (+), negative (-), no information available (0), or unclear, according to the criteria adapted from Costa and colleagues [44]. The steps of the cross-cultural adaptation process and the scoring system are described in Table S4.
2. The COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) checklist [38], which was used for evaluating the psychometric properties of the translated versions of the OHL tools. This standardized checklist consists of nine boxes on measurement properties, each comprising 3 to 38 items. Each checklist item is rated on a 4-point scale (inadequate, doubtful, adequate, or very good). The overall methodological quality score of a study is determined by taking the lowest rating of any item in a box (i.e., the worst-score-counts method).
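To make the worst-score-counts aggregation concrete, the following is a minimal Python sketch; it is our own illustration rather than COSMIN code, and the function and variable names are hypothetical, but the rating labels follow the 4-point scale above.

```python
# Worst-score-counts: the overall rating of a COSMIN box is the lowest
# rating given to any of the items in that box.
RATING_ORDER = ["inadequate", "doubtful", "adequate", "very good"]

def box_rating(item_ratings):
    """Return the overall methodological quality rating for one box."""
    return min(item_ratings, key=RATING_ORDER.index)

# Example: one doubtful item drags the whole box down to "doubtful".
print(box_rating(["very good", "adequate", "doubtful"]))  # -> doubtful
```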
Two reviewers (SP and AA) independently assessed the methodological quality of the included studies, and the disagreements were discussed with each other to reach a consensus.
Assessment of Psychometric Properties
Evidence for the measurement properties of the included tools was extracted and assessed against the updated criteria for suitable psychometric properties [37] (Table S5). These criteria evaluate the following psychometric properties: structural validity, internal consistency, reliability, measurement error, hypothesis testing for construct validity, cross-cultural validity or measurement invariance, criterion validity, and responsiveness.
Data Synthesis
After data extraction, a narrative was created to provide a descriptive synthesis of the included studies in two steps. The first task was to assess the cross-cultural adaptation of all identified OHL tools. The second step was to determine the psychometric properties (reliability, validity, and responsiveness) of each tool. The quality of the cross-cultural adaptation process was analyzed on the basis of the five basic steps: initial translation, synthesis, backward translation, expert committee review, and pretesting (Table S4).
Measurement Properties
The measurement properties are divided into three domains: reliability, validity, and responsiveness.
Reliability
Reliability is defined as the extent to which the results obtained are the same for repeated measurements under several conditions [39]. Reliability contains the following measurement properties:
1. Internal consistency: The degree of inter-relatedness among the items, expressed by Cronbach's alpha value [39].
2. Reliability: The proportion of the total variance in the measurements that is due to true differences among patients, expressed by the intraclass correlation coefficient (ICC) or weighted kappa [37].
3. Measurement error: The systematic and random error of a patient's score that is not attributed to true change in the construct to be measured [49]. Measurement error is calculated as the smallest detectable change or the limits of agreement, and its adequacy is determined by relating these to the minimal important change [50]. (A computational sketch of these reliability statistics follows this list.)
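The reliability statistics above reduce to short formulas; the Python sketch below, an illustration under our own assumptions rather than code from any included study, computes Cronbach's alpha from an item-score matrix and the smallest detectable change, SDC = 1.96 · √2 · SEM, with SEM = SD · √(1 − ICC).

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_subjects, k_items) matrix of item scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def smallest_detectable_change(sd_scores: float, icc: float) -> float:
    """SDC = 1.96 * sqrt(2) * SEM, where SEM = SD * sqrt(1 - ICC)."""
    sem = sd_scores * np.sqrt(1.0 - icc)
    return 1.96 * np.sqrt(2.0) * sem

# Toy example: 50 subjects answering a 30-item word-recognition list.
rng = np.random.default_rng(0)
scores = rng.integers(0, 2, size=(50, 30)).astype(float)
print(round(cronbach_alpha(scores), 2))
print(round(smallest_detectable_change(sd_scores=4.0, icc=0.90), 2))
```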
Validity
Validity is the extent to which an instrument measures the construct(s) it intends to measure. It contains the following measurement properties:
1. Content validity: The degree to which the content of an instrument is an adequate reflection of the construct to be measured [39]. It is assessed by asking patients and professionals about the relevance, comprehensiveness, and comprehensibility of the items, response options, and instructions [51]. Content validity is only relevant for the development of original instruments [31] and is therefore outside the scope of this review.
2. Criterion validity: The extent to which the scores of an instrument are an adequate reflection of a gold standard [39]. Since OHL tools do not have a gold standard for item selection, criterion validity was not considered in this review.
3. Construct validity: The degree to which the scores of an instrument are consistent with hypotheses (for instance, with regard to internal relationships, relationships to scores of other instruments, or differences between relevant groups), based on the assumption that the instrument validly measures the construct to be measured [39]. It has three important aspects:
a. Structural validity: The degree to which the scores of an instrument are an adequate reflection of the dimensionality of the construct to be measured [39]. To assess the unidimensionality of a subscale, factor analysis should be performed on each scale separately [38].
b. Hypothesis testing: The degree to which a particular measure relates to other measures in a way one would expect if it validly measured the supposed construct, i.e., in accordance with predefined hypotheses about the correlations or differences between the measures [39]. The following hypotheses were set for testing construct validity:
• Correlations with scores of instruments measuring a similar construct, or with another OHL tool included in the pre-specified list, will be high or moderate to high.
• Correlations with scores of instruments measuring related but not identical constructs (for example, health-related quality of life measures) will be moderate to high or moderate.
• A weak to moderate correlation will be observed between the scores of the instruments included here and two different subgroups of patients.
For hypothesis testing, predefined correlation thresholds [52] were used, with overlap between the categories to allow more flexibility in the hypotheses (a sketch of this threshold-based classification follows this list).
c. Cross-cultural validity: The degree to which the performance of the items on a translated or culturally adapted instrument is an adequate reflection of the performance of the items of the original version of the instrument [39]. This property is assessed by multi-group factor analysis or differential item functioning [53], using data from a population that completed the questionnaire in the original language as well as data from a population that completed it in the translated language.
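The exact correlation cut-offs from [52] are not reproduced in this text, so the thresholds in the sketch below are illustrative assumptions only; it shows how such a predefined-threshold classification of validity correlations might be implemented.

```python
def classify_correlation(r: float) -> str:
    """Label an absolute correlation for hypothesis testing.

    The cut-offs below are hypothetical placeholders, NOT the
    predefined thresholds of reference [52].
    """
    r = abs(r)
    if r >= 0.60:
        return "high"
    if r >= 0.30:
        return "moderate"
    return "weak"

# Example: AREALD-30 vs AREALD-99 reported rs = 0.95, which this
# scheme would label "high", consistent with the first hypothesis.
print(classify_correlation(0.95))
```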
Responsiveness
Responsiveness is the ability of an instrument to detect change over time in the construct to be measured [39]. The responsiveness of an instrument is expressed as the area under the receiver operating characteristic (ROC) curve [37].
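Expressed this way, responsiveness can be estimated from change scores and an external anchor of true improvement; the sketch below is a generic illustration with made-up data (scikit-learn is assumed as one common choice, not the method of any included study).

```python
from sklearn.metrics import roc_auc_score

# 1 = an external anchor indicates the patient's OHL truly improved.
improved = [1, 1, 0, 1, 0, 0, 1, 0]
# Follow-up minus baseline instrument score for the same patients.
change_score = [5, 3, 1, 4, 0, -1, 6, 2]

# The AUC approximates the probability that a randomly chosen improved
# patient has a larger change score than a non-improved one.
print(roc_auc_score(improved, change_score))
```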
Results of the Search
Initial searches retrieved a total of 927 articles from the electronic databases and the manual search. In addition, the search of theses and dissertations yielded a further 100 records. After the removal of 218 duplicates, 809 articles were identified for further screening. Of these, 786 articles were excluded because they did not measure OHL or did not report on the translation and cross-cultural adaptation of OHL tools. One further study was removed due to language limitations, and one due to accessibility issues. A total of 22 full-text studies were assessed by two authors (SP and AA), of which six were excluded based on the eligibility criteria. The reasons for exclusion are provided in Table S3. Cohen's kappa for agreement between the reviewers was 0.90, and any disagreement was resolved through consensus and discussion. Finally, this review included a total of 16 studies on translated versions of OHL tools, evaluating instruments in 11 different languages. The identification, screening, and eligibility process is outlined in the PRISMA flow diagram (Figure 1).
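For context, an inter-reviewer agreement statistic such as the kappa of 0.90 reported above can be computed directly from the two reviewers' include/exclude decisions; the sketch below uses toy decisions, not the review's actual screening data.

```python
from sklearn.metrics import cohen_kappa_score

# 1 = include, 0 = exclude; one entry per screened record (toy data).
reviewer_sp = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]
reviewer_aa = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]

# Cohen's kappa corrects raw percent agreement for chance agreement.
print(cohen_kappa_score(reviewer_sp, reviewer_aa))
```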
Characteristics of Oral Health Assessment Instruments
The general overview of the OHL instruments included in this review is illustrated in Table 1. All 16 instruments [54][55][56][57][58][59][60][61][62][63][64][65][66][67][68][69] were published between 2012 and 2020, indicating a recent increase in concern regarding OHL among non-English-speaking populations. Over half of the instruments were word recognition tools [54][55][56][57][58][59][60]68,69], three were functional health literacy tools with reading comprehension and numeracy sections [65][66][67], one was a word recognition tool with added comprehension [63], one encompassed comprehension, numeracy, listening, and decision-making domains [64], one assessed access, support, understanding, use, economic barriers, receptivity, and communication [61], and one assessed oral health knowledge, numeracy, and comprehension [62]. The number of items in each tool ranged from 20 to 99. Table S6 outlines the general characteristics of the translated versions of the OHL tools. Most of the tools were developed by a panel of specialists with expertise in dentistry, public health, and translation, in consultation with language experts. However, two tools [58,67] did not report on the expertise of the developers. All 16 tools were developed by translating existing OHL tools, with minimal modifications to suit the needs of the identified population. Most tools were administered through face-to-face interviews conducted by the study investigators. The scoring method varied among tools; however, in all tools higher scores indicated a higher level of OHL. The results for the cross-cultural adaptation (Table S7), psychometric properties (Table S8), and methodological quality assessment (Table S9) of the different OHL instruments, identified by language, are presented below.
Arabic
REALD-30 is the only OHL tool that has been translated into the Arabic language [54]. REALD-30 is a word recognition test originally designed to assess the ability of an individual to read and pronounce 30 common dental words arranged in order of increasing difficulty [26]. AREALD-30 rated positive for all the steps required for an accurate translation and cross-cultural adaptation. The AREALD-30 scored a sufficient (+) rating of the measurement properties for internal consistency, measured by Cronbach's α = 0.89, and test-retest reliability, measured by an intraclass correlation coefficient (ICC) of 0.99 (range 0.97-0.99). However, the evidence for reliability was limited due to the inadequate sample size used to perform the analysis. AREALD-30 was tested for the original one-factor structure using confirmatory factor analysis, and the results showed the presence of two factors, in agreement with the original REALD-30. The unidimensionality of AREALD-30 was also evaluated by Rasch analysis, and the amount of variance explained by the Rasch measures was 50.9%. A significant and positive correlation was found between AREALD-30 and AREALD-99 (Spearman's rs = 0.95, p < 0.01), demonstrating very good convergent validity. However, the correlations of AREALD-30 with the Oral Health Impact Profile (OHIP-14), self-perceived oral health status, and dental visiting habits for predictive validity were not significant. Furthermore, the discriminant validity of AREALD-30, explored across categories of the educational levels of the subjects, was significant (p = 0.02).
Chinese
REALD-30 is the only tool translated into the traditional Chinese language [57]. HKREALD-30 rated positive for the initial translation, synthesis, expert committee review, and pretesting steps. The only drawback was the lack of sufficient information about the back-translation procedure. The test-retest reliability and internal consistency of HKREALD-30 were sufficient, as shown by an ICC value of 0.78 (range 0.61-0.80, CI = 0.53-0.91) and a Cronbach's α value of 0.84. However, the methodological quality for reliability was inadequate, as only 10% of the participants were re-interviewed after one week. Rasch analysis was used to determine the validity of the response scale and to identify redundancy using infit Z statistics. The infit ZSTD (−1.55 to 0.46), outfit ZSTD (−1.59 to 0.99), infit MNSQ (0.84 to 1.07), and outfit MNSQ (0.71 to 1.30) for the items were within acceptable ranges. HKREALD-30 had a highly positive and significant correlation with HKREALD-99 (rs = 0.86, p < 0.01) and TOFHLiD (Table S8), reflecting adequate convergent validity. Further, there was a significant correlation (p < 0.01) between reading habits and HKREALD-30 (rs = 0.38 for print materials and 0.27 for digital materials). However, the correlations for other subgroups, such as educational level and pattern of dental visits, assessed for concurrent validity, were not statistically significant.
Hindi
OHL-AQ is the only OHL tool that has been translated into the Hindi language [64]. OHL-AQ is a 17-item test of functional OHL originally designed to assess four conceptual domains: reading, numeracy, listening, and decision making [70]. The forward and backward translations were each performed by only one translator. The discrepancies in translation were sorted out by an expert panel, and a pretesting phase was carried out. Therefore, OHL-AQ-H did not meet the quality criteria for the initial translation, synthesis, and back-translation steps required for the process of cross-cultural adaptation.
The internal consistency determined by Cronbach's α value was acceptable (0.7), and the assessment of test-retest reliability among one-half of the participants after two weeks demonstrated significant results with almost perfect agreement (ICC = 0.93, 95% CI = 0.88-0.96), indicating adequate reliability. Predictive validity and concurrent validity were reported to be significant by comparing OHL-AQ-H scores with oral hygiene status (p = 0.005) and dentition status (p = 0.001), and with self-reported oral health (p = 0.01), respectively. However, correlation coefficients were not calculated, and no comparison with other outcome measurement instruments was performed. Therefore, the methodological quality of the OHL-AQ-H was rated inadequate for construct validity.
Malay
OHLI is the only OHL tool that has been translated into the Malay language [65]. OHLI is a test of functional oral health literacy containing 38 items assessing reading comprehension and 19 items assessing numeracy skills [71]. OHLI-M rated positive for all the steps required for an accurate translation and cross-cultural adaptation process. It also scored a sufficient (+) rating for measurement properties such as internal consistency and test-retest reliability, measured by an ICC of 0.86 (95% CI = 0.72-0.93). Reliability was assessed after two weeks, which makes the methodology doubtful. The Spearman correlation between the OHLI-M and the Short Test of Functional Health Literacy in Adults was positive (rs = 0.37, p < 0.001), supporting adequate convergent validity. However, a lack of concurrent validity was indicated by the correlations between OHLI-M scores and the decayed, missing, and filled teeth (DMFT) index (Pearson's r = −0.11, p = 0.33) and Community Periodontal Index (CPI) scores (r = −0.04, p = 0.70). The differences in OHLI-M scores among categories of education (p < 0.001) and time since the last dental visit (p < 0.02) were significant.
Persian
REALD-99 is the only OHL tool that has been translated into the Persian language [58]. REALD-99 is a word recognition test originally made up of 99 common dental words with varying levels of difficulty [27]. IREALD-99 rated positive for the quality criteria for initial translation, synthesis, back-translation, and pre-test. However, no information about the existence of an expert committee was provided. The project manager compared the translation versions and reconciled discrepancies before pretesting.
Internal consistency was higher than 0.70, and the test-retest reliability on administration after two weeks was also sufficient (Table S8). A principal component analysis was performed to assess unidimensionality and showed a strong first factor. The variance explained by the Rasch measures was 47.54%. The methodological quality for structural validity was considered inadequate due to the sample size. The convergent validity of IREALD-99 was supported by its positive correlations with TOFHLiD scores and self-perceived dental health status, as outlined in Table S8. For concurrent validity, IREALD-99 was compared across education and income categories. There were significant differences (p < 0.01) in IREALD-99 scores across educational categories, but not across income levels (p = 0.09).
Portuguese
There are five OHL instruments that have been translated into the Brazilian-Portuguese language: REALD-30 [55], REALMD-20 [56], OHLA-S [63], HKOHLAT-P [62], and HeLD [61]. REALMD-20 is a 20-item tool designed to screen patients by their ability to read medical and dental words [72]. OHLA-S is a pronunciation and comprehension test originally containing 30 items related to oral conditions, with an added comprehension test for use in Spanish speakers [73]. HKOHLAT-P is a tool originally developed for use in Hong Kong [74]. It evaluates oral health knowledge, reading comprehension, and numeracy and is mainly focused on pediatric dentistry [62]. HeLD is a tool originally containing 29 items for assessing multiple dimensions of OHL, encompassing communication, access, receptivity, understanding, use, support, and economic barriers [75].
Brazilian-HeLD: Brazilian-HeLD rated positive for all the steps required for an accurate translation and cross-cultural adaptation process, except for the synthesis after the initial translation. The HeLD scale comprises HeLD-29 and HeLD-14. All seven factors in both forms of HeLD had adequate internal consistency (Table S8). However, the evidence for reliability was unknown, as the ICC value was not reported. Confirmatory factor analysis was performed to test the fit of the data to the factor structure of both HeLD forms. However, the goodness of fit of the confirmatory factor analysis models demonstrated satisfactory results only for the HeLD-14 subsamples (CFI = 0.97-0.98, RMSEA = 0.05, and SRMR = 0.03). Convergent validity was estimated by calculating the average variance extracted and composite reliability. However, no comparison with other outcome measurement instruments was performed, and correlation coefficients were not reported.
BOHLAT-P: BOHLAT-P rated positive for all the steps required for an accurate translation and cross-cultural adaptation process. The reliability of the BOHLAT-P, evaluated through the assessment of internal consistency and test-retest reliability (Table S8), was well above the recommended levels. Exploratory factor analysis was performed to evaluate the dimensionality of BOHLAT-P, after which confirmatory factor analysis was performed to confirm unidimensionality. The goodness-of-fit indices were χ² = 1506.530, df = 1124, CFI = 0.934, TLI = 0.931, and RMSEA = 0.041, indicating an acceptable to excellent model fit. The only limitation in determining structural validity was the inadequate sample size. The convergent validity of BOHLAT-P, measured by Spearman's correlation test, showed high, positive, statistically significant correlations with BREALD-30 scores, the number of years of schooling, and the number of hours spent reading, as outlined in Table S8. BOHLAT-P scores had a negative correlation with Early Childhood Oral Health Impact Scale scores and the number of cavitated teeth. After controlling for confounding variables, the associations of BOHLAT-P scores with caries or the number of teeth with cavitated dental caries (tooth decay) were not significant.
BREALD-30: BREALD-30 rated positive for all the steps required for an accurate translation and cross-cultural adaptation process. It demonstrated good internal consistency, scored excellent for test-retest reliability, and had moderate to nearly perfect kappa coefficients ranging from 0.42 to 1.00. However, an inadequate sample size was a limitation. Exploratory factor analysis was performed to assess the unidimensionality of the instrument, and the result demonstrated the predominance of one factor. As in the original study, the hypothesis that OHL measured by the BREALD-30 is unidimensional was not confirmed, and at least seven factors were necessary to explain 50% of the total variance. However, no confirmatory factor analysis was performed, so the structural validity was rated indeterminate. Convergent validity, assessed by correlating BREALD-30 scores with the level of general literacy measured by the National Functional Literacy Index and with educational attainment, was statistically significant (Table S8). The test for discriminant validity showed statistically significant differences according to occupation (p = 0.004), history of dental visits (p = 0.017), and monthly household income (p < 0.001). No significant correlation was found between the BREALD-30 and OHIP-14 scores (rs = −0.08; p = 0.198). The BREALD-30 score was significantly associated with the respondent's assessment of his/her child's oral health after adjusting for other covariates (p = 0.024).
BREALMD-20: BREALMD-20 rated positive for the initial translation, synthesis, back-translation, and pretesting steps required for an accurate translation and cross-cultural adaptation process. However, there was no information regarding the existence of an expert committee to verify the translated versions. The internal consistency was above the recommended level. Although the test-retest reliability assessed after one month was also considered sufficient (ICC = 0.73, 95% CI = 0.66-0.79), the methodological quality was lowered due to the inadequate sample size. The health literacy measured by REALMD-20 was found to be multidimensional; the first four factors accounted for 52.1% of the total variance. No confirmatory factor analysis was performed, so the rating for structural validity was indeterminate. A positive and significant correlation was found between the REALMD-20 and both the BREALD-30 (rs = 0.73, p < 0.001) and the Brazilian National Functional Literacy Index (rs = 0.60, p < 0.001), reflecting very good convergent validity. When compared for discriminant validity across categories, BREALMD-20 scores were higher among health professionals, more educated people, individuals who reported good/excellent oral health conditions, and those who sought preventive dental services.
OHLA-B: OHLA-B rated positive for all the steps required for an accurate translation and cross-cultural adaptation process. A psychometric analysis of OHLA-B was not performed.
Romanian
REALD-30 is the only tool that has been translated into the Romanian language [60]. RREALD-30 rated positive for the forward translation, synthesis, and pretesting steps. However, back-translation into English was performed by a single translator, and although an expert committee agreed on the first Romanian version, the design was doubtful. The internal consistency and the test-retest reliability were adequate. However, the evidence for reliability was limited due to the small sample size used for the analysis.
RREALD-30 was tested for the original one-factor structure using principal component analysis, and the results showed that RREALD-30 had a one-factor solution. The unidimensionality evaluated by Rasch model analysis demonstrated a discriminating ability for each RREALD-30 word. The sum score of RREALD-30 applied in a structural equation model showed an adequate fit (Table S8). RREALD-30 demonstrated good concurrent and predictive validity. A significant correlation (p < 0.001) of RREALD-30 scores was found with sex, education, and dental visits. The RREALD-30 scores also had a statistically significant effect on OHIP-14 (p = 0.004). An important drawback was the lack of comparison with other validated health literacy tools, which is required to assess convergent validity.
Russian
OHLI is the only tool that has been translated into the Russian language [67]. In contrast to all the other translated versions, it failed to meet the criteria for a positive rating for any of the steps involved in the translation process. R-OHLI was translated from English to Russian by only one translator, followed by a back-translation made by an independent translator. Two translators then evaluated the equivalence between the original and back-translated versions. The translated versions did not go through review by an expert committee, which is a critical step in finalizing the prefinal and final versions. Moreover, the prefinal version was not tested in a sample of the population.
R-OHLI scored a sufficient (+) rating for measurement properties such as internal consistency and test-retest reliability (Table S8). Despite this, the evidence for reliability was low because of the very small sample size used for the analysis. For construct validity, R-OHLI was compared with an oral health knowledge test, which showed a significant correlation (rs = 0.363, p < 0.001). A significant drawback was the lack of comparison with similar outcome measurement instruments and the use of a non-validated tool for comparison. Therefore, the methodological quality for construct validity was inadequate.
Spanish
OHLI [66] and REALD-30 [59] have been translated into the Spanish language. OHLI-Cl: OHLI-Cl rated positive for all the steps required for an accurate translation and cross-cultural adaptation. For OHLI-Cl, the internal consistency was high and the reliability was sufficient, as outlined in Table S8. However, the methodological quality for reliability was doubtful due to the inadequate sample size.
The Pearson and Spearman correlations of OHLI-Cl with the Oral Health Knowledge Test and with the Short Assessment of Health Literacy for Spanish-speaking Adults (SAHLSA), used to determine convergent validity, were statistically significant. For predictive validity, the correlations of the OHLI-Cl with the DMFT, CPI, Oral Hygiene Index Simplified (OHIS), and the 49-item Oral Health Impact Profile (OHIP-49) were determined, and these were also statistically significant (p < 0.01). A significant disadvantage was the lack of comparison with a validated instrument measuring similar outcomes, and hence the evidence for construct validity was inadequate.
REALD-30 for the Chilean population: The forward translation of the instrument by two independent native Spanish speakers met the criteria of an accurate translation process, followed by a synthesis to produce a consensus version. However, back-translation was not reported, and pretesting was not performed. Although an evaluation was made by four experts in dental public health, the design was doubtful. The internal consistency was high, and the reliability was sufficient in a retest performed after four weeks (Table S8). Despite that, the methodological quality for reliability was doubtful due to the inadequate sample size. For predictive and convergent validity, Pearson's r and Spearman's rho correlation coefficients were estimated. A significant, strong, positive association of the Spanish REALD-30 was found with SAHLSA (r = 0.71; rs = 0.69, p < 0.01). The correlations with CPI, OHIS, DMFT, and OHIP-49sp were also statistically significant.
Thai
REALD-30 is the only tool that has been translated into the Thai language [68]. ThREALD-30 rated positive for the initial translation, synthesis, and pre-test steps of the cross-cultural adaptation process. The authors reported that the back-translation and expert committee review were performed; however, there was no information on the number of translators involved, and the review process was also doubtful.
The internal consistency and test-retest reliability of ThREALD-30 were excellent (Table S8). Even so, the evidence for reliability is limited due to doubtful methodological quality. ThREALD-30 had significant negative Spearman's rank correlations with OHIP-14, oral health status, DMFT, OHIS, and clinical attachment loss (Table S8). A significant drawback was the lack of comparison with other validated instruments measuring OHL, due to which the methodological quality for construct validity was inadequate.
Turkish
REALD-30 is the only tool that has been translated into the Turkish language [69] and rated positive for all the steps required for an accurate translation and cross-cultural adaptation.
The internal consistency and the test-retest reliability were well above the recommended levels (Table S8). Classical test theory and Rasch analysis were performed, and both suggested that OHL is multidimensional. The results of the CFA indicated that the two-factor model demonstrated a better fit than the one-factor model (χ²/df = 1.34, CFI = 0.89, TLI = 0.89, and RMSEA = 0.052). The Rasch analysis explained 37.9% of the total variance in this data set. However, the sample size used in the analysis was less than five times the number of items; hence, the methodological quality was inadequate. TREALD-30 was positively and significantly associated with REALM, as well as with the participants' ability to read hospital materials, indicating its convergent validity. TREALD-30 scores were weakly but significantly correlated with the number of missing teeth, age, OHIP-14 score, years of schooling, self-rated oral health, and family monthly income. Further, there was a significant association with the use of dental floss and the daily consumption of sugar-added foods and beverages, suggesting the predictive validity of TREALD-30.
Discussion
For the selection of instruments in different languages and cultures, the WHO recommends the translation and cross-cultural adaptation of existing instruments [76], thereby improving communication between patients and healthcare providers and enabling international comparisons. All 16 tools were included in the review [54][55][56][57][58][59][60][61][62][63][64][65][66][67][68][69]; however, only seven tools followed all the steps required for an accurate translation process [54,55,62,63,65,66,69]. The main reason behind poor ratings was the lack of detailed information provided on the cross-cultural adaptation process. A poor translation process creates inconsistencies between the translated and original versions of instruments, which can affect the validity of the instrument [77]. The results of this review indicate that the process of translation did not affect the reliability of an instrument: despite poor translation and cross-cultural adaptation processes, most tools had high reliability and internal consistency, reflected by the ICC and Cronbach's α values, respectively. Generally, the methodological quality of the translation process was rated negative mainly due to the involvement of a single translator in both the forward and backward translations. It is recommended that the translation process be performed by at least two independent translators to ensure the translated version reflects the same item content as the original version [78]. Moreover, many studies did not clearly report on the existence of an expert committee, and where this was reported, the design was often doubtful. It is recommended that an expert committee comprising methodologists, health professionals, language professionals, and translators reach a consensus on any discrepancy to develop a prefinal version for field testing, which is crucial to achieving cross-cultural equivalence [79].
The COSMIN checklist provides a separate determination of the methodological quality of the studies and of their results when comparing the psychometric properties of instruments during a systematic review. This approach provides independent quality ratings for each psychometric property, making it advantageous over other tools [80]. The most commonly reported psychometric property was internal consistency, expressed by Cronbach's α, and the results were adequate [54][55][56][57][58][59][60][61][62][64][65][66][67][68][69]. Furthermore, the methodological quality of the studies for internal consistency was generally very good. Internal consistency ascertains the uniformity of the measures [81]. Higher values represent a strong correlation between the items of a scale, and values higher than 0.9 indicate that some items are not essential and can be omitted to shorten the scale [82].
Among the nine studies that reported on structural validity [54][55][56][57][58][60][61][62]69], six had very good or adequate methodological quality [54][55][56][57]60,61]. An inadequate sample size was the common methodological shortcoming of the remaining studies [58,62,69]. Structural validity assesses the dimensional structure of an instrument through factor analyses [84]. The purpose of factor analysis is to reduce the variables and items of a questionnaire to data that facilitate the understanding of the underlying concepts and their interpretation [85].
Although most of the studies reported statistically significant associations, only 10 studies had adequate or very good methodological quality for construct validity [54][55][56][57][58][59]62,65,66,69]. The common reasons for inadequate methodological quality for construct validity were a lack of comparison with validated health literacy tools used in a similar setting, correlation coefficients not being calculated, and inappropriate sample sizes. A correct hypothesis for the construct validity of an instrument provides evidence that the instrument measures what it is intended to measure [52].
The results show that there is no comprehensive OHL tool examining all the domains and psychometric properties required by the COSMIN checklist. Despite high values for the reported psychometric properties, it is important to note that the quality of the majority of the studies was rated doubtful or inadequate. Moreover, none of the studies evaluated measurement error, cross-cultural validity, or responsiveness, which is very concerning. It is important to know the minimal important change in OHL scores to understand whether the smallest measured changes in literacy level are meaningful and matter to patients [86]. Information regarding the cross-cultural validity of the tools, using measurement invariance analysis, is also required to identify differences between groups, such as age, sex, or different patient populations; it is possible that such differences lead groups to respond differently to a particular item [38]. Similarly, the responsiveness of an instrument is important for measuring changes over time [87]. However, due to the cross-sectional nature of the included studies, the ability of OHL tools to measure OHL over time was not reported.
The existing OHL tools measure word recognition, numeracy, and reading skills in an oral health context [42]. The wider range of skills required for decision making, communication, and health care use must be incorporated into the tools to capture all dimensions of OHL. Low health literacy, limited English proficiency, and cultural barriers have been identified as a "triple threat" to effective communication between patients and healthcare providers [88]. Health literacy is an emerging field, and the integration of cultural and linguistic competence is necessary to provide competent care. The original versions of the tools are of limited use for assessing OHL levels in non-English-speaking countries.
Strengths and Limitations
To the best of our knowledge, this is the first systematic review to evaluate the cross-cultural adaptation and psychometric properties of OHL tools in languages other than English. This systematic review followed standard guidelines for the process of translation and cross-cultural adaptation and the COSMIN checklist in reporting the psychometric analysis of the OHL tools, which provides a standardized, detailed, and transparent framework for evaluating the measurement properties of health outcome measurement instruments. The COSMIN guidelines were preferred over other methods due to the advantage of comprehensively assessing the quality of all domains of psychometric properties, whereas other methods were designed for evaluating only limited aspects, such as criterion validity [89] or reliability [90]. Another strength is that the review evaluated translated versions of OHL tools by language, which facilitates selection of the best available tool in a given language.
This systematic review has a few limitations. First, we only included studies published in the English language, and it is therefore possible that we missed some translated versions of OHL tools published in non-English journals. Second, one study was excluded due to inaccessibility despite repeated attempts to contact the authors, which could have provided further insights. Third, we used the "worst score counts" principle, as per the COSMIN Risk of Bias checklist, which means the methodological quality is interpreted by taking the lowest score achieved for a psychometric property, and poor aspects of a study cannot be compensated for by its stronger aspects. For example, even if only one of several items for a psychometric property scores inadequate, the overall rating of that property is reported as inadequate. Finally, only three databases were searched, so it is possible that some relevant studies were missed.
Implications
The findings of this review may provide important information to relevant stakeholders, including oral and dental health professionals, treatment teams, and researchers, regarding approaches to measuring OHL in culturally and linguistically diverse populations. Since substantial information on psychometric properties is lacking and the assessment or reporting of many studies is poor, it is very challenging to recommend a single best tool. Therefore, high-quality studies are required to fill the gaps in knowledge regarding the different aspects of cross-cultural adaptation and psychometric analysis of OHL tools. There is a need for accurately cross-culturally adapted tools with suitable reliability, validity, and responsiveness to measure OHL in non-English-speaking countries, where the prevalence of oral diseases is disproportionately higher.
Conclusions
The quality of the translations and cross-cultural adaptations was poor, and none of the tools were evaluated for all aspects of their psychometric properties. A significant amount of information regarding the cross-cultural adaptation process and psychometric properties was missing or of doubtful methodological quality. There is no comprehensive tool that evaluates all aspects of psychometric properties in cross-cultural settings. Despite promising values for some measurement properties, the evidence for reliability and validity is limited due to methodological deficiencies. Future studies on cross-cultural adaptation should emphasize the use of multiple bilingual translators and the role of expert panels in the process. Further work is required to develop tools incorporating all aspects of psychometric analysis to ensure clinical utility and cultural competence.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/ijerph181910422/s1, Table S1: PRISMA Checklist, Table S2: MEDLINE (EBSCO) search strategy, Table S3: Reasons for excluded studies, Table S4: Guidelines for the process of the cross-cultural adaptation of self-reported measures (adapted from Costa et al.), Table S5: Updated criteria for suitable measurement properties, Table S6: Characteristics of oral health assessment instruments, Table S7: Assessment of quality of translation and cross-cultural adaptation of oral health literacy tools into different languages, Table S8: Rating of the measurement properties per language and tool using the criteria for the suitable psychometric properties, and Table S9: Methodological quality assessment of studies on psychometric properties of the included tools using the COSMIN Risk of Bias checklist.
Data Availability Statement:
The data presented in this study are not publicly available due to privacy.
Acknowledgments:
We acknowledge the assistance of Katrina Chaudhary, School of Health Sciences librarian at Western Sydney University, in developing and testing the search strategy. We would like to acknowledge Mohit Tolani, who was involved in the initial stages of conceptualizing the research questions.
Conflicts of Interest:
The authors declare no conflict of interest.
The Therapeutic Role of Monocyte Chemoattractant Protein-1 in a Renal Tissue Engineering Strategy for Diabetic Patients
In this study, we aimed to boost the functional output of intra-kidney islet transplantation for diabetic patients using a tissue-engineered polymeric scaffold. This highly porous electrospun scaffold featured randomly distributed fibers composed of polycaprolactone (PCL) and poliglecaprone (PGC). It successfully sustained murine islets in vitro for up to 4 weeks with no detectable cytotoxicity. The in vivo study showed that the islet population proliferated by 89% within 12 weeks when delivered by the scaffold but by only 18% when freely injected. Correspondingly, the scaffold-delivered islet population showed a greater capability to produce insulin, which in turn further drove down blood glucose within 12 weeks after the surgery. Islets delivered by the scaffold most effectively prevented diabetic deterioration of the kidney, as evidenced by the absence of kidney or glomerular enlargement and by physiological levels of creatinine, urea nitrogen, and albumin through week 12 after the surgery. Contrary to traditional wisdom in diabetes research, the mechanistic study suggested that monocyte chemoattractant protein-1 (MCP-1) was responsible for the improved preservation of renal function. This study revealed a therapeutic role of MCP-1 in rescuing the kidneys of diabetic patients, which can be integrated into a tissue-engineered scaffold to simultaneously preserve renal function and islet transplantation efficacy. This study also affords a simple yet effective solution to improve the clinical output of islet transplantation.
Introduction
According to the American Diabetes Association, diabetes afflicted 25.8 million patients in the U.S. in 2011 and dramatically increases the likelihood of other diseases, such as heart disease, kidney failure, and nervous system diseases. In particular, type I diabetes, which results from the autoimmune destruction of the functional pancreatic beta-cells responsible for producing insulin, heavily burdens both children and adult patients. To restore depressed or lost insulin production, scientists have extensively explored islet transplantation as a therapeutic solution over the past few decades but have met with limited clinical success [1]. A number of challenges have thwarted this endeavor, including inflammation and lack of vasculature, which account for the rapid loss of the functional islet population after transplantation [2,3]. In addition, recent tissue engineering research has revealed that in vivo tissue regeneration and/or remodeling is heavily regulated by the immune system, which is typically activated by the introduction of foreign materials and traumatic surgery [4].
In this study, we pioneered the use of an electrospun composite scaffold of polycaprolactone (PCL) and poliglecaprone (PGC) as the delivery vehicle for syngeneic murine islet transplantation, with the aim of improving the clinical performance of islet transplantation in diabetic patients. PGC and PCL are both FDA-approved degradable suture materials, and thus the composite scaffold is expected to provide temporary structural support for the islet population to integrate with the host. Our investigation showed that, compared with freely injected islets, the scaffold increased the proliferation of transplanted islets and their ability to regulate blood glucose and glomerular function in diabetic mice. Furthermore, a mechanistic study revealed that monocyte chemoattractant protein-1 (MCP-1) was responsible for this improvement, suggesting a promising therapeutic candidate for future renal tissue engineering strategies for diabetes.
Materials and Methods
The Fabrication and the Morphological Characterization of the Electrospun Scaffold
PGC (Advanced Inventory Management, Mokena, IL) and PCL (Absorbable Polymers, Birmingham, AL) were dissolved (weight ratio of 1:3) in 1,1,1,3,3,3-hexafluoro-2-propanol (HFP) (Sigma Aldrich, St. Louis, MO) to achieve a total concentration of 12% (w/v). The solution was then loaded into a syringe capped with a 27-gauge blunt needle positioned 25 cm from the collection board. 0.5 mL of the solution was electrospun onto the collection board at a voltage of 30 kV and a feeding rate of 3 mL/hr. Thereafter, the scaffold was retrieved from the board and desiccated in vacuum for 24 hr prior to subsequent analyses. A square specimen measuring 1 cm × 1 cm was cut from the scaffold and sputter-coated with gold. A scanning electron microscope (SEM) (Philips SEM 510) was used to image the specimen at an acceleration voltage of 30 kV.
MIP-luc Transgenic Mice and the Creation of Diabetic Mice
MIP-luc transgenic mice (in a C57BL/6 background) were generated, in which the transgene comprises the MIP promoter fragment driving the expression of firefly luciferase (MIP-luc) [5,6]. Beta cells from MIP-luc mice can be visualized using bioluminescent imaging, and their mass is correlated with the bioluminescent signal [7]. Hemizygous MIP-luc transgenic mice (littermates from a single homozygous male MIP-luc mouse) were treated with a single intraperitoneal (IP) injection of streptozotocin (STZ) (150 mg/kg, Sigma Chemical, St. Louis, MO) to induce diabetes. Mice with non-fasted blood glucose values >400 mg/dl for more than 2 consecutive days (SureStep; Lifescan, Milpitas, CA) were considered diabetic. The animal protocol of this study was approved by the Animal Care and Use Committee of the Second Military Medical University and Shanghai Changzheng Hospital (Permit Number 08-0086).
In vitro Biocompatibility Analysis and Measurement of Insulin
Syngeneic islets from MIP-luc transgenic C57BL/6 mice were isolated following intraductal collagenase digestion (Collagenase P, 0.3 mg/ml; Roche, Indianapolis, IN) and purified by Ficoll gradient centrifugation (Sigma Aldrich, St. Louis, MO) as previously described [8,9]. Circular specimens (D = 6.5 mm) were cut from the desiccated scaffold and plated into a 96-well tissue culture plate (TCP). All specimens were incubated with 70% ethanol for at least 15 min and thoroughly rinsed with sterile phosphate-buffered saline (PBS). Freshly collected islets were seeded (100 islet equivalents/scaffold) on the scaffold or directly on the bottom surface of the plate in islet growth media and incubated for up to 4 weeks at 37 °C with 5% CO2. The viability of the islet population was measured weekly with the tetrazolium compound [3-(4,5-dimethylthiazol-2-yl)-5-(3-carboxymethoxyphenyl)-2-(4-sulfophenyl)-2H-tetrazolium, inner salt] (MTS) (Promega Corporation, Madison, WI) per the manufacturer's protocol. For the in vitro insulin secretion assay, freshly harvested islets were seeded on the scaffold (100 islet equivalents/scaffold) contained in a 96-well TCP and cultured for 24 hr at 37 °C with 5% CO2. The supernatant of the cell media was collected at 3 hr and 24 hr and measured by an RIA kit (Linco Research, Inc., St. Charles, MO) per the manufacturer's protocol. In the control group, 100 islet equivalents were directly cultured in the 96-well TCP.
Transplantation of Islets into Diabetic Mice
Islets were harvested, and scaffold samples (1 mm × 1 mm) were fabricated and sterilized as described above. Islets were seeded (100 islet equivalents/scaffold) on the scaffold in a minimal volume of islet growth media and incubated for 3 hr at 37 °C with 5% CO2 prior to surgical implantation. Recipient mice were anesthetized with vaporized 2.5% isoflurane via inhalation. The islets, with or without a scaffold, were surgically placed under the kidney capsule. Each mouse received one set of islets/scaffold or freely injected islets in one kidney. The experiment comprised four groups with 5 mice in each group: (1) a sham group, which received neither islets nor scaffolds; (2) a scaffold group, which received scaffolds only; (3) an islet group, which received freely injected islets; and (4) an islet/scaffold group, which received islets on the scaffold. All mice were sacrificed 12 weeks after the surgery.
In vivo Proliferation and Functions of Transplanted Islets
Bioluminescent optical imaging was performed using a Xenogen IVIS 200 imaging system (Xenogen, Alameda, CA) as previously described [6]. Briefly, MIP-luc mice were fasted for 4 h, shaved, and anesthetized with vaporized isoflurane using the Xenogen system. Mice were placed on their sides on the imaging stage and an overlay image was initially taken. Mice were then injected i.p. with 15 mg/ml D-luciferin in sterile PBS (150 mg/kg) and, exactly 14 min after the injection, a bioluminescent image was captured with an exposure time of 1 minute. Subsequent image processing, including quantification of bioluminescence, was conducted using the Living Image Software v. 2.05 (Xenogen).
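As a simple illustration of the downstream quantification (the actual analysis used the Living Image software; this minimal Python sketch with invented flux values merely shows the percent-change computation reported in the Results):

```python
import numpy as np

# Hypothetical total-flux readouts (photons/s) exported from Living Image
# for one mouse at weeks 0, 4, 8 and 12; all values are illustrative only.
weeks = np.array([0, 4, 8, 12])
total_flux = np.array([1.00e6, 1.35e6, 1.62e6, 1.89e6])

# Percent change of the bioluminescent signal relative to baseline,
# the same quantity used to compare the islet and islet/scaffold groups
percent_change = 100.0 * (total_flux - total_flux[0]) / total_flux[0]
for w, p in zip(weeks, percent_change):
    print(f"week {w:2d}: {p:+.0f}% vs. baseline")
```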
Blood samples were drawn from the tail vein of each mouse immediately before the surgery and then every 4 weeks post-surgery. All mice were non-fasted before the blood collection. Blood glucose was measured by a OneTouch Ultra glucometer (Lifescan, Johnson & Johnson, Milpitas, CA). Serum insulin was assayed with a rat insulin ELISA kit with mouse insulin standards (Crystal Chem Inc., Chicago, IL), and serum C-peptide 2 was assayed with a rat/mouse C-peptide 2 ELISA kit (Millipore, Billerica, MA, USA) per the manufacturers' protocols.
An oral glucose tolerance test (OGTT) was performed in mice 4 weeks after the surgery. Briefly, mice were fasted for 16 hr and blood samples were collected as described above to obtain the baseline glucose level (0 min). Thereafter, each mouse received 2 g/kg body weight of a 100 mg/ml glucose solution (Sigma Aldrich) in sterile water delivered by oral gavage. At 30, 60 and 120 min after the glucose administration, blood samples were collected to measure the glucose level as described above.
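For clarity, the gavage volume implied by this dosing can be computed directly; the sketch below is a hypothetical helper (not part of the original protocol), with the 25 g body weight chosen only as an example:

```python
def gavage_volume_ml(body_weight_g: float,
                     dose_g_per_kg: float = 2.0,
                     solution_mg_per_ml: float = 100.0) -> float:
    """Volume of glucose solution needed to deliver the OGTT dose by gavage."""
    dose_mg = dose_g_per_kg * 1000.0 * body_weight_g / 1000.0  # mg of glucose
    return dose_mg / solution_mg_per_ml                        # mL to gavage

# A 25 g mouse dosed at 2 g/kg from a 100 mg/mL stock needs 0.5 mL
print(f"{gavage_volume_ml(25.0):.2f} mL")
```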
To confirm the effect of transplanted islets, 4 weeks after the surgery the kidney housing the islet and/or scaffold was surgically removed while the mice were anesthetized by vaporized isoflurane. The blood glucose, insulin and C-peptide 2 concentrations were measured 2 weeks after the kidney removal as described above.
Histological Evaluation of Retrieved Islets and Kidneys
After euthanasia, harvested kidneys were weighed on a top-loading digital scale. Kidneys and islets were then embedded in Tissue-Tek OCT (Sakura Finetek, Torrance, CA) and snap-frozen in liquid nitrogen. Samples were sliced into 4 μm-thick sections that were serially collected on charged glass slides. The histology of the samples was studied by staining with hematoxylin and eosin (H&E) (Sigma Aldrich). Insulin-secreting cells were stained with an anti-insulin antibody (Abcam, Cambridge, MA, USA) and counterstained with hematoxylin. One slide from each mouse that showed at least 5 glomeruli was selected for the calculation of glomerular area. All mice from each group were included, thus giving at least 25 data points per group in the glomerular area calculation. The area of glomeruli (the structure contained within Bowman's capsule) was calculated in each harvested kidney tissue from every treatment group. All slides were scanned by a CRi Pannoramic Scan Whole Slide Scanner and images were processed using the Pannoramic Viewer (3D Histech, Budapest, Hungary), which allows a continuous selection of magnification.
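The glomerular-area measurement itself reduces to converting a traced region of pixels into physical units using the scanner's spatial calibration; a minimal sketch, where both the pixel count and the µm-per-pixel value are assumed for illustration:

```python
def roi_area_um2(pixel_count: int, um_per_pixel: float) -> float:
    """Convert a traced glomerular ROI (in pixels) to an area in µm²,
    using the slide scanner's spatial calibration."""
    return pixel_count * um_per_pixel ** 2

# e.g., a 15,000-pixel ROI scanned at 0.5 µm/pixel corresponds to 3750 µm²
print(f"{roi_area_um2(15_000, 0.5):.0f} µm²")
```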
Evaluation of Renal Functions after the Islet Transplantation
Blood and urine samples from each mouse were collected immediately before the surgery and then every 4 weeks after the surgery. Plasma cytokines of interest, including MCP-1, interleukin-6 (IL-6) and interferon gamma (IFN-γ), were analyzed by Immunoassays & Multiplex Kits (EMD Millipore, Billerica, MA) per the manufacturer's protocol. Kidney function markers, including blood creatinine (Abcam), blood urea nitrogen (Bio Scientific Corp, Austin, TX), urine creatinine (Abcam) and urine albumin (Abnova, Walnut, CA), were assayed following the manufacturers' protocols.
The Mechanistic Study of MCP-1 on Retaining Renal Functions in Diabetic Mice
STZ was given at a dose of 50 mg/kg every two days for a total of 5 doses to induce diabetes and to allow a gradual renal deterioration before the administration of insulin and MCP-1. Thereafter, each mouse received one LinBit insulin tablet (LinShin Canada Inc, Toronto, Ontario) via subcutaneous implantation every 4 weeks. In addition, one group of insulin-treated mice (n = 5) received recombinant mouse MCP-1 (BD Pharmingen, San Jose, CA) dissolved in sterile phosphate-buffered saline (PBS) by weekly intraperitoneal injection until euthanasia (0.5 mL of 20 μg/mL). Kidney function markers were assayed every 4 weeks as described above. Mice were euthanized 12 weeks after the insulin administration and kidney tissues were analyzed as described above.
Statistical Analysis
All images were processed with ImageJ. All data were analyzed using Student's t-test or ANOVA with a Tukey test where applicable. The significance level was set at 95% (α = 0.05). All results are presented as mean ± standard deviation (SD).
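For concreteness, a minimal Python sketch of the same statistical workflow (Student's t-test, one-way ANOVA, Tukey's HSD) using SciPy and statsmodels; the group means, SDs and n = 5 match the week-12 glucose values reported below, but the simulated samples are illustrative only:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
# Illustrative week-12 blood glucose readouts (mg/dl), n = 5 per group
sham     = rng.normal(492, 37, 5)
scaffold = rng.normal(461, 40, 5)
islet    = rng.normal(329, 36, 5)
islet_sc = rng.normal(239, 33, 5)

# Two-group comparison: Student's t-test
t, p = stats.ttest_ind(islet, islet_sc)
print(f"t = {t:.2f}, p = {p:.4f}")

# Four-group comparison: one-way ANOVA followed by Tukey's HSD
f, p_anova = stats.f_oneway(sham, scaffold, islet, islet_sc)
values = np.concatenate([sham, scaffold, islet, islet_sc])
groups = np.repeat(["sham", "scaffold", "islet", "islet/scaffold"], 5)
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```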
The Electrospun Scaffold could Sustain the Islet Population in vitro
The SEM micrograph showed that the scaffold featured randomly distributed fibers at the micro- and nanoscale (Figure 1A). The majority of fibers had a diameter between 800 nm and 1200 nm, physically analogous to the protein fibers and bundles in the native extracellular matrix. It should be noted that each individual fiber was a composite of PGC and PCL.
To assess the biocompatibility of the electrospun PGC/PCL scaffold, we measured the proliferation of harvested mouse islets cultured on the scaffold for up to 4 weeks using the MTS assay (Figure 1B). The islet population grew steadily over 4 weeks on the scaffold without detectable cytotoxic effects from the scaffolding materials. Moreover, starting from week 2, the islet population on the scaffold outgrew its counterpart on the TCP, suggesting that the electrospun scaffold provided a more favorable biochemical and biophysical environment for islets to adhere and grow, and that the scaffolding materials, either intact or degraded, possessed no cytotoxicity to islets. The scaffold also allowed a rapid and convenient delivery of islets in surgical procedures (Figure S1 in the supplement).
Ready secretion of insulin by transplanted islets governs the ultimate success of this therapeutic strategy. The concentrations of insulin secreted by islets cultured on the scaffold were 10.27 ± 2.45 ng/100 islets at 3 hr and 33.45 ± 4.56 ng/100 islets at 24 hr, respectively. Correspondingly, islets on the TCP yielded insulin at 7.24 ± 3.11 ng/100 islets at 3 hr and 31.54 ± 3.89 ng/100 islets at 24 hr, respectively (Figure 1C). Within both groups, a significant increase was observed between 3 hr and 24 hr, showing that the islets readily secreted insulin on both substrates. However, no difference was observed between the two groups at either time point, suggesting that the scaffold was an equally favorable substrate for islets to attach to and immediately secrete insulin. This result warranted the in vivo application of this scaffold as a delivery vehicle for islet transplantation in diabetic mice.
Islets Delivered by the Scaffold Witnessed the Greatest Proliferation and Functional Output
After the surgery, the mice that received transplanted islets were non-invasively monitored by bioluminescent imaging for 12 weeks (Figures 2A and 2B). The heat map of the transplantation site revealed that the islet populations in both the islet group and the islet/scaffold group experienced a sustained increase. However, the islets in the islet/scaffold group consistently outnumbered their freely injected counterparts from week 4 post-surgery. Within the 12-week period, the bioluminescent signal increased by 89% in the islet/scaffold group but only 18% in the islet group, further attesting to the beneficial effect conferred by the scaffold.
We also measured three critical factors that determined the success of this therapeutic strategy: serum insulin, C-peptide 2 and blood glucose (Figure 2C-2E). On week 4 the insulin concentration reached 210.54 ± 42.33 pg/ml in the islet/scaffold group and 150.27 ± 34.11 pg/ml in the islet group, without a significant difference between the two groups. Thereafter, the insulin concentration in the islet/scaffold group reached 250.92 ± 37.34 pg/ml on week 8 and 230.21 ± 29.65 pg/ml on week 12, respectively. In contrast, the insulin concentration in the islet group reached 172.34 ± 29.38 pg/ml on week 8 and 164.53 ± 31.23 pg/ml on week 12, respectively. On both week 8 and week 12, a significant difference was observed between these two groups, suggesting that the scaffold promoted islet proliferation and functional output in vivo on a long-term basis. The fact that no insulin was detected in the scaffold or sham groups suggested that the insulin detected in the islet and islet/scaffold groups was secreted by transplanted islets, rather than endogenous ones. Correspondingly, the profile of C-peptide 2 secretion paralleled that of insulin. No difference was observed on week 4 between the islet (32.45 ± 6.77 pM) and islet/scaffold (43.27 ± 5.67 pM) groups. However, a significant difference was observed on week 8 (37.21 ± 4.82 pM in the islet group and 53.52 ± 6.21 pM in the islet/scaffold group) and week 12 (36.32 ± 5.91 pM in the islet group and 51.23 ± 4.98 pM in the islet/scaffold group). No C-peptide 2 was detected in either the sham or scaffold group at any time point within 12 weeks. The blood glucose levels of the islet (323 ± 41 mg/dl) and islet/scaffold (253 ± 46 mg/dl) groups were significantly lower than those of the sham (460 ± 25 mg/dl) and scaffold (418 ± 28 mg/dl) groups on week 4. Furthermore, on weeks 8 and 12 the glucose level of the islet/scaffold group was significantly lower than in the other three groups. On week 8, the glucose concentrations were 430 ± 41 mg/dl in the sham group, 483 ± 37 mg/dl in the scaffold group, 345 ± 31 mg/dl in the islet group and 223 ± 35 mg/dl in the islet/scaffold group. On week 12, the glucose concentrations were 492 ± 37 mg/dl in the sham group, 461 ± 40 mg/dl in the scaffold group, 329 ± 36 mg/dl in the islet group and 239 ± 33 mg/dl in the islet/scaffold group. The OGTT result further confirmed that transplanted islets effectively capped the glucose spike at 30 min after the oral administration and returned the glucose level to the physiological range by 120 min (Figure 2F). Mice without islet transplantation suffered a significantly higher glucose spike at 30 min and a sustained high glucose level through 120 min. The fact that the blood glucose of the islet and islet/scaffold groups reached a level comparable to those of the sham and scaffold groups 2 weeks after the transplanted islets were removed confirms that the transplanted islets were the effective regulator of blood glucose in these mice (Figure 2G). Meanwhile, the insulin and C-peptide concentrations were below the detection level.
The immunohistochemistry staining confirmed that transplanted islets in both the islet and islet/scaffold groups readily secreted insulin through week 12 (Figures 3A and 3B). Moreover, the histology of islet transplants in both the islet and islet/scaffold groups showed comparable levels of inflammatory cell invasion (Figures 3C and 3D), suggesting that the employment of the scaffold did not lead to increased local inflammation.
Islets Delivered with Scaffolds Preserved Renal Functions
The histological study demonstrated that edema in the kidney cortex, a typical complication in diabetic patients, was significant in the sham and scaffold groups while largely absent in the islet and islet/scaffold groups (Figure 3A-3D). The glomerular areas in the sham, scaffold, islet and islet/scaffold groups were 5543 ± 492 μm², 5820 ± 375 μm², 3752 ± 423 μm² and 3084 ± 372 μm², respectively (Figure 3E). The islet/scaffold group had the smallest glomerular area, followed by the islet group, with the sham and scaffold groups featuring the largest ones. Similarly, the kidney weights in the sham, scaffold, islet and islet/scaffold groups were 82 ± 12 mg, 75 ± 9 mg, 48 ± 7 mg and 42 ± 8 mg, respectively (Figure 4F). The kidneys from the islet and islet/scaffold groups were significantly smaller than those from the sham and scaffold groups.
Plasma and urine proteins in the sham and scaffold groups saw a sustained increase through week 12 after the surgery due to deteriorating renal functions. In contrast, mice that underwent islet transplantation were largely spared. By week 12 after the surgery, the blood creatinine concentration was 2.00 ± 0.24 mg/dl in the sham group; the concentration profiles of all four markers across the groups are shown in Figure 5A-5D. The concentrations of blood urea nitrogen and urine albumin in mice from the islet and islet/scaffold groups were significantly lower than those from the sham and scaffold groups, but no difference was observed between the islet and islet/scaffold groups. The physiological levels of all these proteins are given in Table S1 in the supplement.
MCP-1 was Responsible for the Improved Renal Functions in Mice from the Islet/Scaffold Group
The activation of the immune system due to the transplantation surgery brings about a ripple effect across the entire in vivo system [10]. We hypothesized that immunological cytokines perturbed by the transplanted islets and/or scaffolding materials accounted for the improved renal functions. Therefore, we assayed prominent immunological cytokines involved in post-surgery tissue regeneration, including MCP-1, IL-6 and IFN-γ (Figure 6). We discovered that the MCP-1 concentration in mice from the islet/scaffold group rose rapidly from 228.37 ± 22.56 pg/ml to 380.21 ± 35.09 pg/ml in the first 4 weeks after the surgery and exceeded that of the other groups on week 4. This advantage was retained on week 8, thus exhibiting a drastically different MCP-1 profile compared to those in the sham and scaffold groups. No difference in IL-6 or IFN-γ among the treatment groups was observed through week 12. These findings led us to further investigate the mediating role of MCP-1 in retaining renal functions.
The blood glucose level in diabetic mice was first allowed to exceed 400 mg/dl over an 8-week period to allow the compromise of renal functions while inducing diabetes. Thereafter, insulin was administered either with or without recombinant MCP-1 to recover the compromised renal functions. The administered insulin effectively drove down the blood glucose level to the physiological range by week 12 (Figure 7A). In the meantime, blood creatinine, urine creatinine, blood urea nitrogen and urine albumin were all reduced to near-physiological levels (Figure 7B-7E). By week 12, the creatinine concentrations in blood and urine in mice from the insulin/MCP-1 group were significantly lower than those from the insulin group. In addition, the glomerular area in mice from the insulin/MCP-1 group was significantly smaller than in those from the insulin group, but no difference was observed in kidney weight between the two groups (Figure 7F-7I).

Figure 1. The SEM micrograph of the electrospun scaffold. The scaffold comprised randomly distributed micro-fibers with a highly porous micro-structure, analogous to native extracellular matrix. (B) Proliferation assay of islets seeded on the scaffold and tissue culture plate (TCP). Islets cultured on the scaffold saw a steady increase through week 4 and outgrew their peers on the TCP as early as week 2. This proliferative advantage was sustained through week 4, evidencing that the scaffold provided a more favorable biophysical environment than a standard tissue culture surface for islet growth. A star indicates a statistical difference between the scaffold and TCP groups at respective time points (n = 5). (C) In vitro insulin secretion by islets on the scaffold and TCP at 3 hr and 24 hr after seeding. A difference was observed between 3 hr and 24 hr within each substrate group, but no difference between groups at either time point. The comparable insulin concentrations between the islet/scaffold and islet/TCP groups at both time points confirmed that the scaffold could equally recoup the critical insulin secretion capability within 24 hr. doi:10.1371/journal.pone.0057635.g001
Discussion
Tissue-engineered polymeric scaffolds have been widely used to confer regenerative benefits to a great variety of native tissues, such as vasculature, cardiac patches, bone and cartilage [11-23]. Some tissue-engineered scaffolds have led to remarkable clinical success [24]. Among the various fabrication methods, electrospinning has remained one of the most popular and has proved particularly useful in soft tissue regeneration. Its success in bone and cardiovascular tissue engineering has shed new light on therapeutic solutions for diabetes. Autologous islet transplantation remains the most effective treatment for type I diabetes but suffers from a limited donor supply and a poor survival rate after transplantation [1-3,25]. To that end, we explored whether an electrospun scaffold could improve the survival and functional output of transplanted islets.
The highly porous microstructure of the electrospun scaffold is supposed to facilitate islet adhesion, survival and proliferation for increased functional output such as insulin secretion. PCL is known to be the most durable material for electrospun scaffolds, with an in vivo degradation time of around six months [26]. To keep pace with the production of native extracellular matrix, we incorporated PGC, which degrades much faster than PCL, into the scaffold to achieve an optimal degradation rate. Our in vitro results evidenced that the scaffolding materials possessed no cytotoxicity for up to 4 weeks and that the physical microenvironment was more favorable for islets to adhere and grow than a standard TCP surface. Moreover, islets cultured on the scaffold retained their critical physiological function by secreting insulin at a level comparable to their counterparts on the TCP at both 3 hr and 24 hr, suggesting that the scaffolding materials did not compromise the physiological functions of islets.

Figure 2. The computational quantification of bioluminescent signal showed that the islet population in the islet/scaffold group started to outnumber its peers from the islet group on week 4 and sustained this advantage through week 12. The growing difference between mice from the islet/scaffold and islet groups within this 12-week window suggests that the scaffold unleashed a greater capability to promote islet proliferation. (C) Insulin production by transplanted islets. (D) C-peptide 2 production by transplanted islets. Mice from both the islet/scaffold and islet groups witnessed a large increase of insulin and C-peptide 2 within the first 4 weeks. However, on week 8 and week 12, islets in the islet/scaffold group outperformed those in the islet group. (E) Blood glucose levels. By week 4, mice in the islet and islet/scaffold groups witnessed a significant decrease of blood glucose compared to their counterparts in the sham and scaffold groups. However, no difference was observed between the islet and islet/scaffold groups. On weeks 8 and 12, the blood glucose in mice from the islet/scaffold group was significantly lower than in those from the islet group, with mice from the sham and scaffold groups suffering from hyperglycemia. (F) Oral glucose tolerance test (OGTT). Following the oral administration of glucose, mice from the islet and islet/scaffold groups saw a temporary increase of glucose level within the first 30 min, but the glucose level rapidly fell to physiological levels by 120 min. On the contrary, mice from the sham and scaffold groups suffered a consistently high glucose level despite a minor decrease by 120 min. (G) Glucose levels after the removal of transplanted islets. Two weeks after the removal of transplanted islets, mice from the islet and islet/scaffold groups yielded a blood glucose level comparably high to those in the sham and scaffold groups. A star indicates a statistical difference among groups at respective time points (n = 5). The black line (sham group) was masked by the red line (scaffold group) in panels B, C and D because the readouts from these two groups were nearly identical. doi:10.1371/journal.pone.0057635.g002
The most formidable challenge in tissue engineering research is to sustain implanted cells in vivo and to retain their normal functions for therapeutic purposes. This typically requires rapid integration with the host, particularly connection to the capillary network responsible for nutritional supply and removal of metabolic waste, followed by remodeling of the in vitro engineered tissue to evolve into its native version [27,28]. A proper engagement of the immune system has also been demonstrated to be critical for the overall success of in vivo tissue regeneration [4]. To test the clinical potential of our novel strategy, we performed an extensive in vivo investigation to gauge the therapeutic effect of islets transplanted with scaffolds. The bioluminescent images and quantification showed that the scaffold greatly promoted the growth of islets compared to those freely injected within 12 weeks after the surgery, as evidenced by the sustained increase of bioluminescent signal, which correlates with the mass of beta cells from MIP-luc mice [7]. By the end of week 12, the islet population had grown by 89% in the islet/scaffold group compared to just 18% in the islet group. These promising results were further strengthened by the quantitative measurements of serum insulin, C-peptide 2 and blood glucose concentration through week 12 post-surgery. On both weeks 8 and 12, the insulin and C-peptide 2 concentrations in the islet/scaffold group were consistently higher than those in the islet group. Correspondingly, the blood glucose in the islet/scaffold group was consistently lower than that in the islet group in the same time window. These results speak to the fact that the scaffold promoted the functional output of transplanted islets. In addition, the OGTT result confirmed that transplanted islets were able to effectively buffer a sudden glucose challenge. Moreover, two weeks after the transplanted islets were removed, the blood glucose in the islet and islet/scaffold groups increased to a level comparable to those in the sham and scaffold groups, evidencing that the transplanted islets were the effective regulator of blood glucose. This phenomenon can be attributed to the fact that islets pre-seeded on the scaffold enjoyed rapid growth after the transplantation, which translated into an increased functional output. In patients with advanced diabetes, the kidney suffers from edema in the cortex with glomerular basement membrane thickening, leading to an enlarged kidney. This is generally believed to compensate for the reduced filtering capacity of the kidney in diabetic patients. Our study demonstrated that the electrospun scaffold could prevent the pathological enlargement of glomeruli and kidney by retaining the function of transplanted islets, which pre-empted the hyperglycemia. The increased insulin secretion by islets delivered with the scaffold depressed the glucose level, which greatly alleviated the stress on renal tissues. This achievement was documented by the most effective control of creatinine in the blood and urine in the islet/scaffold group.
The results showing that mice from the islet/scaffold group outperformed the other three groups in regulating creatinine concentrations speak to the fact that the electrospun scaffold was a convenient way to boost the effectiveness of islet transplantation and thus contribute to the preservation of renal functions in diabetic patients.
Our discovery that MCP-1 spiked most in the islet/scaffold group on weeks 4 and 8, exceeding its peers in the other three groups, led us to reconsider its role in diabetes. It has long been believed that an increase of MCP-1 in diabetic patients is caused by renal inflammation that gradually jeopardizes the kidney [29-31]. In addition, a previous study also showed that transplanted islets increase the MCP-1 concentration [10]. Interestingly, regenerative medicine research suggests that MCP-1 is up-regulated in tissue regeneration and accumulates at injured vascular sites to initiate the cascade of immuno-responses that governs the ultimate tissue re-building [32]. Based on these findings, we tested whether the administration of MCP-1 could alleviate the pathological stress exerted on renal tissues by diabetes. Our results showed that the administration of MCP-1 along with insulin into diabetic mice with moderate kidney failure restored the blood glucose, creatinine, urea nitrogen and urine albumin to physiological levels. In particular, the creatinine concentrations in both blood and urine from the insulin/MCP-1 group were significantly lower than those from the insulin group, suggesting that MCP-1 more potently recouped renal functions. Moreover, mice that received MCP-1 demonstrated a lesser degree of renal compromise, as evidenced by the lack of glomerular enlargement. These results might be attributed to an MCP-1-mediated reno-vascular regeneration that accounted for the increased filtering capacity of glomeruli. The administration of MCP-1 together with insulin depressed the creatinine, blood urea nitrogen and urine albumin concentrations to levels comparable to those of diabetic mice that underwent islet transplantation with the scaffold. This suggests that a combination of the scaffold and MCP-1 might be able to boost the functional output of transplanted islets while preserving renal functions.

Figure 4. The histology showed significant edema in the kidney cortex in mice from the sham and scaffold groups, which resulted in an increase of glomerular area and overall kidney weight. On the contrary, transplanted islets in mice from the islet and islet/scaffold groups largely prevented the edema. The glomerular area in mice from the islet/scaffold group was smaller than that from the islet group, suggesting that the increased functional output of islets delivered by the scaffold better protected renal tissues. The significant glomerular enlargement in mice from the sham and scaffold groups could be attributed to compensation for compromised renal functions, which is typically observed in diabetic patients. A star indicates a statistical difference between groups connected by a hanging bar. doi:10.1371/journal.pone.0057635.g004

Figure 5. Plasma and urine protein concentrations following the islet transplantation. The concentrations of blood creatinine (A), urine creatinine (B), blood urea nitrogen (C) and urine albumin (D). All four proteins in mice from the sham and scaffold groups underwent a significant increase through week 12, suggesting the loss of renal functions. In contrast, transplanted islets in mice from the islet and islet/scaffold groups successfully contained these protein concentrations through week 12. In particular, on week 12 the blood and urine creatinine concentrations were significantly lower in mice from the islet/scaffold group than in those from the islet group, suggesting that the scaffold provided a long-term benefit for islet transplantation. A star indicates a statistical difference among groups at respective time points (n = 5 in all panels). The blue line (islet group) was masked by the teal line (islet/scaffold group) in panel D because the readouts were nearly identical. doi:10.1371/journal.pone.0057635.g005

Figure 6. The concentration of immunological cytokines following the islet transplantation. Concentration of MCP-1 (A), IL-6 (B) and IFN-γ (C). The MCP-1 concentrations in the sham and scaffold groups steadily grew over 12 weeks, suggesting an increase of renal inflammation. In contrast, in the islet and islet/scaffold groups MCP-1 saw a temporary increase on weeks 4 and 8, which could be attributed to the transplanted islets. A significant difference was observed between the islet and islet/scaffold groups on weeks 4 and 8. Thereafter, the MCP-1 concentration declined by week 12, which might be due to down-regulation from restored renal functions. No difference in IL-6 or IFN-γ was observed among groups through week 12. A star denotes a statistical difference among groups at respective time points (n = 5 in all panels). doi:10.1371/journal.pone.0057635.g006
Conclusion
In this study we pioneered the employment of an electrospun scaffold to boost the functional output of transplanted autologous murine islets to treat type I diabetes. The in vitro results confirmed that the scaffold possessed no cytotoxicity, promoted islet proliferation and supported the secretion of insulin for clinical applications. The in vivo results strongly evidenced that the scaffold promoted the growth of transplanted islets in STZ-diabetic mice and restored the insulin level in the blood, which effectively drove down the blood glucose concentration. As a result, renal function was maximally preserved. The mechanistic investigation held the increase of MCP-1 due to transplanted islets responsible for this improvement. These prominent results afford physicians a simple yet convenient alternative to traditional islet transplantation and shed new light on the therapeutic use of MCP-1 to relieve kidney failure in diabetic patients. Having confirmed MCP-1 as a promising pharmaceutical candidate, we are now actively investigating how to engineer an MCP-1-eluting scaffold to simultaneously boost the function of transplanted islets and preserve renal functions.

Table S1. Concentrations of plasma and urine proteins in non-diabetic C57BL/6 mice (age and sex matched).
Application of NMR Relaxometry to Study Nanostructured Poly(vinyl alcohol)/MMT/Cephalexin Materials for Use in Drug Delivery Systems
Polymers containing nanoparticles dispersed and distributed in the matrix can be used to control drug release. In this work, hydrophilic matrix systems were prepared using poly(vinyl alcohol) and unmodified clay containing the same amount of cephalexin. The materials were obtained through in situ polymerization and were characterized by conventional FTIR spectroscopy and by NMR relaxometry, through determination of the proton spin-lattice relaxation time, in order to understand the molecular behavior of the new materials. The NMR relaxometry data showed that the new materials containing low quantities of clay (0.25% and 0.75%) and the same amount of cephalexin (0.5 g) had very good dispersion and distribution of the clay and drug in the polymer matrix. The combination of clay and cephalexin formed a more homogeneous material with a narrow domain curve and low relaxation values. The material containing 0.25% clay presented a mixed morphology, part exfoliated and part intercalated, as could be seen from the relaxation domain distribution, which was broader than that of the material with 0.75% clay.
Introduction
The generation of nanostructured systems for application in the pharmaceutical industry has been the subject of many studies around the world, particularly to develop drug delivery systems [1-3].
Many types of matrices are used in drug delivery systems. If the polymer matrix is insoluble, an inert matrix with a porous structure is formed, from which the drug can be dispersed; however, the material needs to maintain the same apparent surface area during dissolution and diffusion for these factors to be controlled [1]. The properties of hydrophobic matrices therefore depend on both the drug and the excipient used, which determine the release mechanism, occurring by erosion or by diffusion through the pores. However, if the matrix formulation contains a hydrophilic polymer, it can provide an appropriate combination of swelling, dissolution or erosion mechanisms. The predominance of one of these mechanisms invariably depends on the properties of the polymer used in the system, among which poly(vinyl alcohol) (PVAL) stands out. In this work we prepared a hydrophilic nanostructured system using poly(vinyl alcohol), because this polymer has very good characteristics for use as a hydrophilic matrix for delivery of hydrophilic drugs like cephalexin.
Poly(vinyl alcohol) is a water-soluble synthetic polymer that is used in papermaking, textiles and a variety of coatings. It is resistant to oil, grease and solvents. It also has high tensile strength and flexibility, as well as high oxygen and aroma barrier properties. However, these properties are dependent on humidity: with higher humidity, more water is absorbed. The water, which acts as a plasticizer, then reduces the material's tensile strength but increases its elongation and tear strength. PVAL has a melting point of 230 °C [4].
Cephalexin is a semisynthetic cephalosporin antibiotic for oral administration. It is indicated for the treatment of respiratory tract infections; otitis media; skin and soft tissue infections; bone and joint infections; genito-urinary infections, including acute prostatitis; and dental infections. Cephalexin is active in vitro against the following organisms: β-haemolytic streptococci; staphylococci, including coagulase-positive, coagulase-negative and penicillinase-producing strains; Streptococcus pneumoniae; Escherichia coli; Proteus mirabilis; Klebsiella species; Haemophilus influenzae; and Branhamella catarrhalis. Most strains of enterococci (Streptococcus faecalis) and a few strains of staphylococci are resistant to cephalexin. The drug is inactive against most strains of enterobacteria, Morganella morganii, Proteus vulgaris, Clostridium difficile, and Legionella, Campylobacter, Pseudomonas and Herellea species. When tested by in vitro methods, staphylococci exhibit cross-resistance between cephalexin and methicillin-type antibiotics [5].
The main objective of this work was to prepare hydrophilic matrix systems using poly(vinyl alcohol), unmodified clay and cephalexin. The second objective was to evaluate the interaction among the components and the dispersion and distribution of clay and cephalexin in the polymer matrix, focusing on the characterization of the new materials, especially by nuclear magnetic resonance (NMR) relaxometry.
NMR relaxometry was chosen because it involves the measurement of proton relaxation times, which are sensitive to changes in molecular mobility. As a consequence, the proton spin-lattice relaxation time indicates the morphology of the material, such as intercalated and exfoliated structures in the case of clay nanoparticles [6-10]. Generally speaking, a decrease in the proton spin-lattice relaxation time reflects exfoliation of the clay lamellae, because the paramagnetic metals present in the clay structure interfere with the relaxation of nearby protons. The opposite effect on the proton relaxation time is expected for samples containing polymer chains intercalated between the clay lamellae, because of the restriction of their molecular mobility [7-12].
Sample Preparation
The PVAL/clay nanocomposites were prepared by in situ polymerization, with varied clay content (0.25%, 0.75%, 1.0%, 1.5% and 3.0%). The drug was also added during the in situ polymerization. PVAL/clay/cephalexin nanocomposites containing the five different amounts of clay and the same cephalexin content were thereby obtained.
FTIR Measurements
The infrared spectra of the nanomaterial films were recorded with a Varian Excalibur 3100 FTIR spectrometer with a Pike Technologies MIRacle ATR accessory, with 100 scans and a resolution of 4 cm⁻¹, over the 400-4000 cm⁻¹ wavenumber range.
NMR Relaxometry
All NMR relaxation measurements were carried out with a Maran Ultra 23 spectrometer (Oxford Instruments, UK), operating at 23 MHz (for protons) and equipped with an 18 mm NMR tube. Proton spin-lattice relaxation times (T1H) were determined by the inversion-recovery pulse sequence (recycle delay–180°–τ–90°–data acquisition). The temperature was 27 °C and τ ranged from 0.1 to 10,000 ms, with a recycle delay of 5 s. The 90° pulse was automatically calibrated; the relaxation values were obtained by fitting with WINFIT, while the domain distributions were obtained with the WINDXP software supplied with the spectrometer.
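As an illustration of how T1H can be extracted from such measurements (a minimal sketch, not the WINFIT workflow itself; the delays and signal values are synthetic), the standard inversion-recovery model can be fitted with SciPy:

```python
import numpy as np
from scipy.optimize import curve_fit

def inversion_recovery(tau, M0, T1):
    """Longitudinal magnetization after an inversion-recovery
    (180°–τ–90°) sequence, assuming perfect inversion."""
    return M0 * (1.0 - 2.0 * np.exp(-tau / T1))

# Illustrative delays (ms) and synthetic signal for a T1 of ~300 ms
tau = np.logspace(-1, 4, 40)
rng = np.random.default_rng(0)
signal = inversion_recovery(tau, 100.0, 300.0) + rng.normal(0, 1.0, tau.size)

(M0_fit, T1_fit), _ = curve_fit(inversion_recovery, tau, signal,
                                p0=(signal.max(), 100.0))
print(f"T1H ≈ {T1_fit:.0f} ms")
```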
FTIR Analysis
Figure 1 shows the structure of cephalexin. The major absorption bands in the FTIR spectrum of cephalexin are observed at about 3500 cm⁻¹, attributed to the hydroxyl groups; at 3000 cm⁻¹, related to the methyl group; and as an intense band between 1800 and 1700 cm⁻¹, corresponding to the carbonyl group of the beta-lactam ring.
Figures 2-6 show the FTIR spectra of the PVAL/Clay/cephalexin systems containing different amounts of clay but the same amount of cephalexin. These samples present characteristics of both the PVAL and cephalexin compounds: the peak located at 3420 cm⁻¹ is associated with the hydroxyl group of PVAL. The presence of wide bands corresponding to the axial deformation of OH (alcohol), located in the 3550-3200 cm⁻¹ range, indicates hydrogen bonding, while the absence of strong absorption in the 3650-3584 cm⁻¹ range, characteristic of free OH, indicates a high concentration of intermolecularly linked hydroxyls. A broad band between 3300 and 2500 cm⁻¹, characteristic of the axial deformation of free carboxylic acid OH (cephalexin), is present in the spectra. We also noted the absence of an aliphatic C=O absorption band between 1750 and 1753 cm⁻¹. The interpretation of the spectra indicates that there was negligible interaction between the polymer matrix and the drug (cephalexin).
NMR Relaxometry Measurements
Table 1 shows the proton spin-lattice relaxation data of the PVAL/Clay/cephalexin systems. It can be seen that increasing the clay concentration from 0.25% to 3.0% caused the samples to present distinct behaviors. The proton relaxation times decreased very quickly with the addition of only 0.25% clay, revealing the formation of a mixed nanostructured material, part exfoliated and part intercalated. The addition of 0.75% clay caused a greater decrease in the relaxation time, derived from the increase in the exfoliated fraction of the new material. This behavior comes from the more exfoliated domain, which causes an increase in the chains' molecular mobility, since they are free to move around between the clay lamellae. As the amount of clay increased further, the relaxation time tended to remain the same or increase slightly. The results revealed that all samples presented good exfoliation. Therefore, the addition of only 0.25% clay caused a dramatic change in the relaxation parameter, while a small increase in the quantity to 0.75% caused another decrease in this parameter, showing that both systems underwent changes in the molecular mobility of the PVAL/Clay/cephalexin chains because of the better dispersion and distribution of clay in the polymer matrix. Thus, the addition of 0.25% clay promoted a substantial change in the PVAL/Clay/cephalexin organization, while the addition of 0.75% clay generated materials with a higher proportion of exfoliation. The decrease in the relaxation times is related to the freedom of movement of the chains between the clay lamellae, and only exfoliation of the clay makes this possible. This behavior has already been seen for other systems [6-10], corroborating the data obtained for the system analyzed here.
The relaxation time data also support the behavior observed in the Fourier-transform infrared spectra.
The domain distribution curves of the relaxation parameters determined for the PVAL/Clay/cephalexin systems are shown in Figure 7. From the behavior of these curves, we can obtain information on the samples' homogeneity.
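For readers unfamiliar with how such a domain distribution is produced from raw inversion-recovery data, the sketch below approximates what dedicated packages such as WINDXP do, using a Tikhonov-regularized non-negative fit over a grid of candidate T1 values; the kernel, grid, regularization weight and simulated two-component data are all illustrative assumptions, not the vendor's algorithm:

```python
import numpy as np
from scipy.optimize import nnls

# Grid of candidate T1 values (log-spaced, ms) and recovery delays (ms)
T1_grid = np.logspace(-1, 4, 100)
tau = np.logspace(-1, 4, 50)

# Inversion-recovery kernel: column j is the recovery curve of a single
# T1 component, K[i, j] = 1 - 2*exp(-tau_i / T1_j)
K = 1.0 - 2.0 * np.exp(-tau[:, None] / T1_grid[None, :])

# Simulated magnetization data for a two-component sample (illustrative)
true = 0.7 * (1 - 2*np.exp(-tau/50.0)) + 0.3 * (1 - 2*np.exp(-tau/800.0))
data = true + np.random.default_rng(0).normal(0, 0.01, tau.size)

# Tikhonov-regularized non-negative fit: extra lambda*I rows damp noise
lam = 0.1
K_reg = np.vstack([K, lam * np.eye(T1_grid.size)])
d_reg = np.concatenate([data, np.zeros(T1_grid.size)])
amplitudes, _ = nnls(K_reg, d_reg)   # non-negative T1 domain distribution
```

A broad distribution of `amplitudes` over `T1_grid` corresponds to a heterogeneous sample, while a narrow one corresponds to a homogeneous domain, which is the reading applied to Figure 7 below.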
In Figure 7, the polymer itself shows a narrow domain distribution curve, which is in accordance with the chains' organization, since its molecular organization is high. As the clay and drug were added in the polymerization process, the new materials presented different molecular organization due to the new intermolecular interactions. The addition of 0.25% clay caused a large change in the molecular organization, seen in the widening of the domain curve baseline; this amount of clay caused some heterogeneity in the polymer nanostructure organization, probably due to the formation of a mixture of systems, part exfoliated and part intercalated in the clay structures. The addition of 0.75% clay produced a narrower domain curve than 0.25% clay, indicating that at this proportion a more homogeneous domain was formed, probably with a larger quantity of exfoliated clay lamellae. With the addition of 1.0% and 1.5% clay, the domain curves show the same behavior and are wider than with 0.75% clay, suggesting that a more heterogeneous mix of systems was formed. With the addition of 3% clay, an even more heterogeneous system was formed than with 1% and 1.5%. The behavior of the domain curve distributions corroborates the relaxation time results.
Figure 8 shows the behavior of the proton spin-lattice relaxation time with increasing clay content. There was a sharp decrease in the relaxation time for the smallest amount of clay (0.25%), followed by another decrease after the addition of 0.75%, after which the values remained the same with increasing clay content up to 3%. This behavior shows that the relaxation data depend on the clay proportion up to 0.75% clay addition; for the other quantities, no influence on this parameter was observed.
NMR relaxometry was very effective in analyzing the PVAL/Clay/cephalexin systems, making it a good alternative technique to evaluate polymer/nanoparticle systems, since it allows the sample to be observed in various modes, and the results obtained can explain the molecular dynamics of the sample and indicate its mode of organization. This technique can also support the data obtained from other techniques commonly used to evaluate nanomaterials, and therefore gives a good response regarding the dispersion and distribution of nanoparticles.
Conclusions
A series of nanocomposite materials composed of layers of montmorillonite (MMT) clay dispersed in PVAL can be prepared via in situ polymerization of vinyl acetate monomer, followed by hydrolysis with NaOH solution. The system investigated shows a marked degree of exfoliation, which demonstrates the possibility of controlling drug delivery by varying the clay content, since the clay insertion alters the degree of crystallinity of PVAL, while there is little interaction of the drug with the polymer matrix, meaning it is possible to modulate the rate of delivery according to the structural arrangement.
The relaxation measurements provide relevant information: a profound structural modification of the polymer matrix is observed for the samples containing 0.25% clay in PVAL.
Figure 2. FTIR spectrum of the system formed by PVAL/Clay/cephalexin containing 0.25% clay and 0.5 g of cephalexin.

Figure 3. FTIR spectrum of the system formed by PVAL/Clay/cephalexin containing 0.75% clay and 0.5 g of cephalexin.

Figure 4. FTIR spectrum of the system formed by PVAL/Clay/cephalexin containing 1.0% clay and 0.5 g of cephalexin.

Figure 5. FTIR spectrum of the system formed by PVAL/Clay/cephalexin containing 1.5% clay and 0.5 g of cephalexin.

Figure 6. FTIR spectrum of the system formed by PVAL/Clay/cephalexin containing 3.0% clay and 0.5 g of cephalexin.

Figure 8. The behavior of T1H values versus clay quantity.

Table 1. T1H values for the PVAL/Clay/cephalexin systems.
Structural, Morphological and Vibrational Properties of Titanate Nanotubes and Nanoribbons
This work reports a study of the structural, morphological and vibrational properties of titanate nanotubes and nanoribbons obtained from two-dimensional structures through hydrothermal treatment of TiO2 in aqueous NaOH solution. The physical properties of these nanostructures, as-prepared and thermally treated, are discussed and compared with those of their bulk counterparts (Na2Ti3O7 and Na2Ti6O13). The results obtained by several characterization techniques allow us to conclude that the walls of the as-prepared nanotubes and nanoribbons are isostructural with the Na2Ti3O7 compound. However, in the nanotubes the chemical bonds are deformed because of the curvature of the walls, while in the nanoribbons the layers present only a structural disorder caused by size effects. The thermal characteristics of the nanoribbons are similar to those observed for the nanotubes, in which the structural and morphological changes lead to the formation of large rods (bulk) with a mixture of the Na2Ti3O7 and Na2Ti6O13 phases. We conclude that the titanate nanotubes and nanoribbons have the same chemical composition, Na2−xHxTi3O7·nH2O (0 ≤ x ≤ 2). We can also suggest that Raman spectroscopy can be used for easy and rapid identification of the morphological and structural changes of titanate nanostructures.
Introduction
Intensive investigations have been carried out on the physical and chemical properties of inorganic nanosized materials in recent years. [1] Researchers are realizing that one-dimensional nanostructures made from inorganic materials have intriguing properties different from those of carbon nanostructures and thus have a variety of potential applications. [2-7] Titanate nanotubes have also been tested for use as catalysts in heterogeneous photocatalysis and have shown excellent performance for degrading textile dyes, which makes them very important ecomaterials. [8,9] Titanate nanotubes and nanoribbons have mostly been prepared through hydrothermal treatment of TiO2 powders in aqueous NaOH solutions. [10] This method is very simple, inexpensive and efficient for obtaining samples with good morphological yields. [14-16] On the other hand, there is a strong controversy regarding the composition and the microscopic formation mechanism whereby nanostructures with different morphologies, such as nanotubes, nanorods, nanofibers or nanoribbons, are prepared using a similar hydrothermal process. [17-22] A new chemical composition was proposed for the as-prepared titanate nanotubes. [19] The composition Na2−xHxTi3O7·nH2O (0 ≤ x ≤ 2), where x depends on the washing conditions, was recently proposed based on chemical reactions and thermal decomposition properties. [19,23] Furthermore, the nanotube walls and nanoribbons should have atomic structures similar to those observed in the layers of Na2Ti3O7 trititanate bulk. [19,23,24] The nanoribbons would be formed through cutting of layers between the (100) and (010) planes, followed by their stacking in the (001) direction. [24]

In this paper, we present studies of the structural, morphological and vibrational properties of different titanate nanostructures (nanotubes and nanoribbons). Based on the results obtained by several techniques, we show that both the morphology and the structure of the titanate nanotubes (hereafter NTTiOx) and nanoribbons (hereafter NRTiOx) prepared by Kasuga's method can be well characterized by vibrational spectroscopy. [10] Based on chemical analysis and thermal treatment results, we show that, similar to NTTiOx, the NRTiOx walls have a Na2−xHxTi3O7·nH2O chemical composition and that the symmetry of the atomic arrangements in the layers is similar to that of Na2Ti3O7 trititanate bulk. [19] Temperature-dependent data indicate that the atomic layers of the nanoribbons undergo transitions to a mixture of the Na2Ti3O7 and Na2Ti6O13 bulk phases. This structural change is similar to that observed for Na2Ti3O7 bulk when thermally treated under the same conditions. [25]
Experimental
All chemicals (reagent grade, Aldrich, Merck or Baker's Analyzed) were used as received, without further purification. All solutions were prepared with deionized water.
Nanosized titanates preparation
Titanate nanotubes were prepared as described previously. [19] In a typical synthesis, 2.00 g (25.0 mmol) of TiO2 (anatase) was suspended in 60 mL of 10 mol L⁻¹ aqueous NaOH solution for 30 min. The white suspension formed was transferred to a 90 mL Teflon-lined stainless steel autoclave and kept at 165 ± 5 °C for 170 h. After cooling to room temperature, the resulting white solid was washed several times with deionized water until pH 11-12 (NTTiOx). For nanoribbon preparation, the same procedure was carried out, except that the temperature was kept at 190 ± 5 °C (NRTiOx). After cooling to room temperature, the resulting white solid was washed several times with deionized water until pH 11-12. Both samples were dried at 60 ± 10 °C for 24 h.
Bulk titanate preparation
The bulk samples of Na2Ti3O7 and Na2Ti6O13 were synthesized by solid-state reaction from stoichiometric amounts of Na2CO3 and TiO2 (anatase), in molar ratios of 1:3 and 1:6, respectively, followed by thermal treatment in a tubular furnace at 800 °C for 20 h in static air.
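As a convenience for reproducing the stoichiometric weighing (a hypothetical helper, not part of the original procedure), a short Python sketch computing the Na2CO3 and TiO2 masses for a target titanate batch:

```python
# Molar masses in g/mol (standard values)
M = {"Na2CO3": 105.99, "TiO2": 79.87,
     "Na2Ti3O7": 301.59, "Na2Ti6O13": 541.20}

def batch(target: str, grams: float) -> tuple[float, float]:
    """Return (g Na2CO3, g TiO2) for a target mass of titanate.
    Reaction: Na2CO3 + n TiO2 -> Na2TinO(2n+1) + CO2, n = 3 or 6,
    matching the 1:3 and 1:6 molar ratios used above."""
    n = 3 if target == "Na2Ti3O7" else 6
    mol = grams / M[target]
    return mol * M["Na2CO3"], mol * n * M["TiO2"]

print(batch("Na2Ti3O7", 5.0))   # ≈ (1.76 g Na2CO3, 3.97 g TiO2)
```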
Thermal treatment of the nanoribbons
As-prepared nanoribbons (NRTiOx) were thermally treated in the 100-1000 °C temperature range in static air using a muffle-type furnace. Each sample was kept at the given temperature for 1 h.
Characterization techniques
Determination of Na and Ti was carried out by inductively coupled plasma optical emission spectrometry (ICP OES), using a Perkin Elmer Optima 3000 DV, after dissolution of the sample in 0.1 mol L⁻¹ HCl. Transmission electron microscopy (TEM) images were obtained using a Carl Zeiss CEM-902 setup. Scanning electron microscopy (SEM) images were obtained using a JEOL 6360LV instrument. Energy dispersive X-ray spectroscopy (EDS) data were collected using a Noran System SIX (Thermo Electron Corporation, model 6714A01SUS-SN) probe attached to the scanning electron microscope. X-ray powder diffraction (XRD) patterns were obtained with a Shimadzu XRD7000 diffractometer, using Cu Kα (λ = 1.5406 Å) radiation operating at 30 mA and 40 kV. A scan rate of 1° min⁻¹ was employed. Fourier transform infrared (FTIR) spectra were recorded using the KBr pellet technique (from 370 up to 1000 cm⁻¹) or Nujol mulls between CsI windows (from 250 up to 370 cm⁻¹) on a Bomem FTLA 2000 spectrometer. A total of 32 scans and a resolution of 4 cm⁻¹ were employed to obtain spectra with good signal-to-noise ratios. Ex situ Raman spectra were obtained on a Renishaw System 3000 Raman imaging microscope using a He-Ne laser (632.8 nm). In situ Raman spectra were collected using a Jobin-Yvon T64000 spectrometer equipped with a cooled charge-coupled device (CCD) detector, using a cw Ar⁺-ion laser (514.5 nm). Low laser power density was used in order not to overheat the samples. A spectral resolution of 2 cm⁻¹ was used and measurements were performed in a backscattering geometry. DTA-TGA analyses were carried out using a TA Instruments SDT Q600 over the 25-1000 °C temperature range with a heating rate of 10 °C min⁻¹ under an air flow of 100 mL min⁻¹.
Structure, composition and morphology
Figures 1a and 1b show TEM images of the as-prepared NTTiOx and NRTiOx, respectively. Both samples were prepared via hydrothermal treatment of anatase TiO2 powder in a strongly alkaline (NaOH) environment. Figure 1a shows that the NTTiOx have a tubular morphology; they are multi-walled with an average outer (inner) diameter of 9 nm (5 nm) and a length of several tens of nm. The NTTiOx are open-ended (see inset to Figure 1a) with a uniform diameter distribution. In Figure 1b we can observe that the NRTiOx samples have a ribbon-like morphology. The NRTiOx are long, uniform and multi-walled, with a typical average width of approximately 100 nm and a length of a few micrometers. The nanoribbons present a wide width distribution, varying from 20 to 200 nm.
In order to compare the morphological aspects of the nanostructured titanates and their bulk counterparts, we show in Figures 2a and 2b SEM images of the as-prepared sodium trititanate (Na2Ti3O7) and hexatitanate (Na2Ti6O13), respectively. Both samples have the same morphology and an average particle size of 300 nm with a very large size dispersion.
The chemical compositions of the NTTiOx and NRTiOx samples were investigated by EDS and ICP OES. Table 1 shows the Na/Ti molar ratio and the compositions of the samples based on these analyses. The Na/Ti ratios found in both NTTiOx and NRTiOx samples probed by EDS are in good agreement with the proposed Na2−xHxTi3O7 chemical composition. [19] The chemical analysis results for the NTTiOx and NRTiOx samples indicated a Na/Ti ratio that is consistent with the composition of Na2Ti3O7 trititanate rather than bulk Na2Ti6O13 hexatitanate.
In Figure 3, we show the XRD patterns of the bulk (as-prepared Na2Ti3O7 and Na2Ti6O13) and nanostructured (as-prepared NTTiOx and NRTiOx) titanate samples. [27,28] In the XRD pattern of Na2Ti3O7 (curve c), the presence of small amounts of TiO2 (remaining from the initial solid reaction) and Na2Ti6O13 (produced by thermal decomposition of Na2Ti3O7) was observed. The XRD pattern of NTTiOx (curve a) is close to that observed by Chen et al., [16] with some discrepancies, suggesting a crystalline structure closer to that of H2Ti3O7. Nevertheless, we can suggest that NTTiOx has a structure similar to Na2Ti3O7 because of the high sodium concentration cited in Table 1.
The XRD pattern of NRTiOx (curve b) shows typical peaks that agree well with the XRD pattern of Na2Ti3O7 (curve c), given some assumptions. The differences observed in the relative intensities of the diffraction peaks for the NRTiOx and Na2Ti3O7 bulk samples can be related to both texture and size-induced effects. The morphology of the NTTiOx and NRTiOx samples (nanometric thickness) would prevent the observation of some diffraction peaks owing to the breaking of long-range order. The morphology of NRTiOx (large aspect ratio because of the needle shape) would certainly make the powder-like sample more texturized than the bulk, where the particles exhibit a more spherical shape. Note the possibility that some growth planes appear with greater intensity than in the Na2Ti3O7 bulk. Regarding NTTiOx, size-induced effects are expected to be more pronounced in a long-range probe technique such as X-ray diffraction. The nanometric size is responsible for broadening the diffraction peaks, and only the most intense peaks observed in the bulk structure can be clearly observed and identified in the XRD patterns of the NTTiOx and NRTiOx samples. Furthermore, the peaks close to 10° are shifted towards lower 2θ values, indicating an increase in the interlayer distances because of curvature effects (in NTTiOx) and the ionic balance in the layers. In the case of the NTTiOx samples, the curvature effects induce a unit cell distortion, which is partly responsible for the asymmetrical broadening of the peaks as compared with bulk titanates. [29]
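As a rough check on the interlayer spacing inferred from the low-angle peak (treating the reflection near 2θ = 10° as the interlayer reflection, an assumption made here for illustration), Bragg's law with the Cu Kα wavelength quoted above gives

```latex
d = \frac{\lambda}{2\sin\theta}
  = \frac{1.5406\ \text{\AA}}{2\sin(5^{\circ})}
  \approx 8.8\ \text{\AA}.
```

On the same assumption, a shift of the peak to lower angles (e.g., 2θ = 9°) corresponds to d ≈ 9.8 Å, i.e., an expanded interlayer distance, consistent with the shift discussed above; conversely, a shift toward larger 2θ, as observed after water removal, implies interlayer contraction.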
Vibrational properties
In order to study the vibrational properties of the titanate nanostructures, we first review the group theory analysis of the vibrations of Na2Ti3O7 (monoclinic, space group P2₁/m) with two formula units per unit cell (Z = 2), whose vibrational mode distribution is 12 Ag + 18 Au + 12 Bg + 18 Bu. Both Ag (Au) and Bg (Bu) are Raman (IR) active representations.
In Figure 4, we show FTIR spectra of the titanate (bulk and nanostructured) samples. The FTIR spectra of Na2Ti3O7 (curve c) and Na2Ti6O13 (curve d) are distinctly different from each other, and they are in agreement with previous reports. [25,30,31] The FTIR spectrum of Na2Ti3O7 shows eight bands and two shoulders (408 and 445 cm⁻¹) which can be assigned (in order of increasing wavenumber) to lattice modes and Na-O-Ti bonds (below 300 cm⁻¹) and TiO6 octahedron modes (in the 300-1000 cm⁻¹ range). Besides having lower intensity, the spectrum of Na2Ti6O13 shows five broad bands whose assignment is similar to that of Na2Ti3O7. It is clear that the number of vibrational bands observed for Na2Ti3O7 is larger than for Na2Ti6O13, in agreement with the group theory prediction. The FTIR spectrum of NTTiOx (curve a) is characterized by three broad bands located at about 287, 470 and 895 cm⁻¹ and two shoulders at 340 and 520 cm⁻¹. The spectrum of NRTiOx (curve b) shows five bands located at about 297, 338, 465, 673 and 905 cm⁻¹ and shoulders at 780 and 845 cm⁻¹. The NTTiOx and NRTiOx bands are very close in energy to those of bulk Na2Ti3O7, except for the band located at about 338 cm⁻¹ (for NRTiOx). The differences between the nanoparticles (NTTiOx and NRTiOx) and Na2Ti3O7 can be attributed to the diameter of a few nanometers, the curvature effects, the unit cell distortion in the [010] direction and the stress placed on the surface of the nanometric particles. The FTIR bands of NTTiOx and NRTiOx are close in energy to those of the trititanate spectrum, but a structure similar to the Na2Ti3O7 bulk cannot be assigned on the basis of FTIR measurements alone. [11,14,32]

In Figure 5, we show Raman spectra for both the nanostructured and bulk titanate samples. The Raman spectra of the nanostructured samples are much weaker in intensity than those of the bulk samples. [34,35] Again, the relative number of modes observed for Na2Ti3O7 and Na2Ti6O13 is in agreement with the group theory predictions. We observed that NTTiOx and NRTiOx have different Raman spectra with respect to the number of bands, but they are similar with regard to the positions of the bands. The differences can be understood through the following statements. First, the bands are broader for NTTiOx, and this can be understood in terms of size-induced effects, which break the q ≈ 0 momentum conservation rule and allow phonons from the interior of the Brillouin zone to contribute to the Raman response. Second, disorder and curvature effects, which induce unit cell distortion, would also contribute to such broadening. Based on Raman spectroscopy and on these statements, we can state that the structures of NTTiOx and NRTiOx are similar, differing only in the definition of their Raman bands. The Raman spectrum of NTTiOx exhibits vibrational frequencies at about 156, 193 and 276 cm⁻¹, which can be assigned to lattice modes and Na-O-Ti modes. [37-42] In the Raman spectrum of NRTiOx the peaks are better defined than those of NTTiOx because of their larger size and lesser distortion of the layers (no curvature effect), thus showing a better atomic ordering in the layers. Unlike NTTiOx, the Raman spectrum of NRTiOx presents more vibrational peaks, which is related to the better crystallinity, allowing more modes to be resolved in the Raman spectrum. The assignment of the modes is similar to what was proposed for NTTiOx. The Raman mode at about 920 cm⁻¹, which is assigned to terminal Ti-O bonds, is also observed for NRTiOx. The relative intensity of this mode for both NRTiOx and NTTiOx is higher than for bulk Na2Ti3O7, which is consistent with the morphology, since low-dimensional structures have larger surface-to-volume ratios. [36,38,41] Furthermore, this band is more pronounced for NTTiOx than for NRTiOx, indicating that, for the nanotubes, there are more terminal Ti-O bonds because they can be directed either inward or outward from the layer. We cannot determine with good precision the structure of NTTiOx and NRTiOx through vibrational spectroscopy at room temperature. Thus, in the next section we study the behavior of NRTiOx through its thermal decomposition, so that the structural changes upon thermal treatment can be determined with greater precision.
Thermal decomposition behavior
In Figure 6a, we show the in situ Raman spectra of NRTiOx with the temperature varying from room temperature (RT) up to 550 °C. Changes in the Raman bands of NRTiOx are observed for temperatures close to 200 °C: two well-resolved bands (175 and 200 cm-1) become broader, collapsing into an asymmetric band. These results point to structural water release, which is consistent with DTA-TGA measurements (not shown here). On further increasing the temperature, we note gradual broadening of the Raman bands. Close to 300 °C we observe that the structural water release has induced structural disorder in the system, which can be associated with the onset of a gradual phase transformation at higher temperatures. It is clear that heating to 550 °C introduced changes in the sample structure. At the top of Figure 6a we show the Raman spectrum of NRTiOx at room temperature after cooling (RT cooled). The structure of the sample has clearly been modified, because the peaks marked with arrows in the spectrum of the as-prepared sample are absent in the annealed sample. This issue is further addressed by the ex situ Raman data, for which the samples were treated at higher temperatures, as we discuss next.
In Figure 6b, we show ex situ Raman data for samples annealed in the 100 to 1000 °C temperature range. We again observe changes in the Raman bands at an annealing temperature close to 200 °C, related to structural water release; this water release probably produces changes in the structure of the affected samples. For annealing in the 300-600 °C range we observe only slight changes, which can be related to the release of hydroxyl groups adsorbed on the surface of the titanate nanoribbons; the same slight changes were observed for titanate nanotubes.19 In the Raman bands of the sample heated close to 800 °C we notice a remarkable change in the spectrum (marked by up arrows), indicating a partial phase transformation to bulk Na2Ti6O13. In the same temperature range it was difficult to observe the emergence of bands related to bulk Na2Ti3O7 because of the superposition of the bulk bands. The spectral features of bulk Na2Ti6O13 and Na2Ti3O7 gain intensity close to 1000 °C, indicating that the NRTiOx were converted to bulk structures. This is further confirmed by the disappearance of the 920 cm-1 peak (see down arrows), which is a spectral signature of the NRTiOx morphology. This peak can be used to identify the titanate morphology and its changes upon thermal treatment.
In order to support this conclusion, we also investigated the thermal decomposition behavior of NRTiOx using XRD. The X-ray diffraction patterns shown in Figure 7 clearly reveal the release of structural water from the NRTiOx structure at about 200 °C. Water removal in the samples treated at 200 and 300 °C shifted the diffraction peak at 10° toward larger 2θ values, indicating an interlayer contraction. For samples treated at 300 °C (see up arrow) we observe the appearance of a peak close to 2θ = 12°, typical of the bulk Na2Ti6O13 structure, and a sharp drop in the relative intensity of the peak at about 2θ = 10°, typical of the Na2Ti3O7 phase. This confirms that when the NRTiOx sample, whose structure is close to Na2-xHxTi3O7, is thermally treated between 200 and 300 °C, some of the nanoribbons lose interlayer water. This water release allows a gradual phase transformation of those nanoribbons toward a structure similar to bulk Na2Ti6O13 at temperatures above 600 °C. The remaining nanoribbons, which are anhydrous or contain only a small amount of water owing to a higher Na concentration, undergo neither phase nor morphological changes up to 600 °C. This behavior causes the diffraction pattern to be dominated by peaks from bulk Na2Ti6O13, because the bulk phase has a higher long-range order than the NRTiOx nanoparticles. In the 300-600 °C range a structural disorder develops in the XRD pattern, which can be understood as structural rearrangement of part of the NRTiOx as a consequence of water and hydroxyl release, possibly inducing morphological changes in the nanoribbons. At 800 °C a strong change in the NRTiOx X-ray pattern is observed, in agreement with the Raman spectroscopy data: the pattern comes to resemble a mixture of the bulk Na2Ti3O7 and Na2Ti6O13 phases. This assertion is based on the appearance of diffraction peaks close to 2θ = 10° and 26° (marked with x), typical of bulk Na2Ti3O7, and of three peaks (marked with o in Figure 7) close to 12°, 25° and 31°, typical of bulk Na2Ti6O13. At 1000 °C the peaks become well defined and we clearly identify a mixture of the Na2Ti3O7 (marked with x) and Na2Ti6O13 (marked with o) phases, with the amount of bulk Na2Ti3O7 larger than that of Na2Ti6O13. This could indicate the gradual conversion of a portion (the smaller portion) of the NRTiOx nanoparticles, after dehydration, to bulk Na2Ti6O13 from 300 °C, and the conversion of the remaining NRTiOx nanoparticles to the bulk Na2Ti3O7 phase from about 800 °C, as discussed for NTTiOx in the literature.23,30,43,44 Papp et al.25 studied the thermal evolution of bulk Na2Ti3O7 and observed that, as temperature increases, the Na2Ti3O7 structure partially dimerizes to form Na2Ti6O13, similar to what we observed for the NRTiOx samples. This provides further evidence that the atomic arrangement in the layers of NRTiOx is similar to that of bulk Na2Ti3O7.
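The interlayer contraction inferred above follows directly from Bragg's law: a shift of the ~10° interlayer peak toward larger 2θ means a smaller d-spacing. The short sketch below (Python) illustrates the conversion; note that the radiation wavelength is our assumption, since the text does not state it (Cu Kα, 1.5406 Å, is typical for such measurements).

import math

CU_KALPHA = 1.5406  # wavelength in angstrom; assumed Cu K-alpha source (not stated in the text)

def d_spacing(two_theta_deg, wavelength=CU_KALPHA):
    """Bragg's law with n = 1: d = lambda / (2 sin(theta))."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta))

# A shift of the interlayer peak from 10 deg toward larger 2-theta values
# corresponds to a smaller d, i.e., interlayer contraction upon water release.
for two_theta in (10.0, 10.5, 11.0):
    print(f"2theta = {two_theta:4.1f} deg -> d = {d_spacing(two_theta):.2f} angstrom")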
In Figure 8, we show ex situ FTIR spectra of NRTiOx thermally treated in the 100 to 1000 °C temperature interval. From room temperature up to 800 °C the FTIR spectra do not exhibit any significant changes (up arrows). At 1000 °C we can verify (down arrows) the phase transformation, which results in a mixture of the bulk Na2Ti3O7 and Na2Ti6O13 phases for the thermally treated NRTiOx sample.
In Figure 9a we show SEM images of as-prepared NRTiOx, and in Figures 9b and 9c images of samples treated at 800 and 1000 °C, respectively. We can clearly observe that thermal annealing at 1000 °C induced morphology changes, indicating the formation of large (bulk-like) rods with a mixture of the bulk Na2Ti3O7 and Na2Ti6O13 phases. Comparing the results obtained from Raman, FTIR, and XRD, we see that Raman spectroscopy can track the structure and morphology changes very clearly. Since these experiments are simpler, less time-consuming, and non-destructive, Raman spectroscopy can be used for investigating both the structure and the morphology of titanate nanostructures.
Conclusions
We have discussed the structural, morphological and vibrational properties of titanate nanostructures (nanotubes and nanoribbons). The vibrational spectra of NTTiOx and NRTiOx exhibit clear signatures showing that both the morphology and the size of these nanostructures play key roles in the vibrational modes, which allows both Raman and FTIR to be used to identify the morphologies of titanate nanostructures.
We also studied the thermal decomposition properties of titanate nanoribbons obtained via the hydrothermal method. Thermal annealing of these titanate nanostructures at increasing temperatures leads to several structural modifications. At about 200 °C the interlamellar water is released and the interlayer distance is reduced. At about 300 °C, with the release of this water, the material starts a gradual phase transformation to bulk Na2Ti6O13, observed in both the Raman and X-ray data, which show that the samples become disordered. Part of the nanoribbons (the dominant fraction) does not undergo any phase transformation up to 600 °C; above this temperature they change directly to bulk Na2Ti3O7. On further increasing the annealing temperature the structural changes become prominent, and at about 800 °C a mixture of the bulk Na2Ti3O7 and Na2Ti6O13 phases is observed. For samples treated at 1000 °C we observed an increase in the amount of NRTiOx transformed into the bulk Na2Ti3O7 phase. This may be related to a greater concentration of interlamellar sodium in the NRTiOx, generating greater stability as the treatment temperature increases. The structural changes of the thermally treated nanoribbon samples are also directly related to changes in their morphology, which evolves into large rods. Both the structural and the morphological changes induced by thermal treatment were clearly captured in the Raman spectra, suggesting that this technique can be used to precisely monitor the thermal behavior of titanate nanoribbons. Finally, this paper contributes to improving the understanding of titanate nanostructures through an assessment of their thermal decomposition behavior. NTTiOx and NRTiOx have similar layer structures, corroborating the Na2-xHxTi3O7 composition already mentioned.
Figure 6.
Figure 6. Raman spectra (a) in situ for NRTiOx in the room temperature (RT) to 550 °C range and (b) ex situ for NRTiOx in the 100-1000 °C temperature range.
Figure 9.
Figure 9. SEM images of (a) as-prepared NRTiOx at room temperature and thermally treated at (b) 800 and (c) 1000 °C.
Table 1.
EDS average counts (Na/Ti ratio) for the as-prepared NTTiOx, NRTiOx, Na2Ti3O7 and Na2Ti6O13 and reference values. Na/Ti molar ratios in NTTiOx and NRTiOx obtained by ICP OES are also shown.

Bulk Na2Ti3O7 at room temperature exhibits a lamellar structure (monoclinic, space group P21/m (C2h^2)) with two formula units per unit cell (Z = 2). Group theory predicts that Na2Ti3O7 would exhibit 69 vibrational modes distributed among the irreducible representations as 15Ag + 20Au + 15Bg + 19Bu. Bulk Na2Ti6O13 at room temperature has a tunnel-like structure (base-centered monoclinic, space group C2/m (C2h^3)).
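For reference, the mode bookkeeping quoted above can be made explicit with the standard C2h selection rules; this is a sketch we add for clarity (the acoustic-mode subtraction follows the usual convention and is not stated in the original text):

\begin{align*}
\Gamma_{\mathrm{total}}    &= 15A_g + 20A_u + 15B_g + 19B_u && (69\ \text{modes})\\
\Gamma_{\mathrm{acoustic}} &= A_u + 2B_u\\
\Gamma_{\mathrm{Raman}}    &= 15A_g + 15B_g && (g\ \text{modes are Raman active})\\
\Gamma_{\mathrm{IR}}       &= 19A_u + 17B_u && (u\ \text{modes are IR active})
\end{align*}

Hence at most 30 Raman-active and 36 IR-active optic modes are expected for Na2Ti3O7; far fewer are resolved experimentally because of overlap and broadening.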
|
v3-fos-license
|
2018-04-03T04:18:53.793Z
|
2014-04-16T00:00:00.000
|
23694474
|
{
"extfieldsofstudy": [
"Chemistry",
"Medicine",
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://journals.iucr.org/e/issues/2014/05/00/hy2643/hy2643.pdf",
"pdf_hash": "a4726940795ebfeb73d1ec30f808eb7e1fc36166",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45967",
"s2fieldsofstudy": [
"Chemistry"
],
"sha1": "a4726940795ebfeb73d1ec30f808eb7e1fc36166",
"year": 2014
}
|
pes2o/s2orc
|
Diaquabis(nicotinamide-κN1)bis(thiocyanato-κN)nickel(II)
In the title complex, [Ni(NCS)2(C6H6N2O)2(H2O)2], the NiII ion is located on an inversion center and is coordinated in a distorted octahedral environment by two N atoms from two nicotinamide ligands and two water molecules in the equatorial plane, and two N atoms from two thiocyanate anions in the axial positions, all acting as monodentate ligands. In the crystal, weak N—H⋯S hydrogen bonds between the amino groups and the thiocyanate anions form an R42(8) motif. The complex molecules are linked by O—H⋯O, O—H⋯S, and N—H⋯S hydrogen bonds into a three-dimensional supramolecular structure. Weak π–π interactions between the pyridine rings are also found [centroid–centroid distance = 3.8578 (14) Å].
Related literature
For background to the applications of transition metal complexes with biochemically active ligands, see: Antolini et al. (1982); Krishnamachari (1974).
Comment
Transition metal complexes with biochemically active ligands frequently show interesting physical and/or chemical properties, and as a result they may find applications in biological systems (Antolini et al., 1982). As ligands, nicotinamide (NA) and thiocyanate are interesting because they can form metal coordination complexes with multifunctional coordination modes owing to the presence of S and N donor atoms. With reference to the hard and soft acids and bases concept, soft cations show a pronounced affinity for coordination with softer ligands, while hard cations prefer coordination with harder ligands (Hökelek, Dal et al., 2009; Hökelek, Yilmaz et al., 2009; Özbek et al., 2009; Zhu et al., 2006). NA is one form of niacin, and a deficiency of this vitamin leads to loss of copper from the body, known as pellagra disease. The nicotinic acid derivative N,N-diethylnicotinamide (DENA) is an important respiratory stimulant.
In the title complex, the NiII ion is located on an inversion center and coordinated by two equatorial N atoms from two NA ligands, two equatorial O atoms from water molecules, and two axial N donors from thiocyanate ligands, as can be seen in Fig. 1. The Ni—O1W bond distance is 2.088 (2) Å, which is very close to the Ni—N3(thiocyanate) distance of 2.090 (2) Å. As can be seen from the packing diagram (Fig. 2), the complex molecules are linked by intermolecular O—H···O, O—H···S and N—H···S hydrogen bonds (Table 1), forming a supramolecular structure. The discrete molecules are connected by O1W—H···O1 and O1W—H···S1 hydrogen bonds into a two-dimensional layer parallel to (010). The thiocyanate S1 atom also accepts two further hydrogen bonds from two different amide N atoms, completing an overall three-dimensional supramolecular structure.
A greenish blue solution was obtained. After filtration, the final clear solution was left undisturbed at room temperature for slow evaporation. The next day, needle-shaped greenish blue crystals were collected and dried in vacuo over silica gel.
Crystals suitable for single-crystal X-ray diffraction were manually selected and immersed in silicone oil. H atoms were located in a difference Fourier map and refined isotropically.
Figure 2
Packing diagram of the title complex. Hydrogen bonds are shown as dashed lines.
Special details
Geometry. All e.s.d.'s (except the e.s.d. in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell e.s.d.'s are taken into account individually in the estimation of e.s.d.'s in distances, angles and torsion angles; correlations between e.s.d.'s in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell e.s.d.'s is used for estimating e.s.d.'s involving l.s. planes.

Refinement. Refinement of F^2 against ALL reflections. The weighted R-factor wR and goodness of fit S are based on F^2; conventional R-factors R are based on F, with F set to zero for negative F^2. The threshold expression F^2 > 2σ(F^2) is used only for calculating R-factors (gt) etc., and is not relevant to the choice of reflections for refinement. R-factors based on F^2 are statistically about twice as large as those based on F, and R-factors based on ALL data will be even larger.
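For completeness, the conventional quantities referred to in the refinement note above have the standard crystallographic definitions (these are the usual SHELX-style formulas, reproduced here for the reader rather than taken from this paper):

R_1 = \frac{\sum \big| |F_o| - |F_c| \big|}{\sum |F_o|}, \qquad
wR_2 = \left[ \frac{\sum w\,(F_o^2 - F_c^2)^2}{\sum w\,(F_o^2)^2} \right]^{1/2}, \qquad
S = \left[ \frac{\sum w\,(F_o^2 - F_c^2)^2}{n - p} \right]^{1/2}

where n is the number of reflections and p the number of refined parameters; this is why R-factors based on F^2 run roughly twice as large as those based on F.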
|
v3-fos-license
|
2020-10-29T09:08:56.097Z
|
2020-10-20T00:00:00.000
|
226348275
|
{
"extfieldsofstudy": [
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://www.ccsenet.org/journal/index.php/ijbm/article/download/0/0/43968/46273",
"pdf_hash": "fcc6b42ae99754a4c798ea0785d283587bf9db51",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45972",
"s2fieldsofstudy": [
"Environmental Science",
"Business"
],
"sha1": "48e2cb0e7c46e73db2a17e8d681e98e475386ca7",
"year": 2020
}
|
pes2o/s2orc
|
Integrating Diffusion of Innovations and Theory of Planned Behavior to Predict Intention to Adopt Electric Vehicles
Electric vehicles (EVs) are recognized as effective solutions to the global air pollution problem, attracting much attention from businesses, governments, and consumers. Despite the heightened interest, EV penetration rates remain low. This study thus focuses on consumers' evaluation of EV innovation to provide implications for promoting EV adoption, proposing a theoretical model that integrates the diffusion of innovations theory and the theory of planned behavior to examine the relationship between consumers' perceived innovation characteristics and the adoption of EVs. The study findings indicate that consumers' evaluation of EV innovation has a significant impact on their attitudes toward and intention for EV adoption, and several important innovation characteristics yield practical implications for spreading EV acceptance.
Introduction
The world is facing environmental problems due to global warming and climate change caused by excessive use of fossil fuels. In the U.S., >30% of total greenhouse gases are emitted from the transportation sector, of which >60% are emitted by light-duty vehicles (U.S. Environmental Protection Agency, 2015). Gasoline-fueled vehicles and existing internal combustion engine vehicles are suggested to be the main culprits of environmental pollution (White & Sintov, 2017).
Businesses and governments are aware of the seriousness of environmental pollution and strive to adopt new, preventive environmental technologies. Alternative fuel vehicles (AFVs) have been proposed as promising measures to resolve air pollution-related problems (White & Sintov, 2017); among AFVs, the electric vehicle (EV) burns no fuel, drives a motor from batteries, and generates minimal exhaust gas, and hence EVs are attracting attention with respect to solving greenhouse gas-related problems. Several countries have initiated policies to stimulate the production and acceptance of EVs and reduce CO2 emissions to meet their sustainability goals (Brady & O'Mahony, 2011; Rezvani et al., 2015).
To stimulate the EV market, the Korean government is implementing eco-friendly policies such as subsidies for EV purchases and tax reductions; despite the Korean government's efforts and the environmental benefits of EVs, the spread of EVs is very slow. The penetration of EVs increased steadily from 2011 to 2017, exceeding 10,000 units, but has decreased since 2018, and the penetration rate of EVs remains far below that of internal combustion engine vehicles (Korea Ministry of Environment, 2019). One interpretation of this is consumers' anxiety about the lack of EV-charging facilities and EVs' limited driving range, implying that public acceptance of EVs depends on consumer perception (Schuitema et al., 2013); therefore, it is important to understand and address consumers' perceptions of EVs to promote acceptance (Rezvani et al., 2015). Moreover, the performance (success or failure) of an innovative product is determined by consumers' evaluations of the product's innovation and acceptance of the product (Olshavsky & Spreng, 1996), so companies should pay more attention to product innovation as evaluated by consumers. Therefore, to stimulate EV demand, studies on the relationship between consumer evaluation and acceptance of EVs are required, alongside the development of effective marketing strategies. Previous studies have focused on examining the relationships between factors affecting EV acceptance and purchase intention (Delang & Cheng, 2012; Egbue & Long, 2012; Hidrue et al., 2011; Morton et al., 2016; Orlov & Kallbekken, 2019; White & Sintov, 2017) and reviewing the current state of the EV market (Egbue & Long, 2012; Heffner et al., 2007; Larson et al., 2014; Plötz et al., 2014). However, these studies did not consider the evaluation of EVs from consumers' perspective (Jansson, 2011), and few attempts have been made to understand consumer perceptions based on theoretical models of consumer behavior. To address this gap, this study proposes a theoretical model integrating the theory of planned behavior (TPB) (Ajzen, 1991) and the diffusion of innovations (DOI) theory (Rogers, 2003) to examine the relationship between consumers' evaluation of innovativeness for EVs and the adoption of EVs.
TPB has been widely applied in various domains to predict human behavior and intentions and postulates that behavioral intention is formed by three determinants: attitude toward the behavior, subjective norm, and perceived behavioral control (PBC) (Ajzen, 1991). Thus, these three determinants can be used effectively to explain and predict intention to adopt EVs. EVs are recognized as eco-innovations because of their potential to reduce the transportation sector's environmental problems; thus, adopting them is regarded as innovation acceptance behavior (Jansson, 2011; Liao et al., 2017). DOI provides a useful framework for studying the success of eco-innovations from the consumers' perspective and helps explain why marketing strategies have varying impacts on eco-innovation success (Driessen & Hillebrand, 2002). This theory identifies five innovation characteristics: relative advantage, compatibility, complexity, observability, and trialability; they are subject to the perception of potential adopters (Driessen & Hillebrand, 2002) and can be used as indicators in the TPB model to improve the prediction of EV adoption. In this context, this study proposes a DOI-TPB integration model to better explain the adoption of EVs by analyzing the effects of the perceived innovation characteristics (PICs) of EVs on attitude toward and acceptance of EVs.
As marketers strive to study consumer perceptions of new technological innovations to predict consumer behaviors and reactions (Rogers, 2003), the study results will help in explaining EV adoption decisions and provide useful implications for marketing strategies to promote EV adoption.
TPB
TPB has been employed to elucidate social behaviors in diverse settings, including consumer behaviors, and has proved its usefulness through long-accumulated empirical research (Zhou et al., 2013); it is an extension of Fishbein and Ajzen's (1975) theory of reasoned action (TRA), which predicts an individual's behavioral intention from attitude toward the behavior and subjective norm. TPB adds PBC to the TRA model to overcome TRA's limitations in situations where people's volitional control is incomplete (Ajzen, 1991). When people require information, skills, opportunities, and other resources to perform a behavior, they perceive barriers and obstacles depending on the availability of the required resources. Under these circumstances, TPB, which accounts for the ease or difficulty of carrying out the behavior, is more appropriate (Hansen, 2008). PBC needs to be considered in predicting EV adoption, as adoption requires not only internal resources such as individual abilities and self-efficacy but also external resources such as opportunities and information; PBC also captures the effects of social impact, a variable traditionally associated with DOI (Crespo & del Bosque, 2008; Liao et al., 2017).
According to TPB, individual behavior is assumed to be the result of behavioral intention, and behavioral intention is formed by attitude toward the behavior, subjective norm, and PBC (Ajzen, 1991). Attitude is a latent disposition or tendency that an individual has about a particular behavior and refers to the degree to which a person evaluates the outcome of that behavior positively or negatively. Sutcliffe et al. (2008) and De Groot and Steg (2007) argued that attitude toward products has a positive effect on consumers' intentions to use them. Crespo and del Bosque (2008), Fang et al. (2009), and Yang (2012) analyzed the relationship between user attitude and intention to adopt web technologies; all of these studies reveal that attitude has a significant impact on intention. On the basis of the previous studies, the relationship between attitude toward EV adoption and intention to adopt EVs is hypothesized as follows: H1. Attitude toward EV adoption is positively related to intention to adopt EVs.
Subjective norm refers to the social pressures that influence whether or not an individual performs a particular behavior and is related to the acceptance of such social pressure to perform the behavior in a particular way. It can be defined as an individual's perception of what important referents such as family members or friends expect the individual to do (Adnan et al., 2017), making it an important variable that significantly influences behavioral intention (Chen & Tung, 2014; Han et al., 2010; Hansen, 2008; Lowe & Alpert, 2015). Bockarjova and Steg (2014) and Chen and Tung (2014) argued that high social pressures from those who expect an individual to perform a particular behavior increase the individual's intention to perform it; therefore, subjective norm is hypothesized to have a positive effect on intention to adopt EVs.
H2. Subjective norm is positively related to the intention to adopt EVs.

PBC refers to an individual's belief about how easy or difficult it is to perform a particular behavior (Ajzen, 1991). It particularly relates to an individual's internal (e.g., self-efficacy, confidence, or ability) and external (e.g., economic conditions or time) resources for performing a specific behavior (Taylor & Todd, 1995). If an individual assesses that he or she has sufficient resources or competencies to perform a particular behavior, then the individual perceives fewer difficulties in performing it. López-Mosquera and Sánchez (2012) reported that the higher the level of consumers' behavioral control, the higher the behavioral intention. Therefore, it is hypothesized that PBC has a positive effect on intention to adopt EVs.
H3. PBC is positively related to the intention to adopt EVs.
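Taken together, H1-H3 correspond to the standard additive TPB specification; schematically (a sketch of the structural relation whose weights are estimated later by PLS-SEM, not an equation given in the original):

\mathrm{BI} = w_1\,\mathrm{AT} + w_2\,\mathrm{SN} + w_3\,\mathrm{PBC} + \varepsilon

where BI is behavioral intention, AT attitude toward the behavior, SN subjective norm, PBC perceived behavioral control, and the w_i are path coefficients.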
DOI
As consumer acceptance of innovative products, ideas, and practices is crucial for triggering diffusion, the adoption and diffusion of innovations have received much attention in the marketing and consumer literature (Jansson, 2011). Rogers's (2003) DOI theory has been widely used to explain and predict consumer behaviors related to innovation adoption (Chou et al., 2012). According to DOI, the innovation-decision process has five stages: (1) initial knowledge of the innovation, (2) persuasion by forming a favorable attitude, (3) decision to adopt the innovation, (4) implementation by using the innovation, and (5) confirmation and continuous use of the innovation (Rogers, 2003). The first two steps are important for understanding consumer adoption behavior because this is where attitude toward the innovation is developed. In this attitude-forming process, potential adopters' personality traits, particularly their innovativeness, and their perceptions of the characteristics of the innovation play an important role (Jansson, 2011).
On the basis of these findings, this study focuses on the relationship between consumers' PICs of EVs and intention to adopt EVs using a proposed DOI-TPB integration model, an approach others have also taken. Marcati et al. (2008) proposed an integrated framework to improve the prediction of entrepreneurs' intention to adopt innovation, and Chou et al. (2012) used a similar approach in analyzing the adoption of green practices in the restaurant industry. Jackson et al. (2013) used a DOI-TPB integration model to predict intention to adopt technological innovation. This study attempts to integrate the five PICs proposed by Rogers (2003) with TPB to better explain EV adoption decisions and is the first attempt to apply a DOI-TPB integration model at the consumer level, in contrast to previous organization-level studies.
DOI assumes that potential adopters form their attitudes toward innovation centered on their perceptions of the five PICs (Jansson, 2011): relative advantage, compatibility, complexity, trialability, and observability. Chou et al. (2012) claimed that PICs capture approximately 49%-87% of the variance of innovation adoption. Specific descriptions of PICs are given below (Adnan et al., 2017;Chou et al., 2012;Rogers, 2003).
(1) Relative advantage is the extent to which innovation is perceived to be superior to existing products or ideas (Rogers, 2003). The level of added value of an innovative product is determined by its relative economic, social, and technical advantages compared with the existing products, spurring rapid adoption and diffusion. In the case of EV adoption, innovation cost and motivation for social status are anticipated to operate as essential relative advantage factors. Innovators, early adopters, and early majority are more likely to be motivated by these factors.
(2) Compatibility is the extent to which innovative products are perceived to be consistent with consumer needs, beliefs, values, and experiences (Rogers, 2003). The more meaningful an innovation is to potential consumers and the more compatible it is with their existing behavioral patterns and experiences, the more likely its adoption. In contrast, if innovation differs from the requirements of the consumers and lacks compatibility, it negatively affects adoption (McKenzie, 2001). In several DOI studies, relative advantage and compatibility are considered conceptually different but similar. Holak and Lehmann (1990) argued that relative advantage and compatibility, among the five PICs, have the most significant impact on innovation adoption. (3) Complexity is the extent to which innovative products are perceived as hard to understand and use (Rogers, 2003). This characteristic is related to the problems arising when using the innovative product, implying that complexity has a negative correlation with the adoption of innovative products. On the contrary, if innovations are user-friendly, then they can be successfully adopted (Martin, 2003). (4) Observability, often referred to as visibility, is defined as the extent to which results of adopting an innovative product are visible to others (Rogers, 2003). Peer observation is a key motivator for the adoption and diffusion of innovation (Parisot, 1997). Frequent exposure of an innovative product to potential consumers increases familiarity, spreads word-of-mouth, and accelerates the adoption and diffusion process. (5) Trialability is the ease with which potential consumers can try an innovative product. Innovative products with high trialability are more quickly adopted in the market. Furthermore, innovations can be altered or modified while potential consumers are trying the product, leading to reinvention. This aspect further improves innovation and facilitates and accelerates adoption. According to Rogers (2003), trialability is an important factor for later adopters who deliberately embrace innovation. It can be enhanced by providing potential consumers with information for verification or allowing adaptive tests. Providing test-driving opportunities for EVs increases perceived trialability, thereby enhancing EV adoption.
Several studies suggested that PICs have a direct impact on innovation adoption (Agarwal & Prasad, 1997;Damanpour & Schneider, 2009;Hebert & Benbasat, 1994). However, Chou et al. (2012) argued that PICs can be modeled more effectively by assuming that they affect innovation adoption indirectly with the mediation of attitude, as PICs are cognitive indicators of attitude toward innovation adoption. Rogers (2003) asserted that the formation of a specific attitude toward innovation is a major factor influencing the decision process related to adoption of the innovation. Attitude predicts consumption behavior better than other factors (Brunsø et al., 2004); it mediates the influences and stimuli of external variables on behavioral intention and thus varies according to the area of consumption (Frambach & Schillewaert, 2002). Thus, attitude represents context-specific dispositions in which an individual's perception is linked to actual consumption behaviors (Jansson et al., 2017). Therefore, PICs more effectively predict adoption intention when mediated through attitude.
Putzer and Park (2012) found that PICs are a significant antecedent variable of attitude toward emerging mobile technologies. Lowe and Alpert (2015) found that consumer perception of innovativeness has a positive effect on attitude. In a study on the adoption of AFVs, Jansson (2011) noted that the adopters perceive AFVs as more advantageous, compatible, observable, and less complex than the non-adopters. Smerecnik and Andersen (2011) revealed that relative advantage and complexity are related to the sustainability of innovation in hotels and ski resorts. Chou et al. (2012) examined the effect of four PIC variables (relative advantage, compatibility, complexity, and observability) on attitude toward adopting green practices in the restaurant industry.
On the basis of previous research, this study postulates that because consumers perceive EVs as having an advantage over existing vehicles, consumer attitude toward EV adoption will be more positive.
H4.
Relative advantage is positively related to attitude toward EVs.
H5.
Compatibility is positively related to attitude toward EVs.
H6.
Complexity is negatively related to attitude toward EVs.
H7.
Observability is positively related to attitude toward EVs.
H8. Trialability is positively related to attitude toward EVs.
Consumer Innovativeness
Consumer innovativeness is a key concept in DOI along with PICs (Rogers, 2003). It refers to the inherent propensity of a consumer to adopt innovative products, whereas PICs represent consumers' perceptions of the innovative nature of products. Consumer innovativeness is the degree to which an individual adopts technologies at a relatively early stage, prior to adoption by others (Rogers, 2003). It has played an important role in understanding the early adoption behaviors of consumers, particularly because it reflects the desires of early adopters, who have less antipathy to innovative technologies and want to purchase innovative products (Bartels & Reinders, 2011; Morton et al., 2016; Roehrich, 2004).
Although consumer innovativeness and evaluation of product innovativeness are different concepts, several studies identify a linkage between them (Lu et al., 2005;Venkatesh et al., 2003). Jackson et al. (2013) verified that consumers' perceived product evaluations vary according to their innovativeness, i.e., the higher the consumer innovativeness, the more likely consumers are to perceive product innovation positively. Yang (2012) examined the moderating effects of consumer innovativeness in the relationship between consumer product evaluation and attitude toward mobile shopping. These findings imply that consumer innovativeness can have a moderating influence on the relationship between PICs and attitude toward innovation adoption.
As consumers with higher innovativeness are more likely to adopt new innovative products and gain experience by using varied innovative products, they are more confident in their evaluation of product innovativeness; therefore, consumers with higher innovativeness are expected to reinforce their attitudes toward EV adoption. Hypotheses about the moderating effects of consumer innovativeness are given below:

H9 (a). Consumer innovativeness moderates the relationship between relative advantage and attitude toward EV adoption.

H9 (b). Consumer innovativeness moderates the relationship between compatibility and attitude toward EV adoption.
H9 (c).
Consumer innovativeness moderates the relationship between complexity and attitude toward EV adoption.
H9 (d).
Consumer innovativeness moderates the relationship between observability and attitude toward EV adoption.
H9 (e). Consumer innovativeness moderates the relationship between trialability and attitude toward EV adoption.
Moderating Effects of Demographic Characteristics
TPB defines the factors that directly influence behavioral intention but is sufficiently flexible to allow the inclusion of moderating variables in its relationships. According to previous studies, the influence of the antecedent variables of behavioral intention is either strong or weak, depending on the situation and individual characteristics (Ajzen, 1988; Keith & McWilliams, 1999; Van Hooft et al., 2005). De Groot and Steg (2007) argued that the relative importance of attitude, subjective norm, and PBC differs among target groups, and these differences inhibit general conclusions about behavioral prediction.
Several empirical studies that analyzed differences in innovation adoption by demographic variables are inconclusive or conflicting (White & Sintov, 2017). According to surveys on EVs, younger people are generally more interested in and have a higher intention to purchase EVs. Hyundai Motor's survey of 748 electric car buyers in 2016 found that people in their 20s-30s accounted for more than 40% of the total (Kim, 2016). Hyundai Motor interpreted this as indicating that young people have less fear of new things and are more innovative than their older counterparts. In contrast, Plötz et al. (2014) demonstrated that the proportion of consumers who used or intended to use EVs was highest for consumers in their 40s and lowest for consumers in their 20s. Consequently, the existing literature does not provide consistent predictions on EV adoption among different age groups.
This study includes age as a moderator between intention of EV adoption and three direct antecedents of intention to examine the relationships among key variables of TPB across different age groups. The hypotheses to examine the moderating effect of age on the relationship between the key variables of TPB are given below: H10 (a). Age moderates the relationship between attitude and intention to adopt EVs.
H10 (b).
Age moderates the relationship between social norm and intention to adopt EVs.
Participants and Procedure
This study surveyed potential EV consumers above the age of 25 years residing in Daegu and Gyeongbuk province, Korea, using a convenience sampling method. The survey questionnaire was constructed and distributed using a Google survey form and collected from voluntary respondents through a mobile social network. Data were collected for one month, i.e., from June 1 to 30, 2019, from a total of 176 respondents. Structural equation modeling (SEM) was employed for data analysis. Among SEM techniques, this study used partial least squares (PLS) SEM, which is considered more appropriate than covariance-based SEM when a model is complex with many constructs and the sample size is relatively small. In a PLS-SEM analysis, the "10-times rule" is frequently used as a barometer for estimating the minimum sample size required. The rule simply states that the sample size should be the larger of the following two values, (1) the maximum number of formative indicators of a variable multiplied by 10 and (2) the maximum number of structural paths directed to a variable multiplied by 10 (Hair et al., 2017). The collected data were further analyzed using IBM SPSS Statistics 22.0 (SPSS), and SmartPLS 3.0. SPSS was used to obtain descriptive statistics including frequency analysis; SmartPLS 3.0 was also used to verify the relationships among constructs in the SEM model. The analysis of the PLS-SEM model in this study was performed based on the procedures and criteria presented by Hair et al. (2017).
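As a concrete illustration of the "10-times rule" described above, the following sketch computes the minimum sample size; the example counts are placeholders chosen for illustration (attitude receives five structural paths in the model), not figures reported by the study.

def min_sample_size_10x(max_formative_indicators, max_inbound_paths):
    """Minimum n per the 10-times rule (Hair et al., 2017): 10 times the
    larger of (i) the maximum number of formative indicators pointing at
    any construct and (ii) the maximum number of structural paths
    directed at any construct."""
    return 10 * max(max_formative_indicators, max_inbound_paths)

# Example: attitude receives 5 structural paths (the five PICs), so the
# rule requires n >= 50; the collected sample of 176 satisfies this.
print(min_sample_size_10x(max_formative_indicators=5, max_inbound_paths=5))  # -> 50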
Measures
Appendix 1 lists all the questionnaire items. The items were measured on a 5-point Likert scale, ranging from "very unlikely" (1) to "very likely" (5). All measurement items were adopted from previous research in which their validity and reliability were established.
PICs include five dimensions, and the measurement items were constructed based on Jansson (2011). PICs were measured using 17 items: 4 for relative advantage, 3 for compatibility, 4 for complexity, 3 for trialability, and 3 for observability. Measurements of attitude, subjective norm, PBC, and intention included in TPB are based on the TPB questionnaire presented by Ajzen (2019). The PBC items in the TPB model were constructed based on Wang et al. (2016) and Yadav et al. (2019). Attitude toward behavior was measured by 4 items, subjective norm by 3 items, PBC by 3 items, and intention by 3 items. Finally, the measurement of consumer innovativeness was constructed based on Lu et al. (2005) and consisted of 4 items.
For the analysis of the structural model to be meaningful, the reliability and validity of all indicators must be secured and collinearity problems between the variables must be resolved. The reliability results for the measurement model are presented in Table 1; overall, the measurement model shows internal consistency reliability except for several indicators. Indicators RA1 and COMPAT2 were slightly below the criterion values, but they were retained in the model, as the deviations were small. In contrast, indicators COMPLEX1 and COMPLEX3 were excluded from the analysis because their reliability measures were far below the criterion values. After excluding both items, the complexity construct satisfied the judging criteria with a Cronbach's alpha of 0.640 and an average variance extracted (AVE) of 0.626.
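The reliability statistics mentioned above can be reproduced with standard formulas. The sketch below (Python/NumPy) is illustrative only: the response matrix is synthetic and the loadings are hypothetical, not the study's data.

import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) matrix.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

def ave(loadings):
    """Average variance extracted: mean of squared standardized loadings."""
    return float(np.mean(np.asarray(loadings) ** 2))

rng = np.random.default_rng(0)
construct = rng.normal(size=(176, 1))                 # latent construct score
items = construct + 0.8 * rng.normal(size=(176, 3))   # three noisy indicators
print(f"alpha = {cronbach_alpha(items):.3f}")
print(f"AVE   = {ave([0.82, 0.79, 0.76]):.3f}")       # hypothetical loadings -> 0.625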
Further, discriminant validity among the constructs was analyzed, and the results are presented in Table 2. Each correlation coefficient is smaller than the corresponding square root of the AVE, which confirms discriminant validity among the model constructs. Collinearity among the constructs was also examined, and the results, presented in Table 3, confirm that there is no collinearity problem.
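The discriminant-validity check summarized in Table 2 is the Fornell-Larcker criterion: each construct's square-root AVE must exceed its correlations with all other constructs. A compact sketch with made-up values (illustrative only, not the study's Table 2):

import numpy as np

def fornell_larcker_ok(corr, ave_values):
    """Pass if sqrt(AVE_i) > |corr(i, j)| for every pair i != j."""
    root_ave = np.sqrt(np.asarray(ave_values))
    off_diag = corr - np.diag(np.diag(corr))   # zero out the diagonal
    return bool(np.all(root_ave[:, None] > np.abs(off_diag)))

corr = np.array([[1.00, 0.45, 0.38],
                 [0.45, 1.00, 0.52],
                 [0.38, 0.52, 1.00]])
ave_values = [0.63, 0.58, 0.61]
print(fornell_larcker_ok(corr, ave_values))    # True: sqrt(AVE) > all correlations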
Structural Model and Test of Hypotheses
The SEM analysis results for the integration model of EV adoption are presented in Figure 2. The values on the arrows between the latent variables are the path coefficients, and the values on the arrows between the measured indicators and the latent variables represent the outer loadings. The numbers in the circles are the coefficients of determination (R2 values), representing the variance of each construct explained by other variables; values of 0.75, 0.50, and 0.25 for latent variables can be described as substantial, moderate, and weak, respectively (Hair et al., 2017). In this study, the coefficient of determination for EV adoption was 0.734, which indicates that the explanatory power of the model is very high.

Figure 2. Results of path analysis of consumers' intention to adopt electric vehicles

To evaluate the significance of the path coefficients in the structural model, the SmartPLS bootstrapping technique was applied. The results, presented in Table 4, indicate that all three determinants of the TPB model have significant effects on the intention for EV adoption, in order of subjective norm, attitude, and PBC. Thus, H1, H2, and H3 are all accepted.
Among the five PICs, compatibility, relative advantage, and observability have a significant influence on attitude toward EVs, whereas complexity and trialability have no effect. Thus, H4, H5, and H7 are accepted, but H6 and H8 are rejected. To analyze the moderating effects of consumer innovativeness on the relationships between PICs and attitude toward EV adoption, the SmartPLS bootstrapping procedure was applied, and the results are summarized in Table 5. The sample was divided into two groups based on the median value of the innovativeness scores, resulting in a higher-innovativeness group (>3 points) of 102 respondents and a lower-innovativeness group (≤3 points) of 74 respondents. The difference analysis between the path coefficients of the two groups demonstrates that the level of innovativeness has a moderating effect on the relationships between relative advantage and attitude and between compatibility and attitude at the 5% significance level. Thus, H9 (a) and H9 (b) are supported. However, the relationship between compatibility and attitude is strengthened at the higher level of innovativeness, while the relationship between relative advantage and attitude is strengthened at the lower level: the more positively respondents in the higher-innovativeness group perceive the compatibility of EVs, the more favorably they evaluate EV adoption, whereas for the lower-innovativeness group the relative advantage of EVs plays this role. The results indicate that compatibility affects attitude more for those with higher innovativeness, while relative advantage affects attitude more for those with lower innovativeness. Note. p (1) and p (2) are path coefficients for Groups 1 and 2, respectively. se (1) and se (2) are standard errors of p (1) and p (2), respectively. * p < .1, ** p < .05, *** p < .01, NS: not significant.
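The multigroup comparison above relies on bootstrapping the difference between group path coefficients. The miniature sketch below mimics the idea on synthetic data with a single standardized path (it illustrates the resampling logic only, not SmartPLS internals; the group sizes mirror the 102/74 split):

import numpy as np

def std_slope(x, y):
    """Standardized slope, i.e., the correlation-scale path coefficient."""
    return float(np.corrcoef(x, y)[0, 1])

def bootstrap_path_diff(x1, y1, x2, y2, n_boot=5000, seed=0):
    """Bootstrap distribution of slope(group 1) - slope(group 2)."""
    rng = np.random.default_rng(seed)
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        i1 = rng.integers(0, len(x1), len(x1))
        i2 = rng.integers(0, len(x2), len(x2))
        diffs[b] = std_slope(x1[i1], y1[i1]) - std_slope(x2[i2], y2[i2])
    return diffs

rng = np.random.default_rng(1)
x1 = rng.normal(size=102); y1 = 0.6 * x1 + rng.normal(size=102)  # higher group
x2 = rng.normal(size=74);  y2 = 0.3 * x2 + rng.normal(size=74)   # lower group
lo, hi = np.percentile(bootstrap_path_diff(x1, y1, x2, y2), [2.5, 97.5])
print(f"95% CI for the path difference: [{lo:.2f}, {hi:.2f}]")  # excludes 0 -> moderation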
Next, the moderating effect of age on the relationships among the three determinants of TPB and intention for EV adoption was analyzed, and the results are summarized in Table 6. The moderating effect of age is not obvious when the sample is divided into five age groups from 20s to 60s. However, when the sample is divided into two age groups (younger: 20s-30s and older: 40s-60s), the analysis for path coefficient differences reveals several moderating effects. The results demonstrate that the relationship between attitude and intention is further strengthened in the younger age group, while the relationship between subjective norm and intention is stronger in the older age group. This implies that older people are more likely to be influenced by people of similar status or social groups when they consider EV adoption, and that, for younger people, attitude toward EV adoption is the critical determining factor. Thus, H10 (a) and H10 (b) are supported. Note. p (1) and p (2) are path coefficients for Groups 1 and 2, respectively.
To verify the predictive accuracy of the model, the SmartPLS blindfolding procedure was used to obtain Q 2 values of all intrinsic latent variables of the model, as summarized in Table 7. The Q 2 values of all intrinsic latent variables, including intention and all three direct antecedents of intention, are greater than zero, suggesting that the TPB applied in this study has predictive relevance in the EV adoption context. The effect sizes of f 2 and q 2 were obtained (Table 8) to examine the relative predictive and explanatory effects of the three determinants of TPB. The effect size f 2 measures a predictor variable's contribution to a dependent variable's R 2 value as a relative indicator of explanatory relevance, and the effect size q 2 measures a predictor variable's contribution to a dependent variable's Q 2 value as a relative indicator of predictive relevance. In general, values of 0.02, 0.15, and 0.35 for both f 2 and q 2 represent low, medium, and high effects (Hair et al., 2017). Results reveal that social norm has high explanatory and predictive relevance, while attitude and behavioral control have comparatively low relevance on intention for EV adoption.
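The effect sizes used above have the standard definitions from Hair et al. (2017), where "included"/"excluded" refer to estimating the model with and without the predictor in question:

f^2 = \frac{R^2_{\text{included}} - R^2_{\text{excluded}}}{1 - R^2_{\text{included}}},
\qquad
q^2 = \frac{Q^2_{\text{included}} - Q^2_{\text{excluded}}}{1 - Q^2_{\text{included}}}

For example, a predictor whose omission dropped R2 from 0.734 to 0.60 would have f2 = (0.734 − 0.60)/(1 − 0.734) ≈ 0.50, a large effect (the 0.60 here is hypothetical).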
Discussion and Conclusions
This study examines how consumers' perceptions of the innovative characteristics of EVs affect their attitudes toward EV adoption and intentions to adopt EVs. A theoretical model integrating TPB (Ajzen, 1991) and DOI (Rogers, 2003) was developed to explain consumer adoption of innovative products in terms of perceptions of product innovativeness, providing a basis for further research on enhancing EV adoption. The results indicate that all three determinants of TPB have positive effects on intention to adopt EVs. Furthermore, relative advantage, compatibility, and observability positively influence attitude toward EV adoption. Consumer innovativeness functions partly as a moderator between PICs and attitude: the higher the consumer's innovativeness, the stronger the relationship between compatibility and attitude, and the lower the consumer's innovativeness, the stronger the relationship between relative advantage and attitude. Younger consumers (20s-30s) show a stronger relationship between attitude and intention to adopt EVs than older consumers (40s-60s), who show a stronger relationship between subjective norm and intention to adopt EVs.
On the basis of the research results, the following practical implications for promoting demand for EVs are suggested. Among the three determinants of TPB, subjective norm has the greatest impact on intention to adopt EVs. Groups and societies have a large influence on an individual's behavior (Venkatesh & Davis, 2000), and social norm has a significant influence on intention (Bamberg, 2003). Therefore, there is a need to activate social networks for sharing information and knowledge of EVs, and a strategy is needed to reduce misunderstanding and simultaneously increase knowledge of EVs by transmitting relevant information through interpersonal media.
PBC also has a significant effect on intention to adopt EVs, so the perceived ease of, and obstacles to, EV adoption are essential considerations. Bamberg (2003) revealed that the belief that an individual can control a situation has a large impact on behavioral intention. Thus, perceived difficulty in adopting EVs acts as a barrier to adoption, and a strategy is required for resolving the economic and situational factors that impede EV adoption, such as providing financial benefits (e.g., incentives) and purchase subsidies to lower purchasing barriers, or providing information about EVs to reduce uncertainty. To this end, an EV company could operate an EV consultation center to provide EV knowledge.
A positive attitude toward EV adoption has a significant effect on adoption intention. Therefore, marketing strategies related to the specific PICs that are important in forming a positive attitude toward EV adoption are needed. A positive attitude is formed when respondents perceive that adopting an EV is not much different from adopting an existing vehicle or when EVs satisfy their desire for a car. The more people perceive EVs as having a higher relative advantage in economic, technical, and environmental terms compared with existing vehicles, and the more they perceive EVs as observable by others, the more positive an attitude they form toward EV adoption. In contrast, the perceived complexity and trialability of EVs do not significantly affect attitude, with respondents indicating that using an EV is not difficult or complicated compared with existing vehicles. However, trialability has the highest mean (3.9) among the five PICs, implying that respondents consider the possibility of test-driving EVs important. On the basis of these findings, innovative products such as EVs require marketing strategies that focus on relative advantage, compatibility, and observability.
Creating early market strategies that target people with higher innovativeness (early adopters) can help promote EV adoption, because early adopters have a more positive attitude toward EV adoption and a higher perception of EV compatibility. After early adopters have spread EVs in the market to some extent, a strategy that emphasizes the relative advantages of EVs over existing vehicles may be more appropriate. In that case, the emphasis of marketing strategies should be placed on economic aspects such as fuel economy and tax benefits, technical aspects such as less noise and higher energy efficiency, and environmental aspects such as less harm to the environment. The results also demonstrate that, for younger respondents, the relationship between attitude and intention is enhanced, while for older respondents, the relationship between social norm and intention is enhanced. Based on these findings, it is desirable to appeal to the older age group through advertising and the provision of information via social community networks, while for the younger age group it is necessary to focus more on the product innovation characteristics that can positively influence their evaluation of EV adoption.
Further Research
Finally, further research directions are presented. To increase the explanatory and predictive power of the EV adoption model, additional variables other than PICs, such as consumer characteristics (e.g., involvement) and specific product attributes (e.g., price, brand, and vehicle options), should be considered. A comparative study of the characteristics of EV adopters and non-adopters should also be conducted.

COMPLEX2 Prior to driving an electric car, I would be required to take a special course.
COMPLEX3 It is hard to lend an electric car, as it is very complicated.
COMPLEX4 The concept behind an electric car is difficult for me to understand.
OBS1
By using an electric car, I show that I care about the environment.
OBS2
If I use an electric car, it would be noticed by people close to me.
OBS3
An electric car stands out visibly.
TRIA1
Prior to buying an electric car, it would be important to test-drive it.
TRIA2
Prior to buying an electric car, I would like to borrow it for a day or two.
TRIA3
Prior to buying an electric car, I would like to try a friend's car.
ATT1
For me, using an electric car is favorable.
ATT2
For me, using an electric car is desirable.
ATT3
For me, using an electric car is pleasant.
ATT4
For me, using an electric car is positive.
Subjective norm
SN1
Most people who are important to me think I should buy an electric car in the near future.
SN2
If I purchase an electric car, then most people who are important to me would also buy an electric car.
SN3
People whose opinions I value would prefer that I buy an electric car in the near future.
Perceived behavioral control

PBC1 Whether or not to purchase an electric car is completely up to me.
PBC2
I am confident that if I want I can buy an electric car.
PBC3
I have resources, time, and opportunities to buy an electric car.
Intention to adopt EV

INT1 I am willing to buy an electric car in the near future.
INT2
I intend to buy an electric car in the near future.
INT3
I plan to buy an electric car in the near future.
Copyrights
Copyright for this article is retained by the author(s), with first publication rights granted to the journal.
This is an open-access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/).
|
v3-fos-license
|
2021-10-24T15:13:47.261Z
|
2021-10-21T00:00:00.000
|
239527211
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://downloads.hindawi.com/journals/wcmc/2021/1609612.pdf",
"pdf_hash": "ecbbce56193a7f289c38a27567d0b9784dacd60d",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45973",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "d989d35f3017eed1ccc1c574aa15cd5ed05a16a1",
"year": 2021
}
|
pes2o/s2orc
|
The Mobile Water Quality Monitoring System Based on Low-Power Wide Area Network and Unmanned Surface Vehicle
The increasingly serious water pollution problem makes efficient, information-based water quality monitoring equipment particularly important. To cover the shortcomings of existing water quality monitoring methods, a mobile water quality monitoring system based on LoRa communication and a USV was designed in this paper. In this system, the USV carrying water quality sensors is used as a platform. Firstly, the LoRa network is used to monitor water quality over a large area. Secondly, the unmanned surface vehicle keeps the position error within ±20 m and the velocity error within ±1 m/s based on the Kalman filter algorithm. Thirdly, a genetic algorithm based on improved crossover operators is used to determine the optimal operational path, which effectively improves the iterative efficiency of the classical genetic algorithm and avoids falling into local convergence. In an actual water surface test, the packet loss probability within a working range of 1.5 km was below 10%, and the USV could accurately navigate along the preset optimal path. The test results proved that the system has a relatively large working range and high efficiency. This study is highly significant for water pollution prevention and ecological protection.
1. Introduction
Today, the water quality of coastal waters and inland lakes is deteriorating under the influence of increasing human social and economic activities. The degraded water environment has caused irreparable losses to human health, production, and daily life, so the protection of the aquatic ecological environment demands immediate attention [1].
The collection and monitoring of water-area data play an important part in protection and management decision-making for the water ecosystem. In spite of existing research achievements, several problems in the field of water quality monitoring still need to be tackled.
(i) Monitoring scope: for large natural reserves of hundreds or even thousands of square kilometers, equipping the whole area with fixed monitoring and sensing devices incurs prohibitive costs.
(ii) Remote location of the monitored area: dedicated communication infrastructure must be erected for monitoring devices in such areas.
(iii) Daily maintenance of devices: traditional monitoring devices need to be inspected and maintained manually, one by one, incurring high labor costs [2]. To address these issues, this paper investigates efficient water quality monitoring methods.
2. Related Works
There are three main approaches to current water quality monitoring: labor-intensive manual sampling, construction of fixed monitoring stations in the monitored waters, and mobile monitoring by autonomous robots such as unmanned boats. Many current methods are based on the second approach. For example, Ruan and Tang combined solar charging and wireless sensor network technology to design an energy-saving, low-carbon water quality monitoring system [3]. Nam et al. designed a wireless sensor network system based on CDMA (Code Division Multiple Access) and ZigBee technology for monitoring the water environment inhabited by coastal fish [4]. Wiebke et al. [5] deployed a wireless sensor network in coastal fishing grounds using low-cost compact buoys equipped with water quality sensors to analyze the impact of water quality parameters on aquaculture. These methods share a significant drawback: they can only monitor fixed locations and still require manual repositioning of the nodes if the monitoring target changes. In response, many studies have applied unmanned vessels to water quality monitoring. Cao et al. designed a 4G-based water quality monitoring unmanned boat with automatic navigation, which can navigate to preset monitoring points and collect water quality information there [6]. Siyang and Kerdcharoen realized the upload and storage of water quality monitoring data from an unmanned boat over a ZigBee network, with an effective monitoring distance of about 300 m [7]. Yang et al. in Taiwan designed a double-hulled water quality monitoring unmanned boat, which is stable and able to take water samples in addition to collecting water quality data. Bălănescu et al. combined blockchain technology with unmanned-boat water quality monitoring, providing a highly secure scheme for data collection, transmission, and management [8]. However, these systems are limited either by the shortcomings of ZigBee or WiFi, such as short communication distance and weak anti-interference ability, or by the high power consumption and additional subscription fees of 4G or GPRS.
To cope with these problems, the emerging LoRa technology, which offers low cost, long communication distance, and strong endurance, was introduced in this paper [9]. Further, a USV can navigate autonomously on the water surface, greatly enhancing the efficiency of surface operations [10]. In this paper, a mobile water quality monitoring system based on USV and LoRa was designed to effectively address the shortcomings of existing methods. Featuring good energy-saving performance, low cost, and a wide monitoring range, this system can be used by any environmental protection agency or individual to collect and monitor data in a target water area, presenting important research significance for the prevention and treatment of water pollution and the construction of a healthy aquatic ecology.
3. Architecture of the Mobile Water Quality Monitoring System
The mobile water quality monitoring system proposed in this paper integrates path planning, autonomous navigation, real-time water quality monitoring, and remote monitoring. The system is composed of three parts: the shipboard system, the LoRa gateway, and the monitoring terminal. The system architecture is shown in Figure 1. First, the shipboard system mainly consists of the positioning and navigation system, the power system, the LoRa communication system, and the main control board. It obtains GPS positioning coordinates and reads the heading and speed information of the unmanned surface vessel. The shipboard system exchanges data and commands with the shore-based monitoring platform through the LoRa network and exchanges control commands with the water quality monitoring system through an RS232 interface. According to the control commands sent by the shore-based monitoring platform, the shipboard system performs autonomous cruising or remote-controlled actions and directs the water quality monitoring system to carry out water quality testing.
Second, the water quality detection system consists of a main control board and four water quality sensors measuring temperature, turbidity, pH, and conductivity. It collects water body information on a regular basis according to the instructions sent by the shipboard system.
Third, the shore-based platform is developed on the Windows operating system and includes monitoring software and data processing algorithms. The host computer centers on the human-computer interaction interface and completes LoRa communication, algorithm invocation, map display, and the sending, receiving, and display of data. The algorithm application calls the MATLAB algorithms from C++ and sends the computed operation path coordinates to the unmanned ship, thereby realizing multipoint-monitoring path planning.
The USV is designed as a catamaran with a length of 0.8 m, a width of 0.68 m, and a maximum speed of 8 km/h. Structured with two parallel hulls of equal size, the catamaran has a wider beam and a shallower draft than a monohull, making the course of the entire ship more stable [11]. The structure of the unmanned surface vessel is shown in Figure 2.
Two DC motors serve as the kinetic drive of the USV and, coordinated with the motor driver, allow the hull steering to be adjusted through differential control. The USV is equipped with the positioning module ATK1218-BD and the attitude detection module MPU9250, and a loosely coupled integrated navigation algorithm was introduced to calculate the hull attitude and position. To meet the basic monitoring requirements, temperature, pH, turbidity, and conductivity were selected as the monitoring elements in this paper. The E-201C pH/water-temperature composite sensor, the TSW-30 turbidity sensor, and the DJS-12 conductivity electrode are employed and connected in parallel via the RS485 communication protocol; these sensors can be replaced and reconfigured according to actual conditions. An STM32F103ZET6 acts as the central controller to complete data collection and calculation.
The LoRa communication technology used on the USV can effectively expand the monitoring scope and save costs. In this paper, close attention was paid to the design of the ship-shore LoRa communication system, and the path planning algorithm of the USV was improved. In addition, ONENET, China Mobile's IoT development platform, was used as the monitoring terminal of this system to realize device monitoring and configuration, real-time data monitoring, data storage, and other functions [12].
4. LoRa Communication System
4.1. Shipboard LoRa Nodes
4.1.1. Communication Node Hardware. This system uses a LoRa radio frequency module to receive and transmit the water quality data. A program running on the MCU controls the LoRa module and sends the encoded information through it. An integrated +20 dBm power amplifier ensures that long-distance wireless communication remains available at a receiver sensitivity as low as -148 dBm. The circuit schematic diagram of the LoRa communication module is shown in Figure 3.
4.1.2. Communication Node Software. In this study, the control program of the STM32 single-chip microcomputer was designed in the Keil integrated development environment and, after compilation, simulation, and debugging, was programmed to the MCU. The program implements functions such as data collection, signal conversion, and data uploading. The flow of the node program is shown in Figure 4. After the program starts, it proceeds through the following steps: first, initialize the device and connect the LoRa module to the network; second, enter the main program and wait for the timer to trigger the detection task; third, on trigger, collect the water body data; fourth, carry out MCU data processing and ADC conversion; fifth, store the processed data in the buffer to await sending; sixth, after sending, enter the sleep state and wait for the timer to trigger the detection command again.
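For illustration, the six-step node workflow above maps naturally onto a simple firmware superloop. The C++ sketch below is not the authors' firmware: all hardware-facing calls (lora_join, timer_fired, read_sensors, adc_convert, buffer_and_send, enter_sleep) are hypothetical placeholders for the STM32 peripheral and LoRa driver routines, stubbed out here so the skeleton compiles and runs on a desktop.

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical hardware stubs; real firmware would call the STM32
// peripheral drivers and the LoRa module's serial command set instead.
static bool lora_join() { return true; }                          // step 1: join the LoRa network
static bool timer_fired() { return true; }                        // step 2: detection-task trigger
static void read_sensors(float raw[4]) {                          // step 3: temp, turbidity, pH, conductivity
    raw[0] = 18.2f; raw[1] = 3.1f; raw[2] = 7.4f; raw[3] = 412.0f;
}
static void adc_convert(const float raw[4], uint16_t out[4]) {    // step 4: scale/convert readings
    for (int i = 0; i < 4; ++i) out[i] = static_cast<uint16_t>(raw[i] * 10.0f);
}
static void buffer_and_send(const uint16_t d[4]) {                // step 5: buffer, then transmit
    std::printf("tx: %u %u %u %u\n", d[0], d[1], d[2], d[3]);
}
static void enter_sleep() {}                                      // step 6: low-power wait

int main() {
    while (!lora_join()) {}                       // retry until the node is on the network
    for (int cycle = 0; cycle < 3; ++cycle) {     // superloop (endless on the real MCU)
        if (timer_fired()) {
            float raw[4]; uint16_t conv[4];
            read_sensors(raw);
            adc_convert(raw, conv);
            buffer_and_send(conv);
        }
        enter_sleep();
    }
}
```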
4.2. Shore-Based LoRa Gateway
4.2.1. Gateway Hardware. The embedded gateway designed in this study is composed of an industrial-control core board, a LoRa gateway module, and a 4G LTE mobile data board, as shown in Figure 5.
In order to cope with large-scale distributed monitoring scenarios, it is necessary to extend the downlink channels of the gateway. A symmetrical channel processor allows multiple SX1278 RF chips to coexist through combiners, each chip providing one communication channel. The module is connected to the serial port of the LoRa gateway, and the application program controls the frequency band, power, spreading factor, and other RF parameters of each channel through the serial port, fixing all module channels as downlink channels. This makes up for the lack of downlink channels in a single-channel gateway.
4.2.2. Gateway Software. The workflow of the gateway in this study is shown in Figure 6. After the gateway program starts, it proceeds through the following steps: first, initialize the gateway device and check whether initialization succeeds; second, the gateway enters listening mode; third, determine whether a send request has been received; fourth, on a successful send request, receive the packet, otherwise continue listening; fifth, parse the packet and determine whether parsing succeeds; sixth, if parsing succeeds, forward the data, otherwise return to listening.
Ship-Shore Interaction LoRa Communication Protocol.
The parsing of the communication protocol by the onboard system is mainly implemented by a finite state machine with three preset states: stop, autonomous, and remote control. When the shipboard system receives a protocol frame from the shore-based monitoring platform, it parses the operating-mode field in the protocol, and the finite state machine then processes the data in the frame according to that state. In remote control mode, the speed and heading information in the data frame is parsed and the remote control procedure is executed. In autonomous mode, the GPS coordinates of the target point in the data frame are resolved; the heading is then calculated from the current GPS coordinates, and navigation to the target point is performed. In stop mode, the operation is terminated. During operation, the unmanned surface vessel regularly collects water body data and navigation status and uploads them to the user monitoring platform, which receives, parses, displays, and stores the data packets. This protocol frame contains 64 bytes, composed of the data frame header, data packet length, device type, target ID number, operating mode, GPS data, pose data, speed data, water quality sensor data, reserved bits, XOR check bit, and data frame tail.
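To make the frame layout and the mode dispatch concrete, here is one possible C++ rendering. The field order follows the list above, but the individual field widths and the numeric mode codes are assumptions (the paper only fixes the 64-byte total and the three state names), so treat this as a sketch rather than the authors' specification.

```cpp
#include <cstdint>

// One possible layout of the 64-byte uplink frame. The field order follows
// the paper; the individual widths are assumptions chosen to total 64 bytes.
#pragma pack(push, 1)
struct UplinkFrame {
    uint16_t header;           // data frame header
    uint8_t  length;           // data packet length
    uint8_t  deviceType;       // device type
    uint8_t  targetId;         // target ID number
    uint8_t  mode;             // operating mode (numeric codes assumed below)
    double   gpsLat, gpsLon;   // GPS data
    float    roll, pitch, yaw; // pose data
    float    speed;            // speed data
    uint16_t water[4];         // temperature, turbidity, pH, conductivity
    uint8_t  reserved[15];     // reserved bits
    uint8_t  xorCheck;         // XOR check bit
    uint16_t tail;             // data frame tail
};
#pragma pack(pop)
static_assert(sizeof(UplinkFrame) == 64, "frame must be exactly 64 bytes");

enum class Mode : uint8_t { Stop = 0, Autonomous = 1, RemoteControl = 2 };

// Finite-state dispatch on the operating-mode field, mirroring the text.
void dispatch(const UplinkFrame& f) {
    switch (static_cast<Mode>(f.mode)) {
    case Mode::RemoteControl: /* parse speed/heading, run remote control */  break;
    case Mode::Autonomous:    /* resolve target GPS point, navigate to it */ break;
    case Mode::Stop:          /* terminate the current operation */          break;
    }
}

int main() { UplinkFrame f{}; f.mode = 1; dispatch(f); }
```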
Protocol Frame Sent by Monitoring Terminal.
When the monitoring terminal sends data to the USV node, the communication follows the protocol frame format shown in Figure 8.
The protocol frame contains 21-28 bytes, composed of data frame header, data packet length, device type, target ID number, operating mode, direction and speed data, target GPS position, reserved bit, sensor switch bit, XOR check bit, and data frame tail.
The protocol frame has a variable length. When remote control is selected as the working mode, the direction and speed data are transmitted in the data field; when the automatic working mode is selected, the target GPS location data are transmitted instead.
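Both frame formats carry an XOR check bit. The paper does not state which bytes the check covers; the sketch below assumes the common convention of XOR-ing every byte between the frame header and the check bit itself.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>

// XOR check over a byte span. The coverage (header..before check bit) is an
// assumption; the paper does not specify which bytes are included.
uint8_t xorCheck(const uint8_t* bytes, size_t len) {
    uint8_t x = 0;
    for (size_t i = 0; i < len; ++i) x ^= bytes[i];
    return x;
}

int main() {
    uint8_t frame[] = {0xAA, 0x55, 0x1C, 0x02, 0x01}; // arbitrary payload bytes
    std::printf("check = 0x%02X\n", xorCheck(frame, sizeof frame));
}
```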
5. Integrated Navigation Method for USV
For water quality monitoring cruise operations, the navigation algorithm is crucial. GPS navigation is vulnerable to interference from the external environment and limited by its low update frequency, so it cannot meet the demand for continuous and stable positioning. The inertial navigation method calculates position and velocity by directly integrating the measurements of the inertial module, so accumulated errors arise and grow over time. To address these problems, we use an improved integrated navigation method based on the Kalman filter to fuse GPS and inertial data, which overcomes the drift of inertial sensors over time. The inertial sensors provide the acceleration and angular velocity of the ship, from which its speed and position can be calculated using the laws of physics. The Kalman filtering algorithm reduces the effect of cumulative errors while keeping the implementation complexity low. This improved integrated navigation method has a simple structure and a small computational load, which makes it well suited to low-cost unmanned ship navigation. Its schematic block diagram is shown in Figure 9.
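For reference, the discrete predict/update recursion that such a loosely coupled filter executes at each GPS epoch is the textbook Kalman form below. The paper does not print these equations, so this is standard notation ($\Phi_k$ state transition matrix, $Q_k$ and $R_k$ process and measurement noise covariances), not a reconstruction of the authors' exact implementation:

$$
\begin{aligned}
\hat{X}_{k|k-1} &= \Phi_k \hat{X}_{k-1}, & P_{k|k-1} &= \Phi_k P_{k-1} \Phi_k^{T} + Q_k,\\
K_k &= P_{k|k-1} H_k^{T}\left(H_k P_{k|k-1} H_k^{T} + R_k\right)^{-1},\\
\hat{X}_k &= \hat{X}_{k|k-1} + K_k\left(Z_k - H_k \hat{X}_{k|k-1}\right), & P_k &= \left(I - K_k H_k\right) P_{k|k-1}.
\end{aligned}
$$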
Neglecting the velocity error and position error in the sky (vertical) direction, the state vector of the integrated navigation system is

X = [φ_E, φ_N, φ_U, ∂V_E, ∂V_N, ∂L, ∂λ, ε_bx, ε_by, ε_bz, Δ_x, Δ_y, Δ_z]^T

where φ_E, φ_N, φ_U denote the platform angle errors in the east, north, and sky directions, respectively; ∂V_E, ∂V_N denote the velocity errors in the east and north directions, respectively; ∂L, ∂λ denote the latitude and longitude errors, respectively; ε_bx, ε_by, ε_bz denote the constant drifts of the gyroscope in the east, north, and sky directions, respectively; and Δ_x, Δ_y, Δ_z denote the constant drifts of the accelerometer in the east, north, and sky directions, respectively.
The system noise vector is

W = [ω_εE, ω_εN, ω_εU, ω_aE, ω_aN, 0, 0, 0, 0, 0, 0, 0, 0]^T

where ω_εE, ω_εN, ω_εU denote the random drifts of the gyroscope in the east, north, and sky directions, respectively, and ω_aE, ω_aN denote the random drifts of the accelerometer in the east and north directions, respectively. The gyroscope and accelerometer drifts are assumed to obey zero-mean Gaussian distributions. Considering that in practice the unmanned ship works in the horizontal plane, the state dynamics involve V_E, V_N, the speed of the unmanned ship in the east and north directions, respectively; ω_ie, the angular velocity of the earth's rotation; and f_E, f_N, f_U, the specific forces sensed by the accelerometer in the east, north, and sky directions, respectively. Combining the above equations, the system state equation can be written as

dX̂(t)/dt = F(t)X(t) + W(t)

where X̂(t) indicates the estimated state, F(t) the state matrix, X(t) the current state, and W(t) the noise vector. In the navigation system, there are two groups of observations: one group is the position observation, the difference between the latitude and longitude given by the inertial navigation system and the corresponding position given by the GPS receiver; the other is the difference between the velocities in each direction given by the two systems.
The position measurement of the INS can be expressed as the sum of the true value and an error term, and similarly the position given by GPS can be expressed as the sum of the true value and its own error. Subtracting the two yields the position observation equation

Z_p(t) = [L_INS - L_GPS, λ_INS - λ_GPS]^T = H_p X(t) + V_p(t)

where H_p is the position measurement matrix and V_p(t) the GPS position noise. Similarly, the velocity observation equation is

Z_v(t) = [V_E,INS - V_E,GPS, V_N,INS - V_N,GPS]^T = H_v X(t) + V_v(t)

where H_v is the velocity measurement matrix and V_v(t) the GPS velocity noise.
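As a concrete illustration of the loosely coupled correction, the following deliberately reduced C++ sketch fuses a drifting one-axis INS position with GPS fixes using a two-state (position error, velocity error) filter. It is a toy version of the 13-state filter above, and all tuning values are invented.

```cpp
#include <cstdio>

// One-axis, two-state (position error, velocity error) Kalman corrector:
// a reduced sketch of loosely coupled GPS/INS fusion, not the 13-state
// filter of the paper. H = [1 0]: we observe INS position minus GPS position.
struct Kalman2 {
    double x[2] = {0, 0};                        // state: [dPos, dVel]
    double P[2][2] = {{100, 0}, {0, 10}};        // state covariance (invented)
    double q = 0.01, rPos = 100.0;               // process / measurement noise

    void predict(double dt) {
        // F = [[1, dt], [0, 1]]: position error integrates velocity error.
        x[0] += dt * x[1];
        double p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q;
        double p01 = P[0][1] + dt * P[1][1];
        double p10 = P[1][0] + dt * P[1][1];
        P[0][0] = p00; P[0][1] = p01; P[1][0] = p10; P[1][1] += q;
    }
    void update(double insPos, double gpsPos) {
        double z = insPos - gpsPos;              // observation: INS minus GPS
        double s = P[0][0] + rPos;               // innovation variance
        double k0 = P[0][0] / s, k1 = P[1][0] / s;
        double innov = z - x[0];
        x[0] += k0 * innov; x[1] += k1 * innov;
        double p00 = (1 - k0) * P[0][0], p01 = (1 - k0) * P[0][1];
        P[1][0] -= k1 * P[0][0]; P[1][1] -= k1 * P[0][1];
        P[0][0] = p00; P[0][1] = p01;
    }
};

int main() {
    Kalman2 kf;
    // Simulated drifting INS position vs. GPS fixes, 1 Hz for 10 s.
    for (int t = 1; t <= 10; ++t) {
        kf.predict(1.0);
        double insPos = 0.5 * t;                 // INS drifts at 0.5 m/s
        double gpsPos = 0.0;                     // true position is static
        kf.update(insPos, gpsPos);
        std::printf("t=%2d  est. pos error %.2f m, vel error %.2f m/s\n",
                    t, kf.x[0], kf.x[1]);
    }
}
```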
6. Study on USV Path Planning Based on Improved Genetic Algorithm
6.1. Problem Description. Path planning is the key technology for a mobile water quality monitoring system to work efficiently. The optional paths from the starting point to the end point in water quality monitoring are countless; this paper aims to find the optimal working path, with efficiency and energy saving as the indicators.
Based on the working characteristics of the USV traversing multiple monitoring points over the course of water quality monitoring, the water quality monitoring process was abstracted as the Traveling Salesman Problem (TSP) in this paper. As Figure 10 shows, the essence of the problem is to find a Hamiltonian loop with the smallest weight in a weighted complete undirected graph.
The classic TSP can be described as follows: given the number of target cities N and the distances between all pairs of cities, find the shortest path that traverses every city exactly once and returns to the starting city [13]. The problem can be described via the following mathematical model:

Len(T_p) = Σ_{l=1}^{N-1} d(T_p(l), T_p(l+1)) + d(T_p(N), T_p(1))

where N is the number of cities, D is the distance matrix, d_ij is the distance between two cities, Inf is a large enough positive number, i and j are the city numbers, Tour is the set of paths, T_p(l) is the lth city in the path, and Len(T_p) is the total length of the path T_p. The optimal solution of the TSP is the path that minimizes Len(T_p). The cost matrix used in the examples below (Table 1) is:

Node  1    2    3    4    5    6    7
1     Inf  29   82   46   68   52   15
2     29   Inf  55   46   42   43   43
3     82   55   Inf  68   63   20   23
4     46   46   68   Inf  82   15   72
5     68   42   63   82   Inf  74   23
6     52   43   20   15   74   Inf  61
7     15   43   23   72   23   61   Inf

(the entry between nodes 7 and 6 is unreadable in the source and is taken as 61 by symmetry). To solve this problem, an improved genetic algorithm was introduced in this paper.
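As a sanity check of the model, the tour length Len(T_p) for the Table 1 matrix can be computed directly. The sketch below is illustrative only: it uses the matrix above (with the assumed d(7,6) = 61 entry) and an arbitrary example tour.

```cpp
#include <cstdio>
#include <vector>

const int INF = 1 << 20;                 // the "large enough positive number"
const int D[7][7] = {                    // Table 1; d(7,6) = 61 assumed by symmetry
    {INF, 29, 82, 46, 68, 52, 15}, { 29, INF, 55, 46, 42, 43, 43},
    { 82, 55, INF, 68, 63, 20, 23}, { 46, 46, 68, INF, 82, 15, 72},
    { 68, 42, 63, 82, INF, 74, 23}, { 52, 43, 20, 15, 74, INF, 61},
    { 15, 43, 23, 72, 23, 61, INF}};

// Len(T_p): edge costs along the tour plus the closing leg back to the start.
int tourLength(const std::vector<int>& tour) {
    int len = 0;
    for (size_t l = 0; l + 1 < tour.size(); ++l) len += D[tour[l]][tour[l + 1]];
    return len + D[tour.back()][tour.front()];
}

int main() {
    std::vector<int> tour = {0, 6, 4, 1, 5, 2, 3};    // arbitrary 0-based tour
    std::printf("Len(T_p) = %d\n", tourLength(tour)); // 15+23+42+43+20+68+46 = 257
}
```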
6.2. Basic Genetic Algorithm Principle. In this paper, the basic genetic algorithm is improved in order to calculate the optimal path of the unmanned ship. The basic genetic algorithm is an iterative algorithm: an intelligent computational model that simulates the process of biological evolution in nature. Compared with traditional algorithms, the genetic algorithm is not only self-learning, self-organizing, and self-adaptive but also highly robust and widely applicable, and it can effectively handle complex problems that are difficult for traditional optimization algorithms, without restrictions on the nature of the problem.
The main operators that manipulate chromosomes in a genetic algorithm are the population initialization operator, the selection operator, the crossover operator, and the mutation operator. The population initialization operator prepares the feasible solutions for the iterative process of the algorithm, mainly transforming the feasible solutions of the problem into corresponding chromosomes based on the coding scheme. The purpose of the selection operator is to retain well-adapted individuals: based on an evaluation of individual fitness, and according to a specific method, it selects a high-fitness part of the current population in preparation for generating the new generation. The crossover operator is the key operator that lets genetic algorithms search for new solutions over a very wide space in order to obtain new and better individuals; with a certain probability, it replaces and recombines some genes of the chromosomes of the selected individuals to form new individuals. The mutation operator maintains the diversity of individuals in the population, thereby suppressing, to some extent, premature convergence of the genetic algorithm.
The basic execution steps of the genetic algorithm are as follows:
(Step 1) Population initialization. First, set the control parameters of the genetic algorithm, and then establish the first-generation population using the population initialization operator.
(Step 2) Calculate individual fitness. Construct the corresponding fitness function, and calculate the fitness of all individuals in the population according to the problem being addressed.
(Step 3) Determine whether to terminate the iteration. While the number of evolutionary generations is less than the maximum number of iterations, Step 4 is executed; otherwise, the iteration terminates and the optimal individuals in the population are output.
(Step 4) Execute the selection operation. Use the selection operator to select some individuals from the population.
(Step 5) Execute the crossover operation. Compare against the crossover probability, and perform the update operation on the selected individuals using the crossover operator.
(Step 6) Execute the mutation operation. Compare against the mutation probability, perform the mutation operation on the selected individuals to obtain a new generation of the population, update the evolutionary generation, and return to Step 2.
In order to let the offspring obtained by crossover escape local optima while retaining the excellent genes of the parents, this paper proposes an improved triple crossover operator, which divides each parent chromosome into three segments and then compares and recombines them to obtain better individuals.
6.3. Improved Genetic Algorithm. The high efficiency of the basic genetic algorithm is mainly attributed to its operators: replication, crossover, and mutation [14]. However, the basic genetic algorithm still suffers from such shortcomings as poor local search ability and slow convergence [15]. In this paper, the iterative efficiency of the genetic algorithm in USV path planning was raised by improving the multipoint crossover operator.
During population initialization, each chromosome is described as a series of nodes, each node being a monitoring target point. If there are seven monitoring target points in the job, the length of the chromosome is set to 7, as shown in Figure 11. The cost matrix is used to calculate the distance cost between two monitoring target points.
The fitness function of the basic genetic algorithm was retained:

Fit(x) = 1 / f(x)

where f(x) is the objective function of the monitoring path cost described by the chromosome. A smaller objective function value leads to a larger fitness value and is closer to the desired optimal solution.
The new solution space is produced by the crossover operation. First, the parents are selected randomly; then position 3 and position 7 are set as the crossover points, and each parent chromosome is cut into three segments, named the first segment, the middle segment, and the tail segment. To determine the best chromosome parts of the parents, these segments are evaluated and compared, and a relatively good offspring is finally acquired. The flow chart of the entire improved crossover process is shown in Figure 12.
(Step 1) Select p1 and p2 from the parent generation randomly, and measure their costs. The cost matrix is shown in Table 1, and the randomly generated chromosomes are shown in Figure 13.
(Step 2) Select the first segment of p1, then compare the middle segments of p1 and p2 and take the group with the lower cost (for example, of 5-7-6 and 6-1-3, the former, which had a lower cost, was selected) as the middle section of offspring 1. Select the tail segment from p2 to fill the chromosome to size. Then create offspring 2 in the same way: select the first segment of p2, the cheaper middle segment of p1 and p2, and finally the last segment of p1. The offspring changes are shown in Figure 14.
(Step 3) Delete the duplicate nodes in the offspring. As shown in Figure 15, delete node 3 of offspring 1 and node 1 of offspring 2.
(Step 4) Search for the missing nodes. To insert the missing node 1 into offspring 1, compare placing it next to the first and the last node, and choose the position with the smaller cost. For example, the cost of 1-4 was 46 and the cost of 2-1 was 29, so node 1 was inserted after node 2; the same steps were applied to offspring 2. The resulting offspring are shown in Figure 16.
The pseudocode of the algorithm is shown in Algorithm 1, where N is the number of individuals in the population, G is the number of iterations, and F(t) is the fitness value.

Algorithm 1: Genetic algorithm with improved crossover operator.
Input: N, G, F(t)
Output: the chromosome with the best fitness value
1: Initialize the population
2: t ← 0
3: while t <= G do
4: Calculate fitness F(t)
6: end for
7: for i = 0 → N do
8: Selection operations
9: end for
10: for i = 0 → N/2 do
11: Get parent chromosomes p1, p2
12: Select the crossover point and cut the parent chromosome into three segments
13: Select the first segment of p1 and the last segment of p2, and insert them into offspring 1
14: Select the first segment of p2 and the last segment of p1, and insert them into offspring 2
15: Calculate ...
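A compact C++ sketch of the three-segment crossover (Steps 1-4) is given below. It reuses the Table 1 matrix; the cut positions and the end-of-tour insertion rule for repaired nodes are simplifications of what the paper's figures show, so treat it as an illustration rather than the authors' exact operator.

```cpp
#include <cstdio>
#include <vector>

// Cost matrix of Table 1 (0-based indices; D[6][5] = 61 assumed by symmetry).
const int INF = 1 << 20;
const int D[7][7] = {
    {INF, 29, 82, 46, 68, 52, 15}, { 29, INF, 55, 46, 42, 43, 43},
    { 82, 55, INF, 68, 63, 20, 23}, { 46, 46, 68, INF, 82, 15, 72},
    { 68, 42, 63, 82, INF, 74, 23}, { 52, 43, 20, 15, 74, INF, 61},
    { 15, 43, 23, 72, 23, 61, INF}};

// Cost of the edges inside chromosome segment c[from..to-1].
int segCost(const std::vector<int>& c, int from, int to) {
    int s = 0;
    for (int i = from; i + 1 < to; ++i) s += D[c[i]][c[i + 1]];
    return s;
}

// Improved three-segment crossover: first segment of p1, cheaper middle
// segment of p1/p2, tail of p2, then duplicate removal and node repair.
std::vector<int> cross(const std::vector<int>& p1, const std::vector<int>& p2,
                       int a, int b) {
    std::vector<int> raw(p1.begin(), p1.begin() + a);
    const std::vector<int>& mid = segCost(p1, a, b) <= segCost(p2, a, b) ? p1 : p2;
    raw.insert(raw.end(), mid.begin() + a, mid.begin() + b);
    raw.insert(raw.end(), p2.begin() + b, p2.end());

    std::vector<bool> seen(7, false);          // Step 3: drop duplicate nodes
    std::vector<int> child;
    for (int g : raw) if (!seen[g]) { seen[g] = true; child.push_back(g); }

    for (int g = 0; g < 7; ++g)                // Step 4: re-insert missing nodes
        if (!seen[g]) {
            if (D[g][child.front()] <= D[child.back()][g])
                child.insert(child.begin(), g);   // cheaper next to the head
            else
                child.push_back(g);               // cheaper next to the tail
            seen[g] = true;
        }
    return child;
}

int main() {
    std::vector<int> p1 = {0, 4, 6, 5, 1, 2, 3}, p2 = {3, 5, 0, 2, 6, 1, 4};
    std::vector<int> c = cross(p1, p2, 2, 5);     // cut points chosen arbitrarily
    for (int g : c) std::printf("%d ", g + 1);    // print 1-based node ids
    std::printf("\n");
}
```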
7. Experimental Results and Analysis
In this chapter, we conducted four sets of experiments. First, we ran computer simulations of the integrated navigation algorithm and the path planning algorithm proposed above; second, we varied the communication distance and tested the LoRa communication quality; third, we used the unmanned surface vessel for actual surface operations and compared the actual path with the expected path; fourth, we collected a set of water quality data at the target points during actual operation and analyzed the results.
To verify the integrated navigation system designed above, a simulation study was carried out and the simulation results were examined against actual data. Assume that the first-order Markov drift of the gyroscope is 30°/h, the first-order Markov zero bias of the accelerometer is 0.5 mg, the root mean square value of the GPS position white noise is 10 m, the root mean square value of the GPS velocity white noise is 0.2 m/s, the root mean square value of the random drift of the gyroscope is 30°/h, the root mean square value of the random zero bias of the accelerometer is 0.5 mg, and the simulation time is 500 s. Since the USV works on the water surface, the height channel is not simulated. As shown in Figures 17 and 18, in the presence of noise and zero drift, the GPS correction of the inertial navigation through the Kalman filter is evident, and the velocity and position errors can be stabilised within ±1 m/s and ±20 m and remain stable. The feasibility of the optimised combination is thus verified and can guide the corresponding program design; therefore, the simplified GPS/INS combination can be used on unmanned ships. To theoretically verify the effectiveness of the proposed algorithm on the TSP model of water quality monitoring operations, we chose the classical genetic algorithm for comparison [16]. The USV passes 50 points in a 70 × 70 simulated water area and then returns to the initial point to form a closed loop. The parameters of the algorithm are the population size M, the number of iterations at termination Tm, the crossover probability Crossover, and the mutation probability Mutation.
As shown in Figures 19 and 20, the global path planning results and convergence curves of the two algorithms can be compared. The proposed algorithm quickly converges to the global minimum fitness value in Figure 20. The path calculated by the proposed algorithm is a closed loop with no crossing paths; fewer crossing points mean stronger traversal and less travel waste. Figure 20 and Table 2 show that the shortest path distance obtained by the proposed algorithm is 458, its calculation time is 9.2 s, and the number of iterations is 106. Although the calculation time of the classical genetic algorithm is similar, it reaches neither the minimum number of iterations nor the minimum distance. In short, the proposed algorithm is acceptable in computing time and has better computing performance than the classical algorithm.
Communication Quality Experiment.
The distance between the USV and the gateway was varied in 400 m steps to test how the data transmission quality of the LoRa communication link deteriorates with increasing distance [17]. The RSSI and the packet loss probability were therefore tested at different distances. The data size of each packet was 9 bytes with a CRC check, the test distance was 0-2 km, the transmitting power of the radio frequency module was 20 dBm, and the antenna gain was 3 dBi. For each test distance, 1000 data packets were sent and received, and the test was performed twice. The experiment results are shown in Figures 21 and 22.
From Figures 21 and 22, it can be seen that the RSSI of the shipboard LoRa node is greater than -81 dBm within 2 km, so the signal reception strength is reliable and stable. As for the packet loss rate, there is no packet loss within 400 m, the packet loss rate does not exceed 10% within 1600 m, and it is effectively controlled within 25% between 1600 and 2000 m. From the above analysis, it can be concluded that the water quality monitoring unmanned boat designed in this paper, based on the LoRa network, achieves a long-distance, reliable data transmission link within 2 km, fully enhancing the range and reliability of unmanned-boat water quality monitoring operations.
Surface Operation Experiment.
In order to verify the reliability of the unmanned surface vessel in actual operation, we set six target monitoring points in advance in Tianyin Lake at Nanjing Institute of Technology (their coordinates are shown in Table 3). The user monitoring platform calculates the optimal traversal sequence with the proposed path planning algorithm and sends it to the unmanned surface vessel. After receiving the coordinate set, the vessel traverses the points one by one and regularly reports its current coordinates. By fitting the coordinate points, the trajectory of the USV was obtained, as shown in Figure 23.
The USV uses the Kalman-filter-based integrated navigation algorithm and a PID heading control algorithm to visit all target detection points. When turning from the current coordinate toward the next one, the USV's trajectory shows a small deviation, but it is adjusted back to the correct route in time. In short, the unmanned surface vessel designed in this paper can complete the water quality monitoring operation along the preset path.
7.4. Water Quality Monitoring Results and Analysis. In January 2021, a field test was performed in a local lake. The field test environment is shown in Figure 24.
The USV was set to return its GPS coordinates at an interval of 1 s and collect water quality information when it arrived at the monitoring point. The water quality data collected at each monitoring point is shown in Table 3.
The experimental site is located in the nature reserve inside the school, and monitoring points 3 and 4 are located in the center of the lake. According to the water quality monitoring results in Table 3, the turbidity at monitoring points 3 and 4 is lower than at the other points, indicating that the water in the middle of the lake is clearer. The water quality at the other monitoring points, although slightly worse, is within the normal range, so the overall water quality of the lake is relatively good.
8. Conclusion
To address the deficiencies of existing water quality monitoring systems, a mobile water quality monitoring system based on LoRa communication and a USV has been designed in this paper. The USV in this system is equipped with LoRa nodes and water quality sensors, and the data are transmitted through the LoRa gateway on the shore, realizing real-time monitoring over large areas. Moreover, an improved genetic algorithm is introduced to plan the working path. The test results prove that the proposed mobile water quality monitoring system can complete autonomous cruise work over large areas and monitor the water quality at target points in real time. Compared with the traditional water quality monitoring mode, the proposed system saves costs and labor, enhances working efficiency, and greatly expands the monitoring scope.
In the future, we plan to equip the unmanned surface vessel with solar charging panels to improve endurance, and a more complete information-based monitoring platform will be designed around it.
Data Availability
The data included in this paper are available without any restriction.
The Theta Rhythm of the Hippocampus: From Neuronal and Circuit Mechanisms to Behavior
This review focuses on the neuronal and circuit mechanisms involved in the generation of the theta (θ) rhythm and on its participation in behavior. Data have accumulated indicating that θ arises from interactions between the medial septum-diagonal band of Broca (MS-DbB) and intra-hippocampal circuits. The intrinsic properties of MS-DbB and hippocampal neurons have also been shown to play a key role in θ generation. A growing number of studies suggest that θ may represent a timing mechanism to temporally organize movement sequences, memory encoding, or planned trajectories for spatial navigation. To accomplish those functions, θ and gamma (γ) oscillations interact during the awake state and REM sleep, which are considered critical for learning and memory processes. Further, we discuss how the loss of this interaction underlies various neuropathological conditions.
INTRODUCTION
The hippocampus is the main structure involved in the generation of the 4- to 12-Hz theta (θ) rhythm, one of the most regular EEG oscillations that can be recorded from the mammalian brain. Interest in θ has flourished from the abundant data indicating that the rhythm is linked with integrative processes critical for higher cognitive functions. Thus, neuronal spiking in widespread brain regions, including the somatosensory, entorhinal, and prefrontal cortices, is phase locked to hippocampal θ oscillations (Nuñez et al., 1991, 2006; Kocsis and Vertes, 1994; Hanada et al., 1999; Sirota et al., 2008).
A slow oscillatory activity with the properties of θ was first described in the hippocampus by Jung and Kornmüller (1938). However, the original detailed analysis of hippocampal θ was provided by Green and Arduini (1954), who showed that θ was associated with an irregular desynchronized activity in the cortex, whereas synchronized cortical activity was concurrent with irregular activity in the hippocampus. This link between hippocampal and cortical activities suggested a close relation of θ with attention, information processing, higher brain functions, and cognition, giving rise to a rapidly growing interest in θ. Indeed, more than 2,000 articles mentioning hippocampal θ have been published from 1950 to 2020, with more than 50 articles published in 2020 alone.
Although more than 150 reviews on hippocampal θ have been published, several aspects of the neuronal and circuit mechanisms involved in the generation of the rhythm and particularly its participation in behavior remain unknown or controversial. Nevertheless, new experimental data and modern interpretation of former results provide insight to those unresolved issues. Therefore, this review focuses on those debated basic mechanisms of θ genesis and of its relationships with behavior. However, it is of key importance to consider that ''after 50 years and hundreds of experiments, there is no widely accepted term that would unequivocally describe behavioral correlate(s) of hippocampal θ oscillation'' (Buzsaki, 2020). Moreover, the physiological inputs triggering the selective activation of hippocampal pyramidal neurons in natural conditions are undefined. Therefore, a large amount of future imaginative research will be required to unravel the functional correlation between behavior and hippocampal θ.
MEDIAL SEPTUM-DIAGONAL BAND OF BROCA AND THETA
A key contribution was the discovery by Petsche et al. (1962) and Stumpf et al. (1962) of the role of the medial septum (MS) in controlling the activity of the hippocampus and in generating θ. It has been shown that glutamatergic, cholinergic, and GABAergic MS and diagonal band of Broca (DbB) neurons project to the hippocampus (Monmaur and Thomson, 1983; Colom et al., 2005; Desikan et al., 2018; Unal et al., 2018). Indeed, evidence has accumulated suggesting that θ arises from interactions between MS-DbB and intra-hippocampal neuronal and circuit oscillators (McLennan and Miller, 1974; Stewart and Fox, 1989; Hangya et al., 2009; Huh et al., 2010; Müller and Remy, 2018). Strong support for this interpretation was provided by lesions of the MS-DbB that blocked hippocampal θ and rhythmic neuronal activity (Green and Arduini, 1954; Buño et al., 1978; Mitchell et al., 1982). However, the rules of innervation of the hippocampus by the diverse MS-DbB neurons are unknown, and the contribution of hippocampal neuronal populations and circuits to the genesis of θ is a matter of debate.
Paired recordings of MS and dorsal hippocampal neurons together with recordings of the hippocampal field activity in anesthetized rats suggested that rhythmic hippocampal neuronal activity and field θ depend upon the rhythmic activity of MS neurons (Roig et al., 1970). Importantly, field θ and rhythmic activity of hippocampal neurons were only present when MS neurons fired rhythmically. However, a group of MS neurons fired rhythmic bursts in the absence of hippocampal θ. Therefore, rhythmicity in the MS can be independent of hippocampal θ, but the hippocampal θ rhythm narrowly depends on MS neuron rhythmicity (Figures 1A,B). The rhythmic MS neurons that fire both with and without field θ could play a leading role in triggering the rhythmic activity of other MS neurons, leading to the generation of hippocampal θ. In addition to the MS-DbB neurons that fire rhythmic bursts, there are also non-rhythmic MS-DbB neurons (Gaztelu and Buño, 1982; Barrenechea et al., 1995; Pedemonte et al., 1998).
The analysis of septo-hippocampal pairs revealed three main behaviors: (i) when both neurons were rhythmic (θ-pairs), they fired bursts phase locked with the θ waves and showed rhythmic cross-correlation histograms; (ii) pairs with a rhythmic and a non-rhythmic neuron (mixed pairs) showed periodic cross-correlation histograms, and both neurons showed firing relationships with the field θ; and (iii) finally, non-rhythmic pairs were uncorrelated and did not show a relationship with field θ. Importantly, similar types of rhythmic and of non-rhythmic neurons with and without a relationship with θ were found in the hippocampus (see the ''Phase Reset of Theta'' section below).
Simultaneous intracellular recordings of MS-DbB neurons and of the hippocampal field activity in anesthetized rats revealed neurons that showed large continuous periodic membrane potential (Vm) oscillations and action potential bursts (i.e., intracellular θ; Pedemonte et al., 1998). These neurons displayed essentially identical activity both in the absence or in the presence of field θ, suggesting that they played a key role in hippocampal θ genesis. Other neurons that showed intracellular θ could sporadically fire high amplitude slow spikes and action potential bursts. A third group displayed lower amplitude intracellular θ and briefer bursts. These two last groups only showed rhythmicity during field θ.
Using a similar experimental design, Barrenechea et al. (1995) showed that neurons in the lateral septum could also display intracellular θ and fire in close relation to the field θ. Taken together, the above results suggest that hippocampal θ oscillations critically depend on the rhythmic neuronal activity of the MS-DbB. The functional role of the non-rhythmic septo-hippocampal neurons that fire in relation with the θ rhythm and the network mechanism that underlie this unexpected correlation are intriguing and remain to be determined. Accordingly, these neurons, although nonrhythmic, carry information about θ oscillations. Tentatively, the non-rhythmic neurons could evoke transient changes of the rhythm such as phase reset (see the ''Phase Reset of Theta'' section below).
The cholinergic effects on septo-hippocampal activity are behavior dependent, since selective optogenetic activation of cholinergic MS-DbB neurons triggered strong network effects during inactive behavioral states and weak effects during active behaviors in rats (Mamad et al., 2015). In addition, non-selective MS-DbB theta-burst electrical stimulation increased field θ synchronization and power, reset hippocampal rhythmic bursting neurons, entrained hippocampal place cells, and tuned their spatial properties (Figures 1D,E). Furthermore, optogenetic activation of cholinergic MS-DbB neurons at θ frequencies increased the power but caused poor entrainment of hippocampal θ, an effect that was behavior dependent since it was stronger under anesthesia than in awake mice (Vandecasteele et al., 2014). Taken together, the above findings support the idea that when cholinergic neurons are highly activated during active exploration, additional activation has a reduced effect on hippocampal oscillations.
Recent evidence obtained with optogenetic entrainment at θ frequencies of specific GABAergic MS neurons in behaving mice revealed a vital involvement of these neurons in the generation and entrainment of the hippocampal field θ and of rhythmic action potential bursting of hippocampal neurons, whereas optogenetic silencing of these neurons strongly reduced field θ (Bender et al., 2015; Boyce et al., 2016; Gangadharan et al., 2016). The entrainment was not modified by behavior, suggesting that although the synchronization of GABAergic MS neurons plays a vital role in the generation of θ, the rhythmically entrained circuits do not participate in behavior. Although the hippocampal field could follow the optogenetic activation of GABAergic MS neurons at θ frequencies, hippocampal neurons could also discharge at higher frequencies (Zutshi et al., 2018). The different behaviors of the field and neuronal activities suggest that GABAergic MS neurons trigger circuit mechanisms that accelerate the rhythmicity of hippocampal neurons. Indeed, MS GABAergic neurons regulate circuit activity via rhythmic disinhibition of pyramidal neurons (Freund, 1992; Smythe et al., 1992; Tóth et al., 1997; Buzsaki, 2002; Unal et al., 2012, 2018). These studies provide strong support to the notion that septal GABAergic projections regulate the hippocampal field potential oscillations via θ hippocampal interneurons.
The Glutamatergic Medial Septum-Diagonal Band of Broca Input
Although excitatory MS-DbB inputs had previously been shown to participate in the production of θ (Núñez et al., 1987, 1990b; Leung and Shen, 2004; Colom et al., 2005; Huh et al., 2010; Khakpai et al., 2016), recent experimental evidence has provided strong support for the contribution of glutamatergic MS-DbB neurons to the genesis of hippocampal θ. Accordingly, optogenetic activation of specific glutamatergic excitatory MS-DbB neurons both in vitro and in behaving animals showed that those neurons provide a critical contribution to the genesis of hippocampal θ, mainly through local modulation of septal neurons (Robinson et al., 2016). Huh et al. (2010), using both in vitro slice and septo-hippocampal preparations, reported that activation of identified glutamatergic MS-DbB neurons led to fast AMPA-mediated synaptic responses in hippocampal pyramidal neurons. In addition, activation of MS-DbB neurons with NMDA microinjections induced rhythmic bursting at θ frequencies both in hippocampal and MS-DbB neurons (see ''Intrinsic Pyramidal Neuron Properties and Theta'' and ''Intrahippocampal Circuits and Theta'' sections below).
In vivo Studies
Although the activity of the septo-hippocampal pathway and intrahippocampal circuits provides an important contribution to θ genesis, data have accumulated suggesting that the intrinsic properties of hippocampal neurons also participate. Indeed, slow spikes were recorded in CA3-CA1 neurons of anesthetized rats together with the usual Na+ action potential bursts during field θ, each slow spike carrying an overriding high-frequency burst of fast Na+-mediated action potentials (Núñez et al., 1987, 1990a; Figures 2B,C). Depolarizing pulses triggered rhythmic slow spikes at rates that increased with depolarization. In the absence of field θ, strong dendritic depolarization by current injection induced large-amplitude oscillations in the θ frequency range and resulted in a voltage-dependent phase precession of the action potentials, as occurs in behaving animals.
In similar in vivo conditions, inhibition of Na + conductance by QX-314-loading prevented fast Na + -mediated action potentials, but slow QX-314-resistant putative Ca 2+ spikes remained (Nuñez and Buño, 1992; Figure 2D). In terms of their possible participation in the generation of θ, the slow QX-314-resistant events display the correct frequency and duration and can oscillate intrinsically. Accordingly, rhythmic bursts at θ frequency, similar to those triggered with intracellular recorded slow spikes, have been recorded in vivo in hippocampal neurons during periods of θ rhythm induced by sensory stimulation in anesthetized rats (Núñez et al., 1987) or when head-restrained mice, running on a spherical treadmill, entered in the recorded cell's place field (Harvey et al., 2009). Accordingly, slow spikes provide pyramidal neurons with intrinsic conductance, which can boost θ oscillations generated by network properties.
In vitro Studies
Recordings of CA1 pyramidal neurons in rat hippocampal slices revealed that plateau potentials and rhythmic θ-like intrinsic Vm oscillations were evoked by depolarizing current pulses in an important proportion of the neurons (García-Muñoz et al., 1993). Rhythmic high threshold slow TTX-resistant spikes were triggered and entrained by imposed sinusoidal transmembrane currents at θ frequencies. The above findings suggest that NMDA-and Ca 2+ -mediated spikes provide an important depolarizing drive, that boosts the otherwise small depolarization supplied by EPSPs, to generate the high-frequency rhythmic bursts of fast Na + -mediated action potentials that typifies pyramidal neuron activity during field θ.
In addition to the slow NMDA-and Ca 2+ -mediated spikes, voltage-gated Na + -mediated conductance underlie subthreshold rhythmic membrane oscillation in entorhinal cortex neurons, which could play an important part in the genesis of the θ rhythm (Alonso and Llinas, 1989;Dickson et al., 2000). These subthreshold Na + -dependent oscillations can also be recorded from CA1 hippocampal pyramidal neurons, suggesting a more direct contribution to hippocampal θ oscillations (García-Muñoz et al., 1993).
INTRAHIPPOCAMPAL CIRCUITS AND THETA
Analysis In vivo
Due to the small amplitude of unitary EPSPs, which on average do not reach the firing threshold of hippocampal pyramidal neurons (Sayer et al., 1989; Fernández de Sevilla et al., 2002), action potential firing in CA1 pyramidal neurons is scarce during desynchronized hippocampal field activity (Schwartzkroin, 1975; Buzsaki and Eidelberg, 1983; Núñez et al., 1987; Csicsvari et al., 1999). However, during θ, there is a strong increase in the synchronized rhythmic activity of MS-DbB neurons that results in temporal and spatial
summation of excitatory postsynaptic potentials (EPSPs) that exceed the firing threshold of hippocampal pyramidal neurons. The excitatory input of MS-DbB neurons onto hippocampal pyramidal neurons is paced by the rhythmic GABAergic inhibitory postsynaptic potentials (IPSPs) from both inhibitory MS-DbB and intrahippocampal interneurons. Interestingly, different interneuron types innervate distinct domains of pyramidal neurons and exhibit specific firing patterns during θ, contributing differentially to hippocampal θ and ripple oscillations (Klausberger et al., 2003). The diversity of hippocampal interneurons could coordinate the activity of pyramidal cells in different behavioral states. Accordingly, the rhythmic interactions between EPSPs and IPSPs generate the intracellular θ and action potential bursting that typify pyramidal neurons during θ (Andersen and Eccles, 1962; Núñez et al., 1987, 1990a; MacVicar and Tse, 1989; Fujita, 1991; Cobb et al., 1995; Csicsvari et al., 1999).
Infusion of NMDA in the entorhinal cortex (Leung and Shen, 2004;Gu et al., 2017) and microiontophoresis of NMDA close to the apical dendrites of hippocampal pyramidal neurons in anesthetized rats induced the generation of hippocampal θ, demonstrating the involvement of glutamatergic NMDA receptors in intrahippocampal circuit activity (Puma et al., 1996;Bland et al., 2007). In addition, the hippocampal θ rhythm was markedly reduced by infusion of the specific NMDAR blocker AP5 into the lateral ventricles of behaving rats (Leung and Desborough, 1988). Therefore, the glutamatergic MS-DbB neurons can contribute to field θ acting through NMDARs in hippocampal neurons.
The Theta Rhythm In vitro
Hippocampal oscillations at frequencies within and beyond the θ range can be induced in vitro by changes in the ionic environment and by activation of ionotropic and metabotropic receptors. Superfusion of ACh muscarinic agonists in vitro can induce the intracellular θ and action potential bursting that typify CA1 pyramidal neuron activity during θ oscillations in the natural condition (Konopacki et al., 1987; Bland et al., 1988; Fernández de Sevilla et al., 2006). The muscarinic rhythm is paced by local inhibitory interneurons, which are connected through dendritic electrical synapses and tend to fire synchronously (Traub et al., 2004; Konopacki et al., 2014; Posuszny, 2014; Schoenfeld et al., 2014). Electrical coupling between pyramidal neuron axons has also been reported to contribute (Traub et al., 2004). Superfusion of NMDA in hippocampal slices induces θ-like oscillations, suggesting an important contribution of circuit excitatory synaptic interactions through NMDARs in the genesis of θ (Kazmierska and Konopacki, 2013).
Tetanic stimulation of Schaffer collaterals (SCs) and microiontophoresis of glutamate at CA1 pyramidal neuron apical dendrites evoked rhythmic Vm oscillations and action potential bursts at θ frequencies in vitro (Bonansco et al., 2002). Oscillations were clear cut in pyramidal neurons placed close to the midline of the dorsal CA1, but not in lateral neurons that fired single-action potentials. Medial neurons exhibited a higher NMDAR density at the apical dendritic shafts than lateral neurons and a larger NMDA current component under voltage-clamp, suggesting that these differences underlie the dissimilar responses of both neuron groups.
NMDA microiontophoresis at the apical dendrites of CA1 pyramidal neurons induced θ-like Vm oscillations and rhythmic action potential bursts in vitro (Bonansco and Buño, 2003). However, in the absence of NMDAR activation, imposed membrane depolarization and microiontophoresis of AMPA depolarized the cells but never induced rhythmic oscillations and bursts. Rhythmic Vm oscillations and bursts induced by NMDA remained under blockade of GABA-mediated inhibition with picrotoxin and of AMPA receptors with CNQX. In contrast, oscillations and bursts were prevented by inhibition of NMDARs with AP5 and in Mg2+-free solutions. Importantly, NMDAR-mediated Vm oscillations persisted, but action potentials were prevented, under blockade of Na+-mediated action potentials with tetrodotoxin (Figure 2A).
Taken together, the above results suggest that Vm oscillations in hippocampal CA1 pyramidal neurons induced by microiontophoresis of NMDA, glutamate, and by tetanic stimulation of SCs do not depend on circuital interactions. The results suggest that NMDA-induced oscillations relied on the negative slope conductance of the NMDA channel caused by the voltage-dependent Mg 2+ block that underlies NMDA spikes (Schiller and Schiller, 2001;Antic et al., 2010) and on high-threshold Ca 2+ spikes mediated by activation of L-type voltage-dependent Ca 2+ channels (VDCC; Bonansco and Buño, 2003). The large Ca 2+ -mediated depolarization triggers the high-frequency action potential burst that backpropagates into the apical dendrites of pyramidal neurons inducing a supralinear Ca 2+ influx into spines that can induce long-term synaptic plasticity (Lisman, 2017;Sakmann, 2017;Fernández de Sevilla et al., 2020).
PHASE RESET AND ENTRAINMENT OF THETA
To be considered an oscillator, a neuron or network must be intrinsically rhythmic, and the rhythm must be acceptably regular. Neural oscillators display two distinctive behaviors when perturbed by brief inputs, namely, phase reset and entrainment (e.g., Winfree, 1977;Barrio and Buño, 1990;McClellan and Jang, 1993). Phase reset and entrainment result from the continuously varying excitability of the neural oscillator during the oscillation cycle. In spiking and bursting neural oscillators, the phase is the normalized time since the last action potential. In field recordings of neural oscillations, the phase is the normalized time between successive peaks or successive troughs of the oscillation.
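The normalized-phase definitions above translate directly into code. The short sketch below is a generic illustration of how a spike's phase is computed relative to successive cycle markers (spikes or field peaks); it is not a reanalysis of any dataset, and the example times are invented.

```cpp
#include <cstdio>
#include <vector>

// Phase of an event relative to an oscillation, using the definitions in
// the text: the normalized time since the last action potential (unit
// oscillators) or between successive peaks (field recordings). Returns a
// value in [0, 1); multiply by 2*pi or 360 for radians or degrees.
double phaseOf(double eventTime, const std::vector<double>& cycleMarks) {
    for (size_t i = 0; i + 1 < cycleMarks.size(); ++i)
        if (eventTime >= cycleMarks[i] && eventTime < cycleMarks[i + 1])
            return (eventTime - cycleMarks[i]) /
                   (cycleMarks[i + 1] - cycleMarks[i]);
    return -1.0;   // event falls outside the marked cycles
}

int main() {
    // Peak times (s) of an 8 Hz field oscillation, i.e., 125 ms cycles.
    std::vector<double> peaks = {0.000, 0.125, 0.250, 0.375};
    double spike = 0.160;                       // a spike 35 ms after a peak
    std::printf("phase = %.2f of the cycle\n", phaseOf(spike, peaks)); // 0.28
}
```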
Phase Reset of Theta
Phase reset is evoked by stimulating an input (i.e., a perturbation) and computing the phase shift, or reset, of the perturbed rhythm. The perturbation phase-locks the oscillation and results in periodic averages and cross-correlations of the rhythmic field and action potential activity (Figure 1C). Phase reset of θ and of rhythmic hippocampal units can be induced by electrical stimulation of hippocampal afferents both in anesthetized and in behaving rats, and during rapid eye movement (REM) sleep (García-Sánchez et al., 1978; Gaztelu and Buño, 1982; Lerma and García-Austt, 1985; Núñez et al., 1990b; Vinogradova, 1995; Givens, 1996; McCartney et al., 2004; Jackson et al., 2008). Importantly, reset was consistently evoked by stimulation of structures with rich contacts with the MS-DbB but was abolished by destruction of the MS and fornix, suggesting that it could be induced by reset of MS-DbB neurons triggered by input from connected structures (Brazhnik et al., 1985). An alternative possibility is that direct inhibition of MS neurons by synchronized hippocampal output originally elicited by septal inputs may be the reset mechanism. Indeed, the genesis and tuning of the θ rhythm is a complex process in which feedback control of the septal pacemaker by hippocampal rhythmic neurons is an important component (Müller and Remy, 2018).
Stimulation of areas connected to the MS-DbB could elicit reset at frequencies higher than that of the spontaneous θ (García-Sánchez et al., 1978; Gaztelu and Buño, 1982; Núñez et al., 1990b), suggesting independent θ generator systems, which can produce different rhythms (Kramis et al., 1975). Electrical stimulation of structures projecting to the MS-DbB tended to induce phase reset of the field θ and of the rhythmic bursting neuronal activity. Remarkably, in close agreement with the expected behavior of coupled oscillators, afferent stimulation also reset the MS-DbB rhythmic neuronal activity, resulting in phase reset of both septal and hippocampal oscillations (Gaztelu and Buño, 1982; Barrenechea et al., 1995; Pedemonte et al., 1998; Figure 1C). Accordingly, phase reset of θ has been found during conditioning, operant behavior, and spatial navigation (see the ''Memory and Theta'' section below).
Entrainment of Theta
Entrainment also results from the phase sensitivity of neural oscillators and follows the rules of phase reset typified by phase locking (Winfree, 1977; Barrio and Buño, 1990; McClellan and Jang, 1993; Lakatos et al., 2019). Periodic electrical stimulation of the MS-DbB and lateral septum between 4 and 12 Hz evoked 1:1 entrainment and reset of field θ and phase locking of action potential bursts in hippocampal interneurons in behaving rats (Mamad et al., 2015; Figures 1D,E). Stimulation beyond the θ range evoked bursts at frequencies higher or lower than the stimulation frequency, suggesting that the septo-hippocampal circuitry is tuned to oscillate in the θ range (Brazhnik et al., 1985; García-Muñoz et al., 1993).
Interestingly, in agreement with the important influence of inhibition in θ genesis, reset and entrainment can be induced by activation of individual GABAergic interneurons in hippocampal slices, a mechanism that can synchronize the firing of pyramidal cells (Cobb et al., 1995). Phase reset and entrainment link external stimuli with neuronal rhythmic events (Lakatos et al., 2019). This link has been analyzed in a simple neural pacemaker, where it enables the detection of specific input characteristics that depend on the properties and frequency of the oscillator (Buño et al., 1984).

FIGURE 3 | (A) Modified from Buño and Velluti (1977); (B) modified from Joshi and Somogyi (2020).
HIPPOCAMPAL THETA RHYTHM AND BEHAVIOR
Among the many functions that have been attributed to hippocampal θ, we will center on its relationships with locomotion, memory, and spatial navigation (Cherubini and Miles, 2015). A growing number of studies suggest that θ may represent a timing mechanism whereby hippocampal pyramidal neurons fire with a higher probability at the phase of the θ cycle when excitation by MS-DbB glutamatergic neurons, added to the sustained excitation supplied by cholinergic inputs, is maximal and periodic perisomatic inhibition is minimal (Núñez et al., 1990b; Freund and Buzsaki, 1996; Csicsvari et al., 1999; Klausberger and Somogyi, 2008; Müller and Remy, 2018; see the ''Analysis In vivo'' section above). This timing system may represent a general mechanism that organizes movement sequences, memory encoding, and planned trajectories for spatial navigation into temporal series, fragmented by θ oscillations on a moment-by-moment basis.
Locomotion and Theta
The correlation between locomotion and the amplitude and frequency of the hippocampal θ has suggested that movement sequences could be regulated by the θ oscillations (Whishaw and Vanderwolf, 1973; Buño and Velluti, 1977; Vanderwolf et al., 1977; Fuhrmann et al., 2015; Lu et al., 2020). Running speed in behaving animals shows a close correlation with the amplitude and frequency of the θ oscillations (Rivas et al., 1996; Ahmed and Mehta, 2012; Bender et al., 2015; Lu et al., 2020). Likewise, movements tend to occur at a specified phase of the θ oscillation (Buño and Velluti, 1977; Semba and Komisaruk, 1978; Joshi and Somogyi, 2020).
In freely behaving rats bar-pressing for electrical self-stimulation of the lateral hypothalamus, averages of hippocampal field activity triggered by the onset of bar pressings revealed periodic waves with frequencies within the θ band (5-8 Hz; Figures 3A,B; Buño and Velluti, 1977). The periodic waves in the pre-pressing epoch are only observed if bar pressings tend to occur during a particular phase of the θ wave, implying that θ is phase-locked before pressing onsets. Accordingly, the periodic averages suggest that bar pressing onsets occurred at a defined phase of the ongoing field θ waves. There were also phase-locked θ waves following pressings, superimposed on a potential evoked by the hypothalamic electrical self-stimulation (Figure 3A). Introduction of a delay (0.9 s) between bar pressings and the electrical self-stimulation delayed the evoked potential, and averages showed phase-locked θ oscillations before and after bar pressings (Figure 3B1).
Electrolytic lesions of the septum or superior fornix abolished θ and significantly increased the frequency of bar pressing. Accordingly, the septo-hippocampal mechanisms, which generate θ, are not necessary to maintain self-stimulation (Ward, 1960). However, the increased lever pressing rates following lesions suggest a possible participation in motor timing mechanisms through an inhibitory influence of the septum (Gage et al., 1978).
The above results taken together suggest that phase-locked θ may be a corollary of motor mechanisms and perhaps of the timing of motor sequences controlled by θ cycles. Phase-locked θ oscillations could contain predictive information about the planning of motor activity (i.e., the future) and about its execution on a moment-by-moment basis that ends when the goal is reached (Whishaw and Vanderwolf, 1973; Wyble et al., 2000; Fuhrmann et al., 2015; Wikenheiser and Redish, 2015). Interestingly, step-cycles during walking in mice show a temporal correlation with θ oscillations and with the firing of MS neurons, suggesting that rhythmically firing MS cells could coordinate θ and stepping-related locomotor activity (Joshi and Somogyi, 2020; Figures 3C,D).
These findings fit in with the argument that phase locking of θ oscillations could regulate the planning and execution of motor activity on a moment-by-moment basis, where information is fragmented and organized into temporal sequences by θ oscillations. Hippocampal θ phase locking may thus represent a general neurophysiological mechanism supporting memory formation. It has been proposed that hippocampal neurons operate with future, present, and past events in the θ cycle sequences that hold spatio-temporal representations (Cei et al., 2014). The phase relationship of bar pressings with θ and of the subsequent hypothalamic stimulation may result in synaptic plasticity that is favored during a specific phase of the θ cycle (Buño and Velluti, 1977;Huerta and Lisman, 1995;McCartney et al., 2004).
Memory and Theta
Although it has been firmly established that hippocampal θ carries spatial information (see the ''Spatial Navigation and Theta'' section below), a growing number of studies indicate that non-spatial information can also be conveyed by the hippocampal θ rhythm, suggesting that θ operates as a general mechanism for encoding continuous, task-relevant information (Radulovacki and Adey, 1965; Adey, 1967; Wood et al., 1999; Moita et al., 2003; MacDonald et al., 2011; Aronov et al., 2017). In addition, several studies have provided conclusive data suggesting that hippocampal lesions interfere with learning and memory (Barbizet, 1963; Adey, 1967; Zola and Squire, 2001; Burgess et al., 2002).
In an interesting study using a T-maze discriminative response paradigm, Radulovacki and Adey (1965) showed that during discrimination, averages triggered by a brief tone exhibited periodic θ waves (phase reset), whereas the amplitude and regularity of the waves were reduced or altogether absent when the animal was orienting. The results of Radulovacki and Adey (1965) suggest that the θ phase reset may relate to processes that underlie acquisition and storage of behaviorally relevant information; the authors speculate that ''they might underlie the most fascinating continuum in consciousness leading from the immediate past through the present to the immediate future.'' During early training in a classical conditioning paradigm that enabled separation of conditioned responses and purpose-directed responses, the responses evoked by the conditioned stimulus were of low amplitude and usually followed by phase reset of the hippocampal θ (Buzsaki et al., 1979). However, with more training, θ reset decreased and short-latency high-voltage evoked potentials were evoked by the conditioned stimulus. In this condition, orienting activity decreased to the preconditioning level, suggesting changes in oscillation characteristics related to orienting and attentional factors rather than to movements. Although the results suggest that non-spatial information is encoded by a temporal code organized by θ oscillations, how non-spatial information is integrated into memory at the θ timescale is a critical issue that remains to be determined. The experimental analysis of the temporal organization of memory and θ has been limited by experiments that lack the temporal resolution to segregate encoding and retrieval. In human subjects asked to recall previously learned word-object associations, the neural signatures of memory retrieval fluctuate and are time-locked to the phase of an ongoing theta oscillation (Kerren et al., 2018). It has recently been reported that in human patients performing a spatial memory task, phase locking at the peak of θ preceded eye fixations to retrieved locations, whereas phase locking at the trough of θ followed fixations to novel object-locations, indicating that the hippocampus coordinates memory-guided eye movements (Kragel et al., 2020). These human results strongly suggest that memory encoding and retrieval are gated by θ-linked neuronal activity.
Spatial Navigation and Theta
The original detailed analysis of the relationship between spatial navigation and hippocampal θ was provided by O'Keefe and Recce (1993), who discovered that hippocampal neurons began firing at a particular phase of the θ cycle as the rat entered the field and fired at progressively earlier phases of the θ cycle as the rat passed through the neuron's place field (Figure 4). This phenomenon was called phase precession because the firing phase of the ''place cell'' was highly correlated with the rat's spatial location, whereas temporal aspects of behavior were not. Several studies have shown that place cells can represent a position ahead of the animal in the field, suggesting that phase precession can predict the sequence of upcoming positions (Lisman and Redish, 2009; Buzsaki and Moser, 2013). Therefore, activity in the hippocampus can be encoded as the sequence of action potentials within each θ cycle to signal information about future, present, and past locations and events (Dusek and Eichenbaum, 1998; Lisman and Redish, 2009; Pfeiffer and Foster, 2013; Sugar and Moser, 2019). Accordingly, hippocampal neurons discharge as a function of the animal's location and direction of movement in a given environment, suggesting that place cells play a role in navigational planning (O'Keefe and Dostrovsky, 1971; O'Keefe and Recce, 1993; Wilson and McNaughton, 1993; Brown et al., 1998; Zhang et al., 1998; Jensen and Lisman, 2000).
The above-described results imply that knowledge of the speed and direction of movement and landmark information are required to continuously compute the position of the animal during spatial navigation. Information on speed of locomotion is critical to maintain the correct phase relationships between place cell activity and past, present, and future positions. When the animal navigates the place field of a neuron, the number of θ cycles decreases with running speed, but the number of action potentials per θ cycle increases, and the θ phase shifts proportionally, leaving the relationship between action potential phase and spatial position relatively invariant (Geisler et al., 2007; Pfeiffer and Foster, 2013). Place coding in the hippocampus requires sensory inputs providing environmental, self-motion, and place memory information, and effective spatial navigation involves remembering landmarks and goal locations to plan navigation paths. Therefore, to continuously compute past, present, and future positions, the hippocampus compares distances and durations through a speed-dependent modulation, and these computations are independent of the behavioral task.

FIGURE 4 | (A3,A4) Place neuron firings (red, upper) and field θ (lower). Note that the neuron displays phase precession and fires at a higher rate at a specific position in the place field. (B1) Place cell firing (green, upper) and corresponding field θ (lower) during slow locomotion (mean speed 31 cm/s). (B2) Same as (B1), but fast locomotion (mean speed 55 cm/s). The arrows indicate the time it takes the rat to cross the place field. Note that there is no change of field θ and that the neuron displays phase precession at both speeds but fires at a higher rate during the fast trial. (A) Modified from Buzsaki and Draguhn (2004); (B) modified from Geisler et al. (2007).
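The phase-position relationship described above is commonly quantified with a circular-linear fit, often credited to Kempter and colleagues, which selects the precession slope that maximizes the mean resultant length of the phase residuals. The sketch below is illustrative only and is not the analysis used in the studies cited.

```python
# Illustrative circular-linear fit for phase precession: positions are
# normalized (0..1) distances through the place field, phases in radians.
import numpy as np

def precession_fit(positions, phases,
                   slopes=np.linspace(-4 * np.pi, 4 * np.pi, 2001)):
    """Return the slope (rad per field traversal) and phase offset that
    maximize the mean resultant length of phase - slope * position."""
    R = [np.abs(np.mean(np.exp(1j * (phases - a * positions)))) for a in slopes]
    a_hat = slopes[int(np.argmax(R))]
    phi0 = np.angle(np.mean(np.exp(1j * (phases - a_hat * positions))))
    return a_hat, phi0   # negative slope: firing at progressively earlier phases
```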
It is noteworthy that John O'Keefe, Edvard Moser, and May-Britt Moser were awarded the Nobel Prize in 2014: O'Keefe for his work on ''place cells,'' their relationship with hippocampal θ and spatial navigation, and Edvard and May-Britt Moser for the identification of ''grid cells'' in the entorhinal cortex that are involved in positioning and pathfinding.
INTERACTION OF THETA WITH GAMMA RHYTHMS
In the hippocampus, θ and gamma (γ) oscillations are the most prominent rhythms recorded in the awake state or during REM sleep (Buzsaki, 2002; Cantero et al., 2003, 2004; Colgin, 2016). These oscillations can be observed in field potential recordings and are thought to transiently link distributed cell assemblies that are processing related information, a function that is important for network processes such as perception, attention, or memory (Singer, 1998; Buzsaki and Draguhn, 2004).

γ Frequency Bands in the Hippocampus

Whereas in neocortical networks γ oscillations occur across a broad frequency band ranging from 30 to 140 Hz, in the hippocampus two different frequency bands are usually observed. Slow γ ranges roughly from 25 to 60 Hz and fast γ from 60 to 100 Hz (Bosman et al., 2014). These different frequency bands may route different input streams of information to the hippocampus. Slow γ may facilitate transmission of inputs to CA1 from CA3 (Brun et al., 2002; Schomburg et al., 2014). Fast γ may promote inputs from the entorhinal cortex that transmit ongoing spatial information (Brun et al., 2002; Hafting et al., 2005). Accordingly, fast γ oscillations in CA1 were synchronized with fast γ in the medial entorhinal cortex, and slow γ oscillations in CA1 were coherent with slow γ in CA3 (Colgin et al., 2009). The firing properties of hippocampal neurons exhibited in each case may reflect different functions: memory retrieval during slow γ and memory encoding during fast γ oscillations (Figure 5).
θ-γ Coupling During Memory Task Performance
As indicated above, hippocampal θ activity is involved in forming new episodic memories, especially in the encoding of location and time or the context of events. During these active states, γ oscillations are prominent and also participate in memory formation (Buzsaki and Draguhn, 2004). γ oscillations are prominent in the entorhinal-hippocampal network during a variety of memory tasks in different species (Montgomery and Buzsáki, 2007; Sederberg et al., 2007; Jutras et al., 2009; López-Madrona et al., 2020). These authors suggest that successful memory performance requires coupling of γ rhythms to particular phases of the hippocampal θ (termed θ-γ coupling). Hippocampal θ-γ coupling was observed during spatial memory processing (Penttonen et al., 1998; Buzsaki and Moser, 2013) and has been shown to support the induction of LTP (Jensen and Lisman, 1996; Buzsaki, 2002; Vertes, 2005).

FIGURE 5 | Schematic diagram of neuronal circuits involved in theta-gamma (θ-γ) coupling. Blue traces indicate synaptic inputs from layers II to III of the entorhinal cortex (EC) and from the MS-DbB. The dentate gyrus (DG), CA1, CA3, and EC layer V neuronal connections are also shown. θ-γ coupling in the hippocampus results from the convergence of θ inputs from the MS-DbB and fast γ from the EC. In particular, θ-γ coupling results in CA1 from the convergence of θ inputs from the MS-DbB and slow γ from the CA3. The inset shows θ-γ interactions in the hippocampus as revealed by the CA1 field activity (upper) and band-pass filtered intracellular γ activity (lower). Note phase locking between both rhythms (θ-γ coupling) and the amplitude modulation of the intracellular γ; modified from Penttonen et al. (1998).
θ-γ coupling is modulated during exploration and memory-guided behaviors. In mice solving a mnemonic task, CA1 γ oscillations were more strongly phase-locked to θ than in control periods (Tort et al., 2008). Similarly, rats learning to associate contexts with the location of food reward show an increase in θ-γ coupling during the learning progression (Lisman and Jensen, 2013). In a word recognition paradigm in humans, θ-γ coupling was selectively enhanced when patients successfully remembered previously presented words (Mormann et al., 2005). In agreement with this, Axmacher and colleagues, using intracranial EEG recordings, showed θ-γ coupling in the hippocampus during retention in a working memory task, and the strength of this coupling predicted individual working memory performance (Axmacher et al., 2010).
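Coupling of the kind reported in these studies is commonly quantified with a phase-amplitude modulation index in the style of Tort et al. (2008): γ amplitude is binned by θ phase and the resulting distribution is scored by its divergence from uniformity. The following is a bare-bones sketch under that scheme, not the authors' code; the filter settings are illustrative.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def modulation_index(lfp, fs, theta=(4, 12), gamma=(30, 100), nbins=18):
    """Tort-style phase-amplitude coupling index, normalized to [0, 1]."""
    def bandpass(x, lo, hi):
        b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="bandpass")
        return filtfilt(b, a, x)
    phase = np.angle(hilbert(bandpass(lfp, *theta)))   # theta phase
    amp = np.abs(hilbert(bandpass(lfp, *gamma)))       # gamma amplitude envelope
    edges = np.linspace(-np.pi, np.pi, nbins + 1)
    bins = np.clip(np.digitize(phase, edges) - 1, 0, nbins - 1)
    p = np.array([amp[bins == k].mean() for k in range(nbins)])
    p = p / p.sum()                                    # amplitude distribution
    return np.sum(p * np.log(p * nbins)) / np.log(nbins)  # KL from uniform
```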
In rodents, 20-40 Hz oscillations in CA1 became more tightly locked to the θ phase as animals learned odor-place associations, suggesting that θ-γ coupling may play an important role in cued memory retrieval (Igarashi et al., 2014). In another study, γ power in CA1 increased in a delayed spatial alternation task when animals needed to remember which side to choose (Takahashi et al., 2014). θ-γ coupling was mainly observed during episodes of both active wake and REM sleep, with the highest level of coupling observed during REM sleep (Bandarabadi et al., 2019). Taken together, these results support the hypothesis that θ-γ coupling facilitates transfer of spatial information from the entorhinal cortex to CA1 (Brun et al., 2002; Hafting et al., 2005; Colgin et al., 2009).
It has been reported that inputs from the entorhinal cortex and CA3 arrive in CA1 at specific phases of the θ cycle (Hasselmo et al., 2002; Colgin et al., 2009). LTP in CA1 is most easily induced at a particular θ phase, which corresponds to the phase when entorhinal cortex input is maximal (Huerta and Lisman, 1995). Activating γ-modulated cell assemblies at a particular θ phase may allow the network to produce a more powerful output by ensuring that distributed cells fire closely in time (Bragin et al., 1997; Tort et al., 2008). θ-γ coupling may allow the hippocampal-entorhinal network to temporally organize sequences of events within each θ cycle, raising the possibility that θ-γ interactions are a critical component of mnemonic operations (Bragin et al., 1997). θ-γ coupling leads to a precise temporal coordination of the spikes of multiple neurons and is therefore likely to contribute to circuital functions such as phase coding or spike-timing-dependent plasticity (Markram et al., 1997; Abbott and Nelson, 2000).
Theta-Gamma Coupling in Degenerative Pathologies
Taken together, the above results strongly suggest that θ-γ coupling is vital in learning and memory processes, and consequently, its loss may be at the root of pathologies. Working memory deficits are common among individuals with Alzheimer's dementia (AD) or mild cognitive impairment (MCI). The origin of these deficits has long been thought to lie in hippocampal and entorhinal cortical dysfunctions, although atrophy of the MS-DbB also occurs in AD (Cantero et al., 2020). AD and MCI patients demonstrate the lowest level of θ-γ coupling in a verbal working memory task in comparison with healthy participants (Goodman et al., 2018). Similar findings have been observed in animal models of AD, which show a decreased θ-γ coupling (Zhang et al., 2016; Bazzigaluppi et al., 2018) that arises before amyloid beta accumulation (Goutagny and Krantic, 2013; Iaccarino et al., 2016). An impairment of θ-γ coupling that increases in parallel with the progression of MCI has recently been reported in human patients (Musaeus et al., 2020). These findings suggest that θ-γ coupling is critical for proper cognitive functioning and may therefore serve as a progression marker in degenerative diseases.
Place cells in the hippocampus fire action potentials in specific spatial locations (O'Keefe and Dostrovsky, 1971), whereas grid cells in the medial entorhinal cortex fire in a highly organized spatial pattern across an environment (Fyhn et al., 2004; Hafting et al., 2005). Patients with AD and other dementia-spectrum disorders exhibit profound disruption in spatial navigation and memory, even at very early stages of the disease (Hort et al., 2007; Allison et al., 2016). At a pathological level, misfolded tau deposition typically occurs first in the entorhinal cortex and hippocampus (Rubio et al., 2008; Cantero et al., 2011; Llorens-Martin et al., 2014) and reduces grid cell activity (Pooler et al., 2014; Ridler et al., 2020). These data correlate with human imaging studies, which suggest deficits in grid-cell-like activity in the entorhinal cortices of people at genetic risk of developing AD (Kunz et al., 2015).
CONCLUDING REMARKS
In the brain, information is represented by the activity of ensembles of neurons rather than by single cells, coordinating their activity to support complex cognitive processes and creating functional neural networks (Singer, 1998). An efficient way to assure transient synchronization between neuronal ensembles is the entrainment of neuronal groups into oscillatory activities. Oscillations may facilitate neuronal synchronization and synaptic plasticity, playing a key role in long-range communication between brain regions. In this review, we underscore that the θ rhythm results from complex interactions between circuital and intrinsic properties of MS-DbB and hippocampal neurons and that it plays a crucial role in cognitive processes such as learning and memory, and in the control of complex behaviors.
Strong decays of the higher isovector scalar mesons
Under the assignment of $a_0(1450)$ as the ground isovector scalar meson, the strong decays of $a_0(1950)$ and $a_0(2020)$ are evaluated in the $^3P_0$ model. Our calculations suggest that $a_0(1950)$ and $a_0(2020)$ can be regarded as the same resonance referring to $a_0(3^3P_0)$. The masses and strong decays of $a_0(2^3P_0)$ and $a_0(4^3P_0)$ are also predicted, which can be useful in the search for radially excited scalar mesons in the future.
I. INTRODUCTION
In the framework of quantum chromodynamics (QCD), apart from the ordinary $q\bar{q}$ states, other exotic states such as glueballs, hybrids, and tetraquarks are permitted to exist in the meson spectrum. To identify these exotic states, one needs to distinguish them from the background of ordinary $q\bar{q}$ states, which requires a good understanding of conventional $q\bar{q}$ meson spectroscopy, both theoretically and experimentally.
At present, above the $a_0(980)$ mass, three higher isovector scalar states, $a_0(1450)$, $a_0(2020)$, and $a_0(1950)$, have been reported experimentally. $a_0(1450)$ was observed in $\bar{p}p$ annihilation experiments [8,9], $D^\pm \to K^+K^-\pi^\pm$ [10], and $D^0 \to K^0_S K^\pm\pi^\mp$ [11]. $a_0(2020)$, with two alternative solutions of similar masses and widths, was found by the Crystal Barrel Collaboration in the partial wave analysis of the data on $\bar{p}p \to \pi^0\eta$ and $\pi^0\eta'$ [12], and $a_0(1950)$ was observed by the BABAR Collaboration in the processes $\gamma\gamma \to K^0_S K^\pm\pi^\mp$ and $\gamma\gamma \to K^+K^-\pi^0$ [13]. The masses and widths of the three isovector scalar states are listed in Table I. The lattice QCD calculations support that the lowest isovector scalar $q\bar{q}$ state corresponds to $a_0(1450)$ rather than $a_0(980)$ [14-16]. It is widely accepted that $a_0(1450)$ is the isovector member of the $1^3P_0$ $q\bar{q}$ nonet [1]. The natures of $a_0(2020)$ and $a_0(1950)$ are unclear. To be able to understand the nature of a newly observed state, it is natural and necessary to exhaust the possible $q\bar{q}$ descriptions before resorting to more exotic assignments. Therefore, with the assignment of $a_0(1450)$ as the ground $q\bar{q}$ state, one naturally asks whether the higher isovector scalar states, $a_0(2020)$ and $a_0(1950)$, can be identified as the radial excitations of $a_0(1450)$. Theoretical efforts on the quark model assignments for $a_0(2020)$ and $a_0(1950)$ have been carried out. It is suggested that $a_0(1950)/a_0(2020)$ can be assigned as the $2^3P_0$ state based on the extended linear sigma model in Ref. [17], where $a_0(2020)$ is considered earlier evidence for $a_0(1950)$. In addition, $a_0(1950)/a_0(2020)$ is assigned as the $3^3P_0$ state based on the relativistic quark model in Ref. [18], where the predicted $a_0(3^3P_0)$ mass is about 1993 MeV, in agreement with both the $a_0(1950)$ and $a_0(2020)$ masses within errors. Obviously, further studies on the quark model assignments for $a_0(1950)$ and $a_0(2020)$ in other approaches are needed. Also, from Table I, one can see that the resonance parameters of $a_0(2020)$ are close to those of $a_0(1950)$. The observed mass difference between $a_0(1950)$ and $a_0(2020)$ is less than 100 MeV; in such a small mass interval, it would be very difficult to accommodate two radial excitations of $a_0(1450)$ in practically all the quark models. We therefore conclude that if both $a_0(2020)$ and $a_0(1950)$ can be explained as $q\bar{q}$ states, they should correspond to the same resonance. In this work, we shall discuss the possible quark model assignments of $a_0(2020)$ and $a_0(1950)$ by investigating their strong decays in the $^3P_0$ model and check whether $a_0(2020)$ and $a_0(1950)$ can be identified as the same scalar meson.
The organization of this paper is as follows. In Sec. II, we give a brief review of the $^3P_0$ model. In Sec. III, the calculations and discussion are presented, and the summary and conclusion are given in Sec. IV.
II. THE $^3P_0$ MODEL

Following the conventions in Ref. [31], the transition operator $T$ of the decay $A \to BC$ in the $^3P_0$ model is given by

$$T = -3\gamma \sum_m \langle 1m; 1\,{-m} | 00 \rangle \int d^3\mathbf{p}_3\, d^3\mathbf{p}_4\, \delta^3(\mathbf{p}_3 + \mathbf{p}_4)\, \mathcal{Y}_1^m\!\left(\frac{\mathbf{p}_3 - \mathbf{p}_4}{2}\right) \chi^{34}_{1,-m}\, \phi^{34}_0\, \omega^{34}_0\, b^\dagger_3(\mathbf{p}_3)\, d^\dagger_4(\mathbf{p}_4),$$

where $\gamma$ is a dimensionless parameter denoting the probability of creating the quark-antiquark pair $q_3\bar{q}_4$ with quantum numbers $J^{PC} = 0^{++}$. $\mathbf{p}_3$ and $\mathbf{p}_4$ are the momenta of the created quark $q_3$ and antiquark $\bar{q}_4$, respectively. $\chi^{34}_{1,-m}$, $\phi^{34}_0$, and $\omega^{34}_0$ are the spin, flavor, and color wave functions of $q_3\bar{q}_4$, respectively. The solid harmonic polynomial $\mathcal{Y}_1^m(\mathbf{p}) \equiv |\mathbf{p}|\, Y_1^m(\theta_p, \phi_p)$ reflects the momentum-space distribution of the created pair. The partial wave amplitude $\mathcal{M}^{LS}(\mathbf{P})$ of the decay $A \to BC$ can be given by [41]

$$\mathcal{M}^{LS}(\mathbf{P}) = \frac{\sqrt{2L+1}}{2J_A+1} \sum_{M_{J_B}, M_{J_C}} \langle L0; S M_{J_A} | J_A M_{J_A} \rangle \langle J_B M_{J_B}; J_C M_{J_C} | S M_{J_A} \rangle\, \mathcal{M}^{M_{J_A} M_{J_B} M_{J_C}}(\mathbf{P}),$$

where $\mathcal{M}^{M_{J_A} M_{J_B} M_{J_C}}(\mathbf{P})$ is the helicity amplitude, defined as

$$\langle BC | T | A \rangle = \delta^3(\mathbf{P}_B + \mathbf{P}_C - \mathbf{P}_A)\, \mathcal{M}^{M_{J_A} M_{J_B} M_{J_C}}(\mathbf{P}).$$

$|A\rangle$, $|B\rangle$, and $|C\rangle$ denote the mock meson states defined in Ref. [42]. Due to different choices of the pair-production vertex, phase space convention, and employed meson space wave function, various $^3P_0$ models exist in the literature. In this work, we employ the simplest vertex as introduced originally by Micu, who assumes a spatially constant pair-production strength $\gamma$ [19], relativistic phase space, and simple harmonic oscillator (SHO) wave functions. With the relativistic phase space, the decay width $\Gamma(A \to BC)$ can be expressed in terms of the partial wave amplitude,

$$\Gamma(A \to BC) = \frac{\pi}{4} \frac{|\mathbf{P}|}{M_A^2} \sum_{LS} |\mathcal{M}^{LS}(\mathbf{P})|^2,$$

where $|\mathbf{P}| = \sqrt{[M_A^2 - (M_B + M_C)^2][M_A^2 - (M_B - M_C)^2]}/(2M_A)$, and $M_A$, $M_B$, and $M_C$ are the masses of the mesons $A$, $B$, and $C$, respectively. The explicit expressions for $\mathcal{M}^{LS}(\mathbf{P})$ can be found in Refs. [31-33].
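Numerically, once the partial wave amplitudes are in hand, the width reduces to the phase-space expressions above. A minimal sketch (ours, not from the references; the amplitudes are taken as given):

```python
import numpy as np

def decay_momentum(MA, MB, MC):
    """|P|: momentum of B and C in the rest frame of A (same units as masses)."""
    return np.sqrt((MA**2 - (MB + MC)**2) * (MA**2 - (MB - MC)**2)) / (2 * MA)

def width(MA, MB, MC, M_LS):
    """Gamma = (pi/4) |P| / MA^2 * sum_LS |M^LS|^2 with relativistic phase
    space; M_LS is an iterable of partial wave amplitudes computed elsewhere
    from the 3P0 matrix elements."""
    P = decay_momentum(MA, MB, MC)
    return (np.pi / 4) * P / MA**2 * sum(abs(m)**2 for m in M_LS)
```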
Under the SHO approximation, the meson space wave function in momentum space is

$$\Psi_{nLM_L}(\mathbf{p}) = R_{nL}(p)\, Y_{LM_L}(\Omega_p),$$

where the radial wave function is given by

$$R_{nL}(p) = \frac{(-1)^n (-i)^L}{\beta^{3/2}} \sqrt{\frac{2\, n!}{\Gamma(n + L + 3/2)}} \left(\frac{p}{\beta}\right)^L \exp\!\left(-\frac{p^2}{2\beta^2}\right) L_n^{L+1/2}\!\left(\frac{p^2}{\beta^2}\right).$$

Here $\beta$ is the SHO wave function scale parameter, and $L_n^{L+1/2}$ is an associated Laguerre polynomial.
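For concreteness, the radial part above can be evaluated directly; the sketch below uses scipy's generalized Laguerre polynomials and drops the conventional phase factor (-1)^n(-i)^L, which cancels in |M|^2.

```python
import numpy as np
from math import factorial
from scipy.special import genlaguerre, gamma as Gamma

def sho_radial(n, L, p, beta):
    """SHO radial wave function R_nL(p) in momentum space (phase omitted);
    beta is the oscillator scale parameter, p in the same units as beta."""
    x = p / beta
    norm = np.sqrt(2 * factorial(n) / Gamma(n + L + 1.5)) * beta**-1.5
    return norm * x**L * np.exp(-x**2 / 2) * genlaguerre(n, L + 0.5)(x**2)
```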
III. CALCULATIONS AND DISCUSSION

The decay widths of $a_0(1450)$ as the $1^3P_0$ state are listed in Table II. The dominant decay modes of the $1^3P_0$ isovector state are $\pi\eta$, $\pi\eta'$, and $K\bar{K}$, consistent with observations of $a_0(1450)$ [8,9,50].
The decay widths of $a_0(1950)$ as the $2^3P_0$ and $3^3P_0$ states are shown in Table III. If $a_0(1950)$ is the $2^3P_0$ state, its total width is expected to be about 771 MeV, much larger than the observed $a_0(1950)$ width of $271 \pm 22 \pm 29$ MeV [13]. The possibility of $a_0(1950)$ being the $2^3P_0$ state can thus be ruled out. If $a_0(1950)$ is the $3^3P_0$ state, its total width is about 207 MeV, reasonably close to the measurement within errors. The dependence of the total width of $a_0(3^3P_0)$ on the initial state mass is shown in Fig. 1. Within the $a_0(1950)$ mass errors, the total width does not change much. The assignment of $a_0(1950)$ as the $4^3P_0$ state can also be ruled out, because the predicted width for $a_0(4^3P_0)$ with a mass of 1931 MeV is about 37.3 MeV (see also Fig. 3), much smaller than the $a_0(1950)$ width. Therefore, the measured mass and width for $a_0(1950)$ are in favor of it being the $3^3P_0$ state.

FIG. 1: The dependence of the total width of $a_0(3^3P_0)$ on the initial state mass. The dashed line with a green band denotes the BABAR experimental data [13].

As shown in Fig. 1, the predicted width for $a_0(3^3P_0)$ falls below the lower limit of Crystal Barrel's solution I for the $a_0(2020)$ width of $330 \pm 75$ MeV [12], and the predicted width for $a_0(3^3P_0)$ with a mass of 1980 MeV is about 218 MeV, in agreement with Crystal Barrel's solution II for the $a_0(2020)$ width of $225^{+120}_{-32}$ MeV [12]. The possibility of $a_0(2020)$ being the $2^3P_0$ state can be ruled out because the expected width for $a_0(2^3P_0)$ with a mass of 1980 (2025) MeV is about 895 (995) MeV, much larger than the observed width of $a_0(2020)$, as shown in Table I. The predicted width for $a_0(4^3P_0)$ with a mass of 1980 (2025) MeV is about 36.8 (34.6) MeV (see also Fig. 3), much smaller than the $a_0(2020)$ width, which makes $a_0(2020)$ unlikely to be the $4^3P_0$ state. So, the measured mass and width for $a_0(2020)$ are consistent with an assignment as the $3^3P_0$ state.
The experimental evidence for both $a_0(1950)$ and $a_0(2020)$ turns out to be consistent with the presence of the same resonance corresponding to $a_0(3^3P_0)$. This naturally establishes 1.9 GeV as the approximate mass for the $n\bar{n}$ members of the 3P nonets, which could be useful in the experimental search for the $n\bar{n}$ members of the 3P nonets. The dominant decay modes of $a_0(3^3P_0)$ are $\pi(1300)\eta$, $\pi\eta(1475)$, $\pi b_1(1235)$, $KK_1(1270)$, and $\rho\omega$. $a_1(1640)$ and $a_2(1700)$ as the 2P radial excitations have been established [25,51], which also fixes the natural mass scale for the $n\bar{n}$ members of the 2P multiplets at about 1.7 GeV. One can expect to find $a_0(2^3P_0)$ near 1.7 GeV. At present, no candidate for the isovector scalar state around 1.7 GeV has been reported experimentally. An $a_0$-like pole associated with a resonance with a mass of about 1760 MeV is found by investigating the meson-meson interaction in Refs. [52,53]. The $a_0(2^3P_0)$ mass in the extended linear sigma model is expected to be $1790 \pm 35$ MeV [17]. Systematic studies of the meson spectra in the relativistic quark models show that the expected $a_0(2^3P_0)$ mass is about $1679 \sim 1780$ MeV [18,47]. Phenomenologically, it is suggested that the light mesons could be grouped into the following Regge trajectories [54],

$$M^2 = M_0^2 + (n-1)\mu^2, \qquad (7)$$

where $M_0$ is the lowest-lying meson mass, $n$ is the radial quantum number, and $\mu^2$ is the slope parameter of the corresponding trajectory. In the presence of $a_0(1450)$ and $a_0(1950)/a_0(2020)$ being the $1^3P_0$ and $3^3P_0$ states, respectively, the $a_0(2^3P_0)$ mass can be determined to be about 1744 MeV based on Eq. (7),¹ consistent with the extended linear sigma model prediction [17] and the quark model predictions [18,47].
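Eq. (7) fixes the whole trajectory once two members are known; anchoring it on the masses quoted in footnote 1 below reproduces the numbers used in the remainder of this section. A minimal check:

```python
import numpy as np

# Regge trajectory M^2 = M0^2 + (n - 1) * mu^2, Eq. (7), anchored on
# a0(1450) (n = 1, 1.474 GeV) and a0(1950)/a0(2020) (n = 3, 1.978 GeV).
M1, M3 = 1.474, 1.978
mu2 = (M3**2 - M1**2) / 2              # slope parameter, GeV^2
M2 = np.sqrt(M1**2 + mu2)              # a0(2^3P0): ~1.744 GeV
M4 = np.sqrt(M1**2 + 3 * mu2)          # a0(4^3P0): ~2.187 GeV
print(f"mu^2 = {mu2:.3f} GeV^2, M(2P) = {M2:.3f} GeV, M(4P) = {M4:.3f} GeV")
```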
The strong decays of $a_0(2^3P_0)$ with a mass of 1744 MeV are presented in Table IV. The total width of $a_0(2^3P_0)$ is expected to be about 364 MeV. The dominant decay modes of $a_0(2^3P_0)$ include $\pi\eta(1475)$, $\pi\eta(1295)$, $\pi b_1(1235)$, $\pi f_1(1285)$, and $\rho\omega$. The dependence of the total width of $a_0(2^3P_0)$ on the initial state mass is shown in Fig. 2. When the initial state mass varies from 1700 to 1800 MeV, the total width of the $a_0(2^3P_0)$ varies from about 298 to 460 MeV. With the initial state mass of 1700 MeV, our predicted width of 298 MeV is in agreement with the width of 293 MeV expected by Ref. [25] for $a_0(2^3P_0)$.

¹ We take $M_{a_0(1450)} = 1474$ MeV and $M_{a_0(1950)/a_0(2020)} = (1931 + 2025)/2 = 1978$ MeV, the average value of the $a_0(1950)$ mass reported by the BABAR Collaboration [13] and the favoured solution for the $a_0(2020)$ mass [12].

Similarly, the $a_0(4^3P_0)$ mass can be determined to be about 2187 MeV based on Eq. (7), consistent with 2250 MeV, the expected mass for $a_0(4^3P_0)$ in the quark model [18]. The strong decays of $a_0(4^3P_0)$ with a mass of 2187 MeV are listed in Table V. The dependence of the total width of the $4^3P_0$ isovector state on the initial state mass is shown in Fig. 3. A narrow width for $a_0(4^3P_0)$ is predicted. The $\pi\eta(1295)$, $KK_1(1270)$, $\rho\omega$, and $\pi\eta_2(1645)$ channels are the dominant decay modes for $a_0(4^3P_0)$. As can be seen in Figs. 1 and 3, the width derivatives show a discontinuity around 1950 MeV because the decay channel $K\bar{K}(1460)$ opens above this energy.
IV. SUMMARY AND CONCLUSION
Observations of the state $a_0(1950)$ by the BABAR Collaboration have enlarged the family of the isovector scalar mesons. In this work, we discuss the possible quark model assignments of $a_0(1950)$ and $a_0(2020)$ by calculating their strong decays in the $^3P_0$ model. We suggest that $a_0(1950)$ and $a_0(2020)$ can be regarded as the same resonance, referring to $a_0(3^3P_0)$. The confirmation of $a_0(1950)/a_0(2020)$ as the $3^3P_0$ state thereby establishes about 1.9 GeV as a natural mass scale for the $n\bar{n}$ members of the 3P nonets.
With $a_0(1450)$ and $a_0(1950)/a_0(2020)$ taken as the $1^3P_0$ and $3^3P_0$ states, respectively, in Regge phenomenology the masses of $a_0(2^3P_0)$ and $a_0(4^3P_0)$ are predicted to be about 1744 MeV and 2187 MeV, respectively. The predicted masses for $a_0(2^3P_0)$ and $a_0(4^3P_0)$ are consistent with some other theoretical expectations. The total widths of $a_0(2^3P_0)$ and $a_0(4^3P_0)$ are expected to be about 364 MeV and 36 MeV, respectively. Our predictions could be useful for the experimental study of the higher isovector scalar mesons.
Background to the University of Queensland Archaeological Services Unit's Lang Park Salvage Excavations: History, Significance Assessment and Methods
Brisbane’s major football venue, Lang Park, is undergoing a $280 million redevelopment. As part of this project the University of Queensland Archaeological Services Unit (UQASU) developed a cultural heritage management plan for the site. UQASU identified that the Lang Park site once housed a number of historic cemeteries, dating from the 1840s, and an early brick drain. These were assessed to be of high cultural heritage significance, and in 2000 UQASU formulated policies and strategies for their management. In 2001, UQASU began the salvage of those parts of the culturally significant elements that were to be deleteriously affected by earthworks and building activity. To date 397 burials have been exhumed.
By 1861 the Anglican cemetery had expanded to four acres and absorbed the Aboriginal burial area, the latter becoming redundant because of the diminishing number of Aboriginal people within the Brisbane area (Fisher 1994:38). The Presbyterians had also enlarged their reserve and the Baptists had acquired a parcel of land to the north of the Congregationalist/Wesleyan reserves (Fisher 1994:38). The Primitive Methodists applied for a site but failed to secure one.
During the 1850s and 1860s, Brisbane witnessed a rapid and unanticipated increase in population and urban development, and the cemeteries were soon bordered by streets and allotments (Brisbane Courier 15 March 1864). The cemeteries themselves became overcrowded with gravesites and fell into neglect. Ad hoc, shallow, or waterlogged burials were reported and the entire site posed a health risk due to its flood-prone hollows and proximity to the Milton water reserve (Brisbane Courier 15 March 1864; Fisher 1994:41). Considerable public pressure mounted to regulate burial procedures and establish a larger general cemetery at a site further from the town centre (Brisbane Courier 30 December 1862:2, 6 February 1864; Fisher 1994:37-38, 41).
In 1865 a Cemetery Act was passed by the Queensland Parliament that permitted the closure of the North Brisbane Burial Grounds (Fisher 1994:38, 41). By the 1870s new cemeteries controlled by the municipal council were established at other locations in the greater Brisbane region (Fisher 1994:36). However, because of ease of access, familiarity and lower burial costs, the North Brisbane Burial Grounds continued to be used as a place of interment by Brisbane residents until it was officially closed on 1 August 1875, although there is mention of sporadic burials occurring through to the 1890s (Fisher 1994:44; Hayman 1994:67). Between the 1870s and 1890s, physical changes to the Burial Grounds occurred. Between 1875 and 1877, 40 to 50 bodies were exhumed as a result of the closure (Fisher 1994:52). Three streets (Judge, Caroline and Caxton) came to dissect the site from east to west (Fisher 1994:44, 1995:111). Caxton Street ran across part of the Jewish section, necessitating the complete removal or re-situating of some remains (Fisher 1994:52). Residential development and a new Petrie Terrace Boys School (built in 1888) encroached upon the Burial Grounds from the north, while blocks of land immediately adjacent to the cemeteries were given over for government and recreational use (Fisher 1994:46, 1995:112; The Telegraph 11 October 1962:25).
In 1886 the open brick Milton Drain was dug along the western boundary to help drain the swamp (Figure 4) (The Builder and Contractor's News 15 July 1887:156). This structure was designed by William David Nisbet and J.B. Stanley. Nisbet was the Engineer in Charge of the City Drainage and he is credited with the design of much of the drainage of the inner city area (Richard 1980:4-7).
Despite these various alterations to the site, all the reserves except the Jewish cemetery survived reasonably intact through to the end of the nineteenth century. Over time, though, the problems of vandalism and neglect manifested themselves (Fisher 1994:46; QSA PRV 9892/1 4 June 1913, 28 July 1914) and the site became the focus of more public debate (Fisher 1994:46). Formal graveyards were now viewed as obsolete and ugly, while reformist movements in civic planning highlighted the need for open parkland in the densely inhabited working-class suburbs that the area now supported (Brisbane Courier 3 May 1975:8; Fisher 1995:129). Despite the construction of the Milton Drain, there were also sanitation problems posed by stagnant pools persisting within the disused graveyards.
A change of use from burial ground to recreation grounds was mooted in 1910 (Fisher 1994:47; QSA PRV 9892/1 28 July 1914). Approval for the conversion came with the passing in 1911 of the Paddington Cemeteries Act and the cancellation of all previous denominational grants (Fisher 1994:47). The conversion of the old cemeteries to a park was supervised by the Lands Department (QPP 1914 (2):407). The public was notified of the proposal for the establishment of a 'people's park', and a period of one year was given for concerned relatives to apply for the remains and/or memorials of deceased family members to be transferred to other cemeteries (Fisher 1994:47; QSA PRV 9892/1 28 July 1914). The cost of exhumation and re-burial was defrayed by the Department. Unfortunately, locating graves proved difficult as no cemetery records could be found (QPP 1914 (2):407). Many graves were unmarked or had lost headstones, and it appears that in some cases bodies were buried secretly or unofficially (Fisher 1994:40; QPP 1914 (2):407; Hayman 1994:67). Furthermore, direct associations between the living and many of the dead had been lost through time, leading to faulty memories of gravesites or lack of response to the Lands Department advertisements (QPP 1914 (2):408).
Ninety-nine remains and 148 memorials were relocated, and 505 unclaimed memorials were stacked in a special memorial reserve, or 'buffer zone', along the northern side of the Anglican Christ Church (Fisher 1994:47; QSA PRV 9892/1 21 April 1914). In all, approximately 150 graves were exhumed following the closure of the cemeteries in 1875. This number stands in marked contrast to the 4,643 graves that officers of the Lands Department located in 1914 (QPP 1914 (2):407) and Fisher's (1994:52) estimate that there were at least 10,000 interments in the North Brisbane Burial Grounds between 1843 and 1875.
Following the exhumations, the site was cleaned and leveled (QSA PRV 9892/1 12 June 1913, 9 July 1913, 25 July 1913, 3 October 1913, 6 November 1913, 31 March 1914, 16 April 1914). The bulk of the land south of Caxton Street, which included the Anglican, Presbyterian, Roman Catholic and Jewish sections, was turned into a public park (QGG 1914 102(145):1541) (Figure 7). This constituted an area of 15 acres 1 rood 34 perches (about 6.2 hectares) and was duly named Lang Park in memory of the early Queensland pioneer, Dr John Dunmore Lang (QPP 1914 (2)). After some initial wrangling, control of the new park went to the Ithaca Town Council, which initiated a program of shade tree planting and filling remaining hollows with garbage as well as silt from the Milton Drain (BCC Plan Custodian WDP 115 1928/1932; DNR Res. 915 10 May 1934; Fisher 1995:115; The Telegraph April 17 1914). It was not until 1924, when control of Lang Park was transferred to the Brisbane City Council, that the site began to be developed more intensely. During the 1920s and 1930s various improvements occurred, including the establishment of tennis courts at the northern and southern ends and the conversion of the Milton Drain into a closed system (BCC Plan Custodian WD B-7-29 1928; WDP 114 1928; WDP 115 1928/1932; Fisher 1995:117). However, the impetus for producing a recreational facility for the local population gradually faded. Throughout the rest of the twentieth century Lang Park gradually became fragmented and alienated from the general public by commercial, sporting and other interests.
Lang Park's use as a convenient garbage dump assumed a dominant role through to the 1950s, with the disposal of rubbish and nightsoil extending over large areas (DNR Res. 915 8 May 1948, 27 August 1951; Fisher 1995:121-122; JOL c.1930 neg. 106336; QSA PRV 10121/1 31 December 1930) (Figure 5). In 1932, the Queensland Amateur Athletics Association (QAAA) obtained a lease over the southern portion of the park and by 1933 a fenced sporting ground was established, which included an oval with running track and associated facilities (Fisher 1995:117; DNR Res. 915 28 May 1937, 14 May 1948, 26 October 1949, 10 November 1949) (Figure 6). In the 1920s the northern section saw a sewage maintenance depot built, followed by a BCC electricity substation, and then, during World War II, the establishment of a military base (Brisbane Courier 6 September 1928; DNR Res. 915 9 March 1942, 8 May 1948, 10 November 1949; Fisher 1995:121-122; Hayman 1994:79; Sun Magazine 7 May 1989:47). After the cessation of hostilities this area was leased to the Housing Commission and the Queensland Police-Citizens Youth Welfare Association (DNR Res. 915 10 March 1955; Fisher 1995:122).
During the 1950s a struggle developed between various bodies over the control, financing and future direction of the park (Fisher 1995:122-123; DNR Res. 915 7 August 1951, 16 February 1953). Between 1954 and 1959, the QAAA surrendered control of its area to the Queensland Rugby League (QRL) and Brisbane Rugby League (BRL), which were lobbying to use the site as a new home for their code (Fisher 1995:122, 124-125; DNR Res. 915 30 September 1955). The advent of the QRL/BRL tenancy, later to give rise to a special controlling body, the Lang Park Trust, marked the period of greatest development and change for the park (Fisher 1995:124-125). The old oval was upgraded and became enclosed to the north, east and south by terraced earth mounds with reinforced concrete risers (DNR Res. 915 16 May 1957; Fisher 1995:125) (Figure 9). The other main physical changes were the removal of the old military buildings leased to the Housing Commission, and the construction of two large grandstands, one on the western side of the oval and one on the eastern (DNR Res. 915 16 May 1957, 8 July 1957, 6 October 1978; Fisher 1995:125, 127).
During the 1980s and 1990s further changes occurred to the place. During the 1980s, the Queensland Police-Citizens Youth Welfare Association (now Police Citizens Youth Club - PCYC) replaced its original facilities with a newer structure, and an indoor cricket complex was established, later to become the Ozsports Centre and beach volleyball courts (DNR Res. 915 29 November 1984, 11 May 1994, 27 February 1997). In 1992, the controversial Hale Street ring-road was built, which resulted in the resumption and excavation of a significant area of land along the eastern boundary of the park. This necessitated the demolition of a hall associated with the heritage-listed Christ Church, and the destruction of part of the memorial reserve (Fisher 1995:128; Hayman 1994; QGG 1989 91:1799). In 1993-1994 another major alteration to the site occurred with the replacement of the western grandstand by a massive six-storey structure (the Suncorp/Metway grandstand) capable of seating 13,000 (Fisher 1995:128). Finally, in the mid-1990s, in the northwest corner of the park, the Queensland Government Sporthouse building was constructed.
Assessment of Archaeological Potential
UQASU's approach to the assessment of the potential subsurface archaeological material at the Lang Park redevelopment site closely followed the approach advocated by Pearson and Sullivan (1995). This approach emphasises the division of the process of cultural heritage management planning into three sequential stages: significance assessment; management policy; and development of management strategies.
Significance Assessment
The Burra Charter (Marquis-Kyle and Walker 1992:21) defines cultural significance as 'aesthetic, historic, scientific or social value for past, present or future generations'. This is mirrored by the Queensland Heritage Act, 1992, which defines cultural heritage significance under Section 4 of the Act as 'aesthetic, historic, scientific or social significance, or other special value, to the present community and future generations'. Two important factors that impact on the significance of a cemetery and its contents are rarity and integrity (New South Wales Heritage Office 1998:7). It may not be possible to disentangle these different elements of significance, as the community's understanding of a place is usually built on many intertwined elements. Nonetheless, it is the cultural significance of a place that should determine how its cultural heritage is managed.
Information concerning the various 'values' of Lang Park, and how these might have been impacted upon by the material changes that have occurred over time at the site, came from a number of sources. Historical information was collected from Queensland State Archives, Brisbane City Council, John Oxley Library, University of Queensland Applied History Centre, Department of Natural Resources, and Lang Park Trust. Interviews were conducted with people who have been, or currently are, responsible for management of the area. Information on social significance was obtained from a number of representatives of the major church groups and from newspaper articles concerning previous disturbance to the site. Physical evidence for levels of fill and other material changes was gathered by a site inspection and the results of boreholes and testpits dug by Sinclair Knight Merz Pty Ltd and ARUP Geotechnics for the purpose of testing for contaminated soils. From the assessment of the gathered data, a number of 'Statements of Significance' were generated concerning three historical elements of Lang Park: the cemeteries; the Milton Drain; and the old landfill.
The Cemeteries
The cemeteries have high historic and scientific value. Because they were the principal places of interment in Brisbane for the first 30 years after free settlement commenced, they are unique. The cemeteries can not only provide data on early cemetery design, civic planning and sanitation, but are also intimately linked to the political, social and demographic processes of an important formative period for Paddington, Brisbane and Queensland more generally.
A key factor in determining the significance of the cemeteries was their integrity. Long-standing official attitudes towards the cemeteries have maintained that nothing could be left of them due to past exhumations and various phases of building work on the site (e.g. Malone and Koch 2001). Yet from the historic information alone there was sufficient evidence to suggest that a large number of graves remained undisturbed beneath the current land surface. Since 1914 the site has been steadily covered by a protective layer of fill. Although some cutting has occurred, prior to the 1970s excavation for building footings and other structures remained relatively unintrusive. It was only with the construction of the eastern grandstand and the Hale Street ring-road that any significant portions of the cemeteries were destroyed. The remaining burials and their contents therefore have potential to provide information on the conditions of life within the colony, including health (e.g. Higgins 1989), mortality rates, nutrition (e.g. Sullivan et al. 1989), religious and socio-cultural practices (e.g. Little et al. 1992), gender, status and class.
The cemeteries also have social significance. Many early pioneers, both prominent and humble, were buried at Lang Park (QPP 1914 (2):408). Consequently there remain people today who wish to preserve the memory and the resting places of these pioneers, or are aware of family members who are interred there. A single level of social significance was difficult to establish, though, due to the variety of opinions and attitudes within the local community and religious groups consulted. However, during the Hale Street widening in the 1990s, it was clear that much distress and ill-feeling was engendered by the destruction of graves (see Doughty 1991:1; Robertson 1991).
The Milton Drain

The Milton Drain has high historic and scientific significance. Historic sources indicate that the original brickwork survived the installation of concrete piping within it and now lies buried beneath fill and the foundations of the western grandstand. It exists as the last surviving example of this style of drain from the nineteenth century in Brisbane (B. Rough, Brisbane City Council, pers. comm., 2000). Not only is the drain associated with an important early engineer, William Nisbet, it has potential to yield architectural, town planning and engineering information. Any deposits retained inside the drain may provide data on early diet, environment, health and nutrition.
The Historic Landfill
The rubbish dumped on the site has low historic and moderate scientific significance. The results from test pitting and boreholes conducted as part of the construction process indicated a layered deposit of ash, fill, ceramics, metal, glass and organic matter across most of the site. The depth of this deposit varied from less than 1m in the south to over 7m in the north (Sinclair Knight Merz Pty Ltd 2000). Although it would appear that an extensive and reasonably sound archaeological deposit exists that has potential to yield data on twentieth century material culture, consumption patterns and landfill formation processes in Brisbane, there are many other local sites that can provide similar information.
Management Policy and Development of Management Strategies
From the 'Statements of Significance', three Zones of Cultural Heritage Significance were generated. These zones were High Significance (the cemeteries and the Milton Drain), Low-Medium Significance (the landfill) and No Significance (those areas disturbed to a large extent by previous building activity).
Two Impact Zones (High and Low) were also established, which detailed the level of intrusion upon the archaeological record by the redevelopment. The new stadium complex encompasses almost the entire site and requires the demolition of the earth mounds, eastern grandstand and the Ozsports/Police Citizens Youth Club facilities. However, due to the types of footings to be used and the varied levels of fill in different areas, its impact is not uniform. In some areas its effects are negligible, while in the Zone of High Impact it is impossible to preserve any subsurface material.
From the outset of the cultural heritage assessment, UQASU was informed that the redevelopment would and had to proceed no matter the outcome of the assessment. Our task was to manage the impact on the cultural remains within the redevelopment's given time frame. After a review of the goals of the heritage management and redevelopment aspects at Lang Park, conflicts were identified and a Cultural Heritage Management Policy was developed accordingly. This policy called for the in situ conservation of elements of high significance where possible. The entirety of the Milton Drain and certain sections of the cemeteries could be retained because they lay within the Zone of Low Impact. The sections of the cemeteries in the Zone of High Impact could not be preserved and would need to be archaeologically salvaged and investigated prior to appropriate re-interment of any remains recovered. At various points across the site, archaeological sampling of the landfill would also be required.
In order to implement the recommended Management Policy, Cultural Heritage Management Areas were generated through the intersection of the Zones of Cultural Heritage Significance and the Zones of Impact (Figure 8).Area 1 was where the redevelopment would have high impact upon an element of high cultural significance.As Figure 8 shows, this encompasses sections of the Anglican, Aboriginal, Presbyterian and Roman Catholic cemeteries.Area 2 was all other parts of the redevelopment, including the Milton Drain, which would not be impacted upon.
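The derivation of the Management Areas is, in GIS terms, a polygon overlay. Purely as an illustration (hypothetical coordinates; the shapely library is assumed, and this is not necessarily how UQASU produced Figure 8):

```python
from shapely.geometry import Polygon

# Hypothetical footprints for the two input zones (map units arbitrary).
high_significance = Polygon([(0, 0), (100, 0), (100, 60), (0, 60)])  # cemeteries, drain
high_impact = Polygon([(40, 20), (140, 20), (140, 90), (40, 90)])    # stadium earthworks

area_1 = high_significance.intersection(high_impact)  # salvage prior to construction
area_2 = high_significance.difference(high_impact)    # retain and conserve in situ
print(round(area_1.area), round(area_2.area))
```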
Sets of Management Strategies, tied to the different phases of the redevelopment, were devised for each of these zones. Applying to all stages and areas of the redevelopment was the requirement that work must comply with provisions of the Queensland Heritage Act, 1992 and the Cultural Record (Landscapes Queensland and Queensland Estate) Act, 1987. Consequently, it was recommended that a program of cultural awareness training be initiated as part of the general workplace induction so that demolition and construction personnel could be made aware of their legal obligations and also how their activities may impact on the archaeological record.
In Cultural Heritage Management Area 1, it was recommended that due care be taken during demolition and that an archaeological consultant periodically monitor progress. It was also recommended that access routes for vehicles be designed to minimise disturbance to the ground surface and that security precautions be implemented to ensure that amateur collectors and others did not destroy the archaeological context once the demolition exposed subsurface sediments.
After demolition but prior to the construction phase, a major archaeological salvage program of the remaining burials was to be undertaken in Cultural Heritage Management Area 1.This would involve the excavation, mapping and laboratory analysis of burials and contents by suitably qualified archaeologists.In the Aboriginal section, this process would occur with Indigenous involvement.All artefacts removed during the excavations needed to be conserved and curated according to the requirements of the Queensland Museum or, in the case of Aboriginal material, treated in accordance with the wishes of the traditional owners.
Prior to the start of the excavation and analysis, arrangements were made with the Brisbane City Council for the re-interment of any non-Indigenous skeletal material removed from the site.Should skeletal or burial material be unearthed during the construction process a suitably qualified archaeological consultant and the Cultural Heritage Branch of the Environmental Protection Agency would have to be notified at once and the salvage procedures implemented.
In Cultural Heritage Management Area 2, a sondage sampling strategy across the redevelopment site was recommended to ensure that a representative sample of artefacts from the twentieth century dumping phase of the site was collected.These artefacts would need to be conserved and deposited with the Queensland Museum.Care also needed to be taken during the removal of two large fig trees growing on the site.One was situated in the vicinity of the Milton Drain on the northern side of the Suncorp/Metway grandstand and in this case the removal could impact on the brickwork of the drain.The second tree was located within the area of the Aboriginal cemetery and its removal had the potential to disturb gravesites.
Following the construction of the new stadium UQASU recommended that a memorial be erected to inform the visiting public of the importance of the site as the location of the first cemeteries related to the free settlement of Brisbane.Furthermore, any new developments proposed in any part of the Zone of High Cultural Heritage Significance would require the implementation of mitigation and salvage strategies.
Implementation of the Cultural Heritage Management Policy and Strategies
In August 2001, demolition of the eastern stand, terraced concrete seating, earth mounds and associated buildings at Lang Park commenced. Prior to this, considerable consultation occurred between the developers (the Lang Park Joint Venture), Project Services, UQASU, the Turrbal Association Inc., church groups, including the Brisbane Council of Churches and the Chevra Kadisha (Jewish Burial Society), and subcontractors. There were many issues to consider, including the safe removal of contaminated fill, the control of media coverage, compliance with workplace health and safety regulations, and the development of site protocols for scientific staff, demolition and construction employees and visitors. There were also issues not always associated with an archaeological project, such as dealing with the politics and legal demands of the construction industry, and the heightened public emotions attached to these cemeteries and the human remains they contained. With regard to the latter, concern over the disturbance of gravesites was acknowledged through an on-site ecumenical service held by church leaders and plans for a re-interment ceremony.
As in any project of this scale, it has been necessary for all parties to learn and accommodate the goals and methods of the others.A cultural heritage procedures manual was developed which formed the basis for the cultural heritage awareness training of demolition/construction crews.Conversely, archaeological personnel have been required to undergo general and site-specific safety inductions and to comply with general construction site practices.
In August 2001, UQASU began the salvage of the affected cemetery areas, initially in the Aboriginal section (in Cultural Heritage Management Area 1), with Turrbal representatives assisting with the excavation, and then moving into the Anglican area. Later salvages of the Roman Catholic and Presbyterian sections also occurred, while no archaeological activities were carried out in the Jewish, Baptist, Congregationalist or Wesleyan areas. Our method has been to use large (20 tonne) excavators with batter buckets to remove layers of twentieth century landfill until the original 1914 land surface is revealed (Figure 10). This surface was then assessed for changes in soil colour and texture indicating grave locations (see Owsley et al. 1997:210, Plate 12.6). Rectangular gravesite soil 'stains' stood out clearly from the undisturbed ground surface (Figures 10 and 11). These were identified and mapped. As this was a salvage operation with time being a priority, the heavy excavation machinery was retained to scrape back the subsoil until either burial remains, such as coffin outlines or wood, were exposed or the pad level for the new stadium's footings was reached. Those gravesites which contained recognisable material were assigned feature numbers and their details recorded and photographed. They were then removed using standard archaeological techniques. Samples of coffin wood, coffin furniture and burial sediments were retained for laboratory analysis (Figure 12). Possible remains of demolished tombs, including terracotta edging, shells, and dressed sandstone and porphyry, were collected and recorded.
Preliminary Results
Our prediction that a large number of burials would have survived post-depositional and building activities on the site has proven correct. To date, 397 burials have been exhumed, along with a broken headstone and a trench containing broken monumental masonry. The headstone was collected from the Roman Catholic section and was marked simply 'JS 1874'. One hundred and eighty-three burials have been exhumed from the Anglican section, 16 from the Aboriginal section, 163 from the Roman Catholic section and 35 from the Presbyterian section (Table 1). The concentration of burials varies in each section, with the Roman Catholic section containing more than triple the density of the Presbyterian section. Many of the burials in the Roman Catholic section were placed very close together indeed (Figure 13). From the results presented in Table 1, and using the density from the Presbyterian section as a guide to the maximum densities of the unexcavated sections, we estimate that approximately 4,500-5,000 burials occurred in the North Brisbane Burial Grounds. This figure corresponds more closely to the 1914 Lands Department count of 4,643 marked graves than to Fisher's (1994:52) estimate of 10,000 burials. This disparity demonstrates the validity of an archaeological approach in the management of historic cultural heritage.
The majority of the excavated graves have proved to be relatively shallow, being dug into hard subsoils consisting of clay and phyllite fragments overlying a phyllite base. This may confirm historic reports of ad hoc burial practices at the site. Most remains have shown poor levels of preservation of organic material. In many cases, particularly with regard to children's graves, both bone and wood have become little more than dark stains in the subsoil. Three intact coffins have been disinterred (one from the Anglican section and two from the Roman Catholic section), and their contents will be removed under laboratory conditions (Figure 14). The excavation and analysis results will be published in detail separately.
Conclusion
The Lang Park site has had a long and varied history which, since 1914, has entailed considerable physical change. The current redevelopment is just another chapter in that history. Its use has shifted from a burial place, to a public park, and then to a commercial sporting ground. Many of the changes to the site have been the result of the need to solve drainage issues and fill in low-lying land, and hence the nineteenth century land surface now lies beneath twentieth century fill. Various structures have also come and gone, and in some areas deep excavation has occurred.
Developing and applying an appropriate Cultural Heritage Management Policy to manage the impact of the redevelopment has proved a challenge. The site is large and complex in terms of the placement of present and past features and sediments, and there are also competing social, political, economic and scientific factors. While the salvage excavation is not yet complete, it promises to yield valuable insights into colonial life in Brisbane. Already it has demonstrated that many gravesites have been preserved. Furthermore, this is arguably the largest and one of the most important historical archaeological excavations in Queensland, and it is a valuable test case for cultural heritage management, cemetery salvage methodology, and the structuring of relationships between archaeologists, traditional owners, community groups and developers.
Repairs to the Milton Drain and its branches were undertaken (QSA PRV 9892/1 4 June 1913) and Judge Street was converted into a pedestrian access way. By mid-1914 most work was complete. The northern part of the Baptist cemetery was converted into the Paddington Kindergarten and Creche (QGG 1914 102(18):286; QSA PRV 9892/1 21 April 1914). The rest of the Baptist ground, plus the Congregational and Wesleyan cemeteries, and part of Caroline Street became the Ithaca Children's Playground (QGG 1914 102(145):1541; QSA PRV 9892/1 21 April 1914).
Figure 12. Coffin handles from a grave in the Presbyterian section (Photograph: UQASU).
Figure 14. Complete coffin and contents removed from the Roman Catholic section (Photograph: UQASU).
Compound Autoregressive Network for Prediction of Multivariate Time Series
Introduction
In the information era, data play a significant role in various artificial and natural systems. Data provide the basis for machine control, industrial system operation, the economic market, environment management, etc. For the complex systems above, accurate real-time data are essential for control and operation. Moreover, future information is also very important: it is predicted from historical data and can guide beforehand operations for system adjustment, environmental adaptation, and accident avoidance. Therefore, reliable prediction of data in the time domain becomes an urgent issue for complex systems. Owing to their complicated composition and internal mechanisms, the time-series data in such systems are usually nonstationary, nonlinear, and noisy. These complicated features make prediction difficult. Besides, the variables in the time series affect each other, which complicates the nonlinear relations. The prediction issue thus becomes a challenge in the face of complicated time-series characteristics and multivariate correlation.
Regarding the prediction issue, various explorations have been conducted to excavate the potential rules and features in time-series data. For practical applications in some fields, prediction methods have been proposed based on mechanism models. In these methods, the inner mechanism of a system is studied deeply, and the relations between system components are built with the approaches of physics, chemistry, and biology, such as models of the water environment (WASP [1] and EFDC [2]) and models of atmospheric diffusion (the Gaussian puff and plume models [3]). The system change can be predicted based on the mechanism model from the viewpoint of model simulation. However, such models are difficult to build because of the complex and unknown inner structure. Moreover, professional and interdisciplinary knowledge is required for the mechanism analysis.
The data-driven solution has been an effective complement to the mechanism methods. Different from the mechanism methods, data-driven methods focus on the external data characteristics instead of the inner structural relations. They have developed from statistical methods to machine learning methods, which can excavate more features from mass data. Machine learning mainly solves the problems of parametric model setting and adaptation in statistical methods such as the autoregression (AR), moving average (MA), autoregressive moving average (ARMA), and autoregressive integrated moving average (ARIMA) models [4]. Machine learning, including traditional neural networks and deep learning, also faces some problems in time-series analysis. First, multiple variables usually need to be considered for the target predicted variable. In multivariable analysis, traditional networks mainly model the multivariable mapping relations while neglecting the sequential features, whereas deep learning methods are specialized in sequential feature extraction for univariate series. Second, computational efficiency should be considered in prediction models, especially for terminal applications that cannot provide high-end configurations. Third, the training method affects the network performance largely, so a suitable and extensible learning framework should be designed for the neural network. Based on the analysis of the existing research, we explore an approach to time-series prediction from the viewpoints of multivariable modelling performance, computational efficiency, and training methods. The rest of this paper is organized as follows: Section 2 introduces the related prediction methods, including statistical models and machine learning methods. In Section 3, the main prediction model is proposed and the compound autoregressive network is presented with the prediction algorithm. Experiments are conducted in Section 4 to test the network.
The methods and results are discussed in Section 5. Finally, the paper is concluded in Section 6.
Related Works
The direct solution of prediction is to figure out the change rule of the system, which is the basic idea of the mechanism-based prediction methods. Obviously, it is difficult to build a complete mechanism model to describe the system composition and change rule. The data-driven method then becomes a feasible solution, using the external characteristics irrespective of the system's inner construction and relations. Data-driven methods can be divided into two categories: statistical models and machine learning models.
Prediction Models Based on Statistics.
The statistical model is based on mathematical description and calculation of the data. The classical statistical models are built on the autocorrelation function and exponential decays of the time series. The typical models include the AR, MA, and hybrid models. The AR model describes the change process of the regressor variable itself: the random variables at the next time steps are expressed as a linear combination of the variables at the previous moments. The MA model uses a sliding window to extract the time-series features from adjacent data segments. Because the length of the sliding window mainly impacts the feature-extraction ability, some exponential smoothing methods have been proposed to optimize the MA model, among which the cubic exponential smoothing method is widely applied. Based on the AR and MA models, hybrid models have been proposed for more accurate modelling, including the ARMA and ARIMA.
The ARIMA has been the typical hybrid model for the nonstationary regressive issue. It has been applied in prediction problems of environment monitoring [5], financial economy [6,7], food safety [8], traffic systems [9], etc.
The statistical models can be expressed as follows. Let x_t be the value of the time series at time t, p the number of autoregressive terms, q the number of moving average terms, d the differencing order, ε_t the white noise at t, L the lag operator, and α_1, …, α_p and β_1, …, β_q the weights. Then the AR model can be expressed as

x_t = α_1 x_{t-1} + … + α_p x_{t-p} + ε_t.

The MA model is

x_t = ε_t + β_1 ε_{t-1} + … + β_q ε_{t-q}.

The ARMA model is

x_t = α_1 x_{t-1} + … + α_p x_{t-p} + ε_t + β_1 ε_{t-1} + … + β_q ε_{t-q}.

The ARIMA model is

(1 − α_1 L − … − α_p L^p)(1 − L)^d x_t = (1 + β_1 L + … + β_q L^q) ε_t.

The statistical models rely on the assumption of stationarity in the time series. Although the models have been improved and extended, they are still limited to transformations and processing of stationary data. Besides, selecting a proper model and estimating its parameters remain problems in practice. Experience indicates that the models perform well in linear short-term prediction, but the prediction accuracy declines markedly for complex and long-term time series. New prediction solutions for nonstationary time series are therefore needed.
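For illustration, such a statistical baseline can be fitted in a few lines of Python. The sketch below assumes the statsmodels library and uses placeholder data and orders; it is not the configuration used in the experiments.

```python
# Minimal ARIMA baseline sketch (assumes statsmodels; the paper does
# not state its tooling, so treat this as illustrative only).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=200))   # nonstationary placeholder series

# ARIMA(p=2, d=1, q=1): the differencing order d absorbs the trend.
fitted = ARIMA(series, order=(2, 1, 1)).fit()
print(fitted.forecast(steps=6))            # 6-step-ahead forecast
```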
Prediction Model Based on Machine Learning.
Machine learning has developed fast in classification and regression research. The black-box thought of machine learning seems to provide extensive possibilities for complex modelling problems. The backpropagation neural network (BP), radial basis function neural network (RBF), nonlinear autoregressive neural network (NAR), support vector machine (SVM), and Bayes network have been studied and applied in prediction problems [10].
Some studies have been conducted to improve the network and prediction performance. Pradeepkumar [11] proposed a novel particle swarm optimization algorithm to train the quantile regression neural network, which was applied in financial data prediction. Daly [12] designed the structure of the NAR to predict the video traffic in the Ethernet passive optical network. Wang [13] proposed an adaptive method based on the multiple-rate network to predict the parameters in industrial control. Liu [14] studied an improved grayscale neural network which was tested to predict the traffic stop. Some combinations of different methods are also a hotspot in machine learning studies. Doucoure [15] predicted the wind speed with wavelet analysis and a neural network. Wang [16] improved the BP with the self-adaptive differential evolution algorithm.
The machine learning methods above are mainly shallow networks. They are suitable for multivariate modelling because of their network structure with multiple input nodes.
The data at different time steps are imported independently into the network circularly, which places emphasis on the nonlinear mapping relation instead of the sequence connection in the time domain. Generally, these networks are limited in mass data processing and in modelling complex time-series relations. Especially for the prediction issue, the sequence features should be extracted, which is difficult to realize in the traditional fully connected network. The recurrent neural network (RNN) [17] draws much attention for sequence features. In the RNN, the nodes between the hidden layers are connected, and the input of the hidden layer includes not only the output of the input layer but also the output of the previous hidden layer. The RNN has developed into the multidimensional recurrent neural network (MDRNN) [18] and the bidirectional recurrent neural network (BiRNN) [19] for higher performance. The long short-term memory network (LSTM) [20] was proposed for the long-term dependency problem in the traditional RNN. Some variants of the LSTM have appeared with improvements and redesigns of its structure or gates, including the bidirectional LSTM network (BiLSTM) [21] and the gated recurrent unit (GRU) [22]. Although deep networks usually perform better than traditional networks, they are studied and applied more with univariate rather than multivariate series. Besides, their structures are more complex, and they need more training time and computing resources.
In the prediction problem of time series, on the one hand, we should consider the sequence features of the time series as well as the mutual effects of the related variables. On the other hand, we should balance the network prediction accuracy against the calculating speed and the resources occupied. Considering the related works mentioned above, the advantages of different networks should be utilized, including the simple structure and multivariate analysis ability of shallow networks, as well as the sequence feature extraction of recurrent networks. Therefore, the shallow recurrent neural network NAR [23] is selected as the basic network, which can extract the nonlinear and sequence features in the time series, and a compound network structure and algorithm are designed to analyse multiple variables. The novel framework of the compound network can be applied in prediction problems of complex systems, providing an alternative solution for analysing data change from a data-driven view.
Compound Autoregressive Prediction Network
For the time series in these systems, the main features are the trend in the changing process and the incidence relation among different variables. The trend means that there are potential rules in the changing data, which can be linear, periodic, or stochastic. The incidence relation means the mutual effects among multiple variables. For example, the temperature value fluctuates with its own change rule, and it is also impacted by other meteorological variables such as precipitation and humidity. Based on these two important factors in the time series, a compound neural network is built to predict the object variable. The overall network structure is introduced first.
Then, the components and training methods are analysed. The prediction algorithm for the multivariate time series is proposed finally.
Compound Autoregressive Network.
Among the traditional neural networks, the NAR can realize regression analysis of the time series itself. The network has been applied in practice and performs well in short-term prediction. Besides, the data needed for network training are obviously less than those of deep networks such as the LSTM and GRU.
The NAR can thus be an effective tool for univariate prediction. Moreover, the nonlinear autoregressive network with external input (NARX) develops from the NAR, in view of the incidence relation among multiple variables. With the advantages of the NAR and NARX, the compound network is designed for the multivariate prediction issue, as shown in Figure 1. The compound autoregressive network proposed in this paper is abbreviated as CARN.
The CARN consists of two parts, namely, the primary network and the auxiliary networks. In the prediction issue, one variable is the main target to be predicted, and some variables are selected as the correlated variables according to their correlation degrees. The components in the compound network correspond to the different types of variables: the primary network is built based on the structure of the NARX to predict the object variable, and the auxiliary network is built based on the NAR to provide the reference of the correlated variables.
For the primary network, the inputs include the object variable (Y in Figure 1) and the correlated variables (U in Figure 1). The nonlinear and complex relations among the variables are usually difficult to analyse with mechanism modelling, but the network performs well in black-box mapping-relation mining. The design of the two types of inputs can thus excavate the associative relations among multiple variables. Besides the two types of inputs, the other characteristic of the network is the feedback of the object variable from the output to the input. The changing trend in the object variable itself is usually more important than the multivariable relation, and this self-trend is constructed based on the feedback in the time dimension.
For the auxiliary network, the main inputs are the variables associated with the object variable. The network mainly sets up the time-series trend with the feedback structure. In the feedback, the data change gradient is also set as an input to compensate the prediction. The NAR-based auxiliary network realizes the regression of the univariate series. Moreover, there is usually more than one effect variable for the object variable; therefore, there are several auxiliary networks in practice, and the number of auxiliary networks equals the number of effect variables.
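As a concrete illustration of this input design, the following sketch (an assumption about the exact form, not the authors' code) builds the auxiliary-network input from a delayed window of the effect variable plus a backward-difference gradient feature.

```python
import numpy as np

def aux_input(series, t, n_u, t0=1):
    """Input vector for one auxiliary network at time t.

    Concatenates the last n_u values of the effect variable with a
    backward-difference gradient over the interval t0 (an assumed,
    illustrative form of the paper's 'data change gradient').
    """
    window = series[t - n_u:t]                     # u(t-n_u) ... u(t-1)
    gradient = (series[t - 1] - series[t - 1 - t0]) / t0
    return np.concatenate([window, [gradient]])

u = np.sin(np.linspace(0, 10, 100))                # placeholder effect variable
print(aux_input(u, t=50, n_u=6))                   # 7-dimensional input
```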
Design and Training of the Discrete Networks.
In the framework of the compound network, the primary and auxiliary networks are set up to predict the variables. There are two issues to be solved: the concrete network structure and the network training method. The structures of the networks are shown in Figure 2.
There are three layers in the primary network, namely, the input, hidden, and output layers. The inputs include the effect variables, which come from the auxiliary networks, and the object variable. In the view of the time dimension, the past data of the object variable are used to predict the data at the next time steps, while the data at the present time are provided by the auxiliary networks. The nonlinear regressive function of the network can be expressed as

y(t) = f(u(t − 1), …, u(t − n_u), y(t − 1), …, y(t − n_y)),

where y(t) is the prediction output, u(t) is the effect variable input, (t − i) denotes the time step, n_u is the input delay, and n_y is the output delay.
The relation between the input and hidden layers is

H_j = f( Σ_{i1} W_{i1 j} u_{i1} + Σ_{i2} W_{i2 j} y_{i2} + A_j ),  j = 1, 2, …, l,

where i1 indexes the historical input data, u_{i1} is the i1-th input, i2 indexes the historical output data, y_{i2} is the i2-th output, l is the number of hidden-layer neurons, f is the activation function in the hidden layer, W_{i1 j} is the connection weight between the i1-th input and the j-th hidden neuron, and W_{i2 j} is the connection
weight between the i2-th fed-back output and the j-th hidden neuron, and A_j is the threshold value of the j-th hidden neuron. The network output O can be obtained from the hidden-layer outputs H_j as

O = Σ_{j=1}^{l} W_j H_j + B,

where W_j is the connection weight between the output neuron and the j-th hidden neuron and B is the threshold value of the output neuron. Similar to the primary network, the auxiliary network also has input, hidden, and output layers, but the hidden part is extended to two layers. The inputs include the effect variable itself and the data change gradient, which serves as a reference to promote the prediction accuracy. The network can be expressed as

u(t) = f(u(t − 1), …, u(t − n_u), Δ(t)),

where u(t) is the effect variable input and Δ(t) is the data change gradient, computed from the recent inputs over the time step interval t_0, with n_u the input delay. The concrete model of the auxiliary network, with j1 = 1, 2, …, l1 and j2 = 1, 2, …, l2, is

H_{j1} = f( Σ_{i1} ω_{i1 j1} u_{i1} + Σ_{i2} ω_{i2 j1} Δ_{i2} + a_{j1} ),
H_{j2} = f( Σ_{j1} ω_{j1 j2} H_{j1} + b_{j2} ),

where i1 indexes the historical input data, i2 indexes the linear relation weights between u(t) and u(t − 1), l1 and l2 are the numbers of hidden-layer neurons, n_u is the input delay, f is the activation function of the hidden layers, u_{i1} is the i1-th input, ω denotes the connection weights between neurons, and a is the threshold of the first hidden layer. The output is derived from the second hidden layer as

o = Σ_{j2} ω_{j2} H_{j2} + c,

where b is the threshold of the second hidden layer and c is the threshold of the output layer.
Based on the design of the networks above, the training method should be studied.
The basic learning method derives from the algorithm of backpropagation through time, in which the variable from the feedback can be regarded as a new variable. The errors of the primary and auxiliary networks between the prediction output and the designed output are

e_1 = Y − O,  e_2 = y − o,

where e_1 and e_2 are the errors, O and o are the prediction outputs, and Y and y are the designed outputs. The connection weights ω_{i1 j1}, ω_{i2 j2}, ω_{j1}, ω_{j2}, W_{i1 j}, W_{i2 j}, W_j, a_{j1}, a_{j2}, b_1, b_2, A_j, and B are adjusted with the errors until the global error or the number of training iterations reaches the preset value. Based on the backpropagation algorithm, the weights are updated as

W ← W − η_1 ∂E_1/∂W,  ω ← ω − η_2 ∂E_2/∂ω,

where η_1 and η_2 are the learning rates and E_1 and E_2 are the global errors of the two networks.
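To make the equations above concrete, the sketch below implements a toy single-hidden-layer primary network in NumPy, with one plain gradient step on the squared error. The tanh activation, layer sizes, and learning rate are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)
n_u, n_y, l = 6, 6, 10          # input delay, output delay, hidden size (assumed)

W_u = rng.normal(scale=0.1, size=(l, n_u))   # weights for effect-variable inputs
W_y = rng.normal(scale=0.1, size=(l, n_y))   # weights for fed-back outputs
A = np.zeros(l)                              # hidden-layer thresholds
W_o = rng.normal(scale=0.1, size=l)          # output weights
B = 0.0                                      # output threshold

def forward(u_hist, y_hist):
    """H_j = f(sum W_u u + sum W_y y + A_j); O = W_o . H + B."""
    H = np.tanh(W_u @ u_hist + W_y @ y_hist + A)
    return W_o @ H + B, H

u_hist = rng.normal(size=n_u)   # u(t-1) ... u(t-n_u)
y_hist = rng.normal(size=n_y)   # y(t-1) ... y(t-n_y)
y_target, eta = 0.5, 0.01

o, H = forward(u_hist, y_hist)
e = y_target - o                # e = Y - O, as in the text
# One gradient-descent step on E = 0.5 * e**2:
grad_H = -e * W_o * (1.0 - H**2)        # backpropagate through tanh
W_o -= eta * (-e * H)
B -= eta * (-e)
W_u -= eta * np.outer(grad_H, u_hist)
W_y -= eta * np.outer(grad_H, y_hist)
A -= eta * grad_H
print("squared error before update:", 0.5 * e**2)
```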
Prediction Algorithm for Multivariate Time Series.
Based on the CARN proposed above, practical data can be used to train the networks, which can then predict the object variable with the effect variables. The prediction algorithm for the multivariate time series is designed based on the network model. In the algorithm, the data processing and calculation process is laid out to obtain the final prediction results. The algorithm flow is shown in Figure 3.
The inputs of the prediction algorithm include the historical data of the object variable and the effect variables, plus the data change gradients. The output is the series of the object variable at the next time steps. The steps of the algorithm are as follows:
(1) The effect variables are selected according to the correlation degrees between the object and effect variables. The historical data of the object variable and the selected effect variables are preprocessed with the normalization method. In the preprocessing, the data change gradients of the effect variables are calculated for the auxiliary networks.
(2) The preprocessed historical data are imported into the auxiliary networks, and the networks are trained with the method in Section 3.2.
(3) The outputs of the auxiliary networks and the historical data of the object variable are imported into the primary network to obtain the main prediction model.
(4) The time step is moved forward, and the updated data at the next time step are obtained by repeating the steps above.
The compound network and the prediction algorithm for the multivariate time series have now been proposed. In practice, the prediction length should be set, and the effect variables should be selected reasonably; then the desired prediction results of the object variable can be obtained with the historical data.
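A minimal sketch of this loop is given below; `aux_nets` and `primary_net` stand for the trained auxiliary and primary networks from the previous sections, and the rolling window update mirrors step (4).

```python
import numpy as np

def rolling_forecast(y_hist, u_hists, aux_nets, primary_net, steps=6):
    """Predict the object variable `steps` ahead with the CARN components.

    y_hist:      recent window of the object variable (list of floats).
    u_hists:     one recent window per effect variable.
    aux_nets:    list of callables, window -> next effect-variable value.
    primary_net: callable, (current effect values, object window) -> next value.
    """
    y_hist = list(y_hist)
    u_hists = [list(u) for u in u_hists]
    out = []
    for _ in range(steps):
        # Auxiliary networks supply the effect variables at the current step.
        u_now = [aux(np.array(u)) for aux, u in zip(aux_nets, u_hists)]
        y_next = primary_net(np.array(u_now), np.array(y_hist))
        out.append(y_next)
        # Slide all windows forward by one time step.
        y_hist = y_hist[1:] + [y_next]
        u_hists = [u[1:] + [un] for u, un in zip(u_hists, u_now)]
    return out
```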
Experiment Data and Setting.
In the experiments, we focus on the data prediction issue in complex environment systems. Two sets of environment data are chosen for testing: one is the atmospheric quality data from the monitoring system of an industrial park, and the other is meteorological forecast data.
For the atmospheric quality data, 3240 sets of data were truncated from the monitoring system in an industrial park of Hebei Province, China. The data come from different time periods which represent different trends: June to August in 2016 (set A), September to November in 2016 (set B), and December in 2016 to February in 2017 (set C). The monitored variables are SO2, NO2, CO, O3, VOC, humidity, temperature, wind speed, atmospheric pressure, etc., recorded every hour in the monitoring system. SO2 is the main factor in the atmospheric environment management of the industrial park. Therefore, SO2 is set as the object variable to be predicted, and the correlation degrees between the other variables and SO2 were calculated, as shown in Figure 4. The main effect variables were then selected: NO2, CO, O3, humidity, and wind speed.
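The correlation-based selection can be sketched as below; Pearson correlation is assumed here, since the paper does not name the specific correlation measure, and the variable names and weights are placeholders.

```python
import numpy as np

def select_effect_variables(target, candidates, k=5):
    """Rank candidates by |Pearson correlation| with the target; keep top k."""
    scores = {name: abs(np.corrcoef(target, s)[0, 1])
              for name, s in candidates.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

rng = np.random.default_rng(3)
so2 = rng.normal(size=500)
candidates = {name: w * so2 + rng.normal(size=500)
              for name, w in [("NO2", 0.8), ("CO", 0.6), ("O3", 0.5),
                              ("humidity", 0.4), ("wind_speed", 0.3),
                              ("pressure", 0.05)]}
print(select_effect_variables(so2, candidates, k=5))
```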
For the meteorological forecast data, there are 24 sets of data in a day, and every set covers the meteorological factors, including temperature, humidity, wind speed, precipitation, and atmospheric pressure. Similar to the atmospheric quality data, the most relevant variables are selected for the object variable temperature.
The effect variables are humidity, wind speed, and precipitation.
In the setting of the prediction models, the data were first preprocessed with the maximum-minimum (min-max) normalization method, and the prediction network outputs were denormalized accordingly.
The data were divided into training, validation, and test sets in proportions of 70%, 15%, and 15%. The numbers of the various sets are listed in Table 1.
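A sketch of this preprocessing is shown below; the exact routines the authors used are not stated, so this is only one plausible implementation.

```python
import numpy as np

def minmax_normalize(x):
    """Scale to [0, 1]; return min/max so outputs can be denormalized."""
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo), lo, hi

def denormalize(x_norm, lo, hi):
    return x_norm * (hi - lo) + lo

def split_70_15_15(x):
    n_train, n_val = int(0.70 * len(x)), int(0.15 * len(x))
    return x[:n_train], x[n_train:n_train + n_val], x[n_train + n_val:]

data = np.random.default_rng(2).normal(size=3240)   # placeholder series
norm, lo, hi = minmax_normalize(data)
train, val, test = split_70_15_15(norm)
print(len(train), len(val), len(test))              # 2268 486 486
```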
In the experiments, the parameters of the network structure and training were determined and are listed in Table 2. The networks were then trained to run the prediction algorithm in Section 3.3, and the prediction results are presented in Section 4.2.
Some typical prediction methods are set as comparison methods, including the ARIMA model, BP, RNN, and LSTM. These comparison methods cover the main types of classic statistical models and machine learning methods. In the concrete experiments, the ARIMA and RNN are used to predict the object variable alone, while the BP and LSTM are designed with multiple inputs including the object variable and the effect variables.
Results of Atmospheric Quality Data.
In the experiments, 162 sets of atmospheric quality data are tested for prediction performance. The prediction results are shown in Figure 5. According to the experiment setting, the input delay determines the historical data used, and the output delay determines the prediction steps. For the atmospheric quality data, the historical data of the latest 6 hours are used to output the prediction, and the prediction results are the SO2 concentration in the next 6 hours. The data are used forward circularly. In Figure 5, the reference true value and the prediction results of the various methods are presented as lines in different colours, and some parts are enlarged for clearer comparison.
For the prediction results in Figure 5, all methods can trace the general trend of the SO2 concentration data. The results of the ARIMA and RNN fluctuate more acutely than the others, while the results of the CARN are so close to the true value that the black line is almost hidden in the figure. For a clearer comparison of the different methods, the errors are calculated and shown in Figure 6.
The mean absolute error (MAE) and root-mean-squared error (RMSE) are selected as the evaluation indicators, and they are listed in Table 3.
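For reference, the two indicators follow the standard definitions and can be computed as in this short sketch.

```python
import numpy as np

def mae(y_true, y_pred):
    # Mean absolute error: average of |error|.
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))

def rmse(y_true, y_pred):
    # Root-mean-squared error: penalizes large deviations more heavily.
    err = np.asarray(y_true) - np.asarray(y_pred)
    return np.sqrt(np.mean(err ** 2))
```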
The absolute errors show a trend similar to that of the prediction results in Figure 5. In the general view, the CARN performs more stably than the other methods, whereas the errors of the ARIMA and RNN change more sharply. The prediction performance can be evaluated objectively with the indicators in Table 3. The MAE is the average of the absolute values of all errors; in this indicator, the CARN and LSTM perform better than the others, the MAE of the ARIMA results is the largest, and the RNN and BP show similar MAE values. The RMSE reflects the overall closeness of the results to the true values and can indicate the stability of the prediction methods.
The ordering of the RMSE across the different methods is similar to the trend of the MAE, and the CARN is more stable than the others in prediction.
Results of Meteorological Forecast Data.
Different from the experiment on atmospheric quality data, the input and output delays are set to 12: the latest 12 sets of data are used to predict the temperature in the next 12 hours. The data shown in Figure 7 present an obvious periodic trend. In fact, the 216 sets of data are the meteorological data of 9 days, and the temperature changes circularly with a period of one day. The data change rule is therefore more distinct. The prediction results of the CARN are closer to the true value than the others, in which the ARIMA and RNN fluctuate because they predict with only the object variable, while the other methods use the object variable together with the effect variables.
The errors are calculated and presented in Figure 8 and Table 4. Figure 8 shows the errors of the different prediction results. From the prediction results in Figure 7 and the errors in Figure 8, it can be seen that all methods can trace the data change rule closely because of the periodicity in the meteorological data. The errors mainly occur in the fluctuations. The maximal MAE reaches 4.43 °C in the ARIMA.
Discussion
For the prediction issue of the multivariate time series, a compound network framework has been introduced, in which the structure of the nonlinear autoregressive networks and the prediction algorithm are designed. The experiments are conducted on the environment data, including the atmospheric quality data and the meteorological forecast data. The prediction methods and results are discussed in this section.
Firstly, the method shows favourable short-term tracking performance of the data change rule. Generally, prediction methods cannot avoid divergence in the long term, yet there seems to be no divergence in our prediction results. This is not because our approach is perfect; rather, the good regressive results derive from the setting of the prediction time.
The prediction time steps of the experiments are 6 and 12, which belong to short-term prediction. The practical true values are imported into the model circularly to output the future data. Therefore, the prediction results show good regressive effects, indicating that the proposed method can meet the short-term prediction need.
Secondly, the proposed method focuses on the prediction problem with multiple variables. For accurate prediction, the related variables should be considered in addition to the target variable to be predicted. In the experiments, SO2 and temperature are set as the object variables, and the related variables are selected as the effect variables. Among the comparison methods, the ARIMA and RNN use only the object variable to predict the data themselves.
The BP, LSTM, and CARN use multiple variables to obtain more accurate results, which indicates that the effect variables help improve the prediction performance. In the proposed method, the design of the auxiliary network meets the need for multivariate analysis. Thirdly, the proposed method seeks a balance between precision performance and calculation resource occupancy. As mentioned in the review of related works, deep learning shows excellent performance in prediction, which is confirmed in the experiments where the result of the LSTM is similar to that of the CARN. However, the structure of the deep network is more complex than that of the NAR, which may lead to large consumption of calculation resources. In the proposed method, networks based on the NAR are combined to obtain the expected prediction accuracy, while the simple structure of the NAR reduces the demand on calculation resources. The balance of accuracy and calculation resources in our method is beneficial to application in practice. The proposed CARN reaches the expected effect in time-series prediction.
This effect is guaranteed by the compound structure of the primary and auxiliary networks, which models the multivariable relation. Meanwhile, the training method in the CARN is also tested with experimental results based on the adjustment of the network parameters.
For an objective appraisal, the performance and application of the proposed network can be extended in the future. For the network performance, the training method is derived from the framework of backpropagation through time, which is an effective and simple solution for network learning. The related works on backpropagation learning methods are abundant, and their improvements can be imitated based on the compound network structure. For the application, the proposed network can solve direct prediction problems, such as forecasts of the weather, environment, economic market, and health management. It can also solve data prediction in other complex systems indirectly. For example, the network may help predict the control parameters in nonlinear time-delay systems [24]. The prediction results provide important information for control and management issues.
Conclusion
For intelligent and advanced management in the information era, a data-driven prediction method is studied in this paper. Considering the nonstationary characteristics and multivariate effects in nonlinear time series, a compound prediction framework is designed based on the autoregressive neural network. Experiments on environment data are conducted to verify the performance of the method.
The method shows favourable accuracy and an appropriate calculation scale.
The proposed network realizes the prediction of the multivariate time series. Besides, it takes computational efficiency into account as well as prediction performance. Furthermore, the principle of the network training in this paper is practical. It provides a feasible solution to nonlinear multivariate time series with a shallow neural network. In future work, the training method can be improved based on advanced research, and the long-term prediction performance should be promoted. Moreover, the compound autoregressive network can be applied in other fields, including the direct forecasting of time series and the indirect prediction of parameters and components in complex systems.
Figure 1: Compound autoregressive network for the multivariable time-series prediction.
Figure 2: Structure design of the networks: (a) primary network; (b) auxiliary network.
Figure 3: Prediction algorithm flow for the multivariate time series.
Figure 5: Prediction results of atmospheric quality data in three time periods: subset data (a) from June to August in 2016, (b) from September to November in 2016, and (c) from December in 2016 to February in 2017.
Figure 6: Errors of the prediction results of atmospheric quality data in different methods: subset data (a) from June to August in 2016, (b) from September to November in 2016, and (c) from December in 2016 to February in 2017.
Figure 7: Prediction results of meteorological forecast data.
Figure 8: Errors of the prediction results of meteorological forecast data in different methods.
Table 1: Number of data in the experiments.
Table 2: Parameters of the network structure and training.
Table 3: Error evaluation indicators of the prediction results of atmospheric quality data.
Table 4: Error evaluation indicators of the prediction results of meteorological forecast data.
High expression of MMP19 is associated with poor prognosis in patients with colorectal cancer
Background Matrix metalloproteinase 19 (MMP19) is a member of zinc-dependent endopeptidases, which have been involved in various physiological and pathological processes. Its expression has been demonstrated in some types of cancers, but the clinical significance of MMP19 in colorectal cancer (CRC) has not been reported. Thus, we aimed to analyze the clinical significance of MMP19 in CRC in present study. Methods The expression of MMP19 was first explored in The Cancer Genome Atlas (TCGA) cohort, and then validated in the GSE39582 cohort and our own database. Clinicopathological features and survival rate were also investigated. Results MMP19 was found to be a predictor for overall survival (OS) in both univariate (hazard ratio [HR]: 1.449, 95% confidence interval [CI]: 1.108–1.893, P = 0.007) and multivariate survival analyses (HR: 1.401, 95% CI: 1.036–1.894, P = 0.028) in the TCGA database. MMP19 was further validated as an independent factor for recurrence free survival in the GSE39582 database by both univariate analysis (HR: 2.061, 95%CI: 1.454–2.921, P < 0.001) and multivariate analysis (HR = 1.470, 95% CI: 1.025–2.215, P = 0.032). In an in-house cohort, MMP19 was significantly upregulated in CRC tissues when compared with their adjacent normal controls (P < 0.001). Ectopic MMP19 expression was positively associated with lymph node metastases (P = 0.029), intramural vascular invasion (P = 0.015) and serum carcinoembryonic antigen levels (P = 0.045). High MMP19 expression correlated with a shorter OS (HR = 5.595; 95% CI: 2.573–12.164; P < 0.001) and disease free survival (HR = 4.699; 95% CI: 2.461–8.974; P < 0.001) in multivariate cox regression analysis. Conclusions Expression of MMP19 was upregulated in CRC. High expression of MMP19 was determined to be an independent and poor prognostic factor in CRC. These results suggest that MMP19 may be a good biomarker for CRC.
Background
Colorectal cancer (CRC) is the third most common cancer and the third leading cause of cancer-related death in the United States [1]. In China, both the incidence and the mortality rate of CRC have been increasing, and CRC ranks as the third leading cause of cancer-related deaths [2]. The development of distant metastasis is the main cause of cancer-related death despite effective surgical procedures and systemic chemotherapy.
Approximately 20-25% of patients are initially diagnosed with synchronous metastases, of which approximately 50% ultimately develop metachronous disease after colectomy [3,4]. The survival outcome of CRC is mainly determined by tumor stage and some other clinicopathological factors. However, the heterogeneity of this disease makes it difficult to predict patient prognosis with these traditional factors [5]. Recent genetic and molecular analyses of CRC have identified a set of predictive biomarkers, including RAS status, BRAF mutation and mismatch repair protein expression, that can aid in the identification of patients who are at high risk of disease progression or recurrence [3,[6][7][8]. Although available biomarkers are commonly used for predicting long-term outcome, some previous studies have reported that a proportion of patients are misdiagnosed [7,8]. Therefore, the identification of novel markers that can be used to screen various prognostic risk subgroups to guide individual treatment is urgently needed.
Matrix metalloproteinases (MMPs) are zinc-dependent endopeptidases that are involved in a variety of physiological processes [9], and they act in concert in tumor invasion and metastasis [10]. Over the years, they have been investigated for their roles in cancer progression and metastasis [9,11-14]. MMP14 plays an important role in CRC progression and prognosis [15]. An immunohistochemical score based on major members of the MMP/TIMP profile can identify a distinct group of colorectal cancers with poor prognosis [14]. However, some MMPs, such as MMP19, have not been fully investigated in CRC. MMP19 was first isolated as an autoantigen from the synovium of a rheumatoid arthritis patient [16,17]. MMP19 contains the classical MMP structural domains, including a signal peptide, pro-peptide, catalytic domain, hinge region, and C-terminal domain [17,18]. MMP19 is reportedly involved in the progression and metastases of various cancers, but its role in CRC remains unknown.
In this study, we used The Cancer Genome Atlas (TCGA) and a whole-genome expression microarray database (Gene Expression Omnibus, accession number GSE39582) to investigate MMP19 mRNA expression. Furthermore, we explored the relationship between MMP19 expression and cancer prognosis using our own data to determine whether MMP19 can serve as a valuable prognostic predictor in CRC patients.
Methods
TCGA and GSE39582 databases
MMP19 mRNA expression was retrieved from the TCGA portal (http://tcga-data.nci.nih.gov) and GSE39582 database (https://www.ncbi.nlm.nih.gov/geo/). We selected patients who had both RNA sequencing data and clinicopathological factors available for the correlation analysis. The inclusion criteria were as follows: pathologically diagnosed invasive adenocarcinoma, intact survival information available, and RNA sequencing data available. A total of 359 CRC samples in the TCGA cohort and 474 cases in the GSE39582 database were selected. The relationship between MMP19 expression and the prognosis of CRC patients was explored.
Validation cohort
The study was approved by the Ethics Committee of Taizhou Municipal Hospital, Medical College of Taizhou University (Zhejiang, China). Before surgery, all patients provided written informed consent in compliance with the ethics of the World Medical Association (Declaration of Helsinki) for the donation of their tissue for the present research. All patients underwent radical colectomy, and all fresh tissues, including tumor tissues and normal controls, were frozen in liquid nitrogen immediately after resection and stored in RNAlater at −20 °C. Pathological diagnoses were made by at least two pathologists and restaged according to the 8th American Joint Committee on Cancer guidelines. Normal control tissue was retrieved at least 10 cm from the tumor margin.
All patients underwent follow-up according to NCCN guidelines. The primary endpoints were OS and DFS. OS was defined as the time from diagnosis to death from any cause, and DFS was defined as the time from diagnosis to the first recurrence or death [19]. Survival data were obtained from medical records or from contact with patients by phone or email.
Ethics statement
This study was approved by the Taizhou Municipal Hospital Research Ethics Committee (ID: 2018-03-0039). The study was implemented according to the approved guidelines. Informed consent was obtained from each patient before surgery.
Immunohistochemistry (IHC) study
Immunohistochemistry (IHC) was performed on formalin-fixed, paraffin-embedded tissue sections as previously described [15,22]. MMP19 was detected using the rabbit anti-MMP19 polyclonal antibody AP6202a (Abgent Inc.). Omission of the primary antibody served as the negative control. Data were assessed by two independent single-blinded pathologists. A semiquantitative immunoreactivity scoring system was used to sort patients into high and low expression groups according to the immunoreactivity score [22,23].
Statistical analysis
OS, DFS or recurrence-free survival (RFS) was used as the primary endpoint for the TCGA, GSE39582 and validation cohorts, respectively. Survival was compared among different MMP19 mRNA expression levels in the TCGA and GSE39582 databases using univariate and multivariate Cox proportional hazards models. The results are presented as hazard ratios (HR) and 95% confidence intervals (CI). MMP19 was also classified into high and low expression subgroups in the TCGA and GSE39582 cohorts by the X-tile program with the maximum χ2 value and minimum P value [24]. A one-sided P value < 0.05 was considered statistically significant.
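As an illustration of this kind of analysis, the sketch below fits a multivariate Cox model with the Python lifelines package; the package choice, column names and toy values are assumptions for illustration, not the study's actual software or data.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical toy data: follow-up time, event flag, and two covariates.
df = pd.DataFrame({
    "os_months":    [12, 30, 45, 8, 60, 24],
    "event":        [1, 0, 1, 1, 0, 1],      # 1 = death observed
    "mmp19_high":   [1, 0, 1, 1, 0, 0],      # high vs low expression
    "stage_iii_iv": [1, 0, 1, 0, 0, 1],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="os_months", event_col="event")
cph.print_summary()   # hazard ratios exp(coef) with 95% CIs
```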
Results
MMP19 was an independent prognostic factor for survival in the TCGA cohort
A total of 359 eligible patients from the TCGA database were included in this study, comprising 199 (55.4%) men and 160 (44.6%) women. The median age was 64 (range 31-90) years. The median follow-up time was 32 (range, 0-15) months, and 82 patients (22.8%) had died by the last follow-up.
MMP19 was correlated with inferior clinical characteristics in the validation cohort
To investigate the potential relevance of MMP19 expression in CRC tissues in terms of clinical characteristics, MMP19 mRNA expression was further examined in 198 CRC tissues and paired normal controls. The results indicated that MMP19 expression was significantly upregulated in cancer tissues compared with normal controls (P < 0.05; Fig. 3a). The clinical and histopathologic characteristics classified by the median MMP19 mRNA expression level are summarized in Table 3. High MMP19 expression was significantly correlated with lymph node metastases (P = 0.029), intramural vascular invasion (P = 0.015) and serum carcinoembryonic antigen status (P = 0.045; Table 3). Genes usually exert their functions through their encoded proteins. Therefore, we used immunohistochemistry to detect MMP19 protein expression in 42 patients in the validation group and found that MMP19 mRNA expression was highly correlated with protein levels (P < 0.001) (Table 3).
Discussion
With recent advances in high-throughput technologies (e.g., RNA deep sequencing), the transcriptomes of many tumors have been surveyed and many novel biomarkers and therapeutic targets have been identified. To validate MMP19 as a potential novel target gene for predicting survival, we designed our study in three steps. First, we identified MMP19 as a potential novel biomarker in terms of survival in the TCGA database. Second, we studied MMP19 in the GSE39582 database and confirmed it as a novel biomarker for CRC. Finally, because some important clinical characteristics, such as adjuvant therapy strategies and the quality of surgery, were missing from the TCGA and GSE39582 databases, we further validated the results in our own database. We found that MMP19 expression was significantly upregulated in cancer tissues relative to normal controls and that high MMP19 expression was associated with inferior clinical characteristics. Importantly, MMP19 was validated as an independent predictor of both OS and DFS for CRC after colectomy. Our results point to a crucial role for MMP19 in the evolution of CRC. MMP19 is a classical member of the MMP family, which consists of at least 23 enzymes [17]. MMP19 shares the typical structural domains of MMPs, including a signal peptide, propeptide, catalytic domain, hinge region, and C-terminal domain [18]. MMP19 has gradually been recognized as an important oncogene in carcinogenesis and progression. It is associated with increased mortality and promotes metastatic behavior in non-small cell lung cancer (NSCLC) [25]. An increase in MMP19 expression indicates the progression of cutaneous melanoma and might augment melanoma growth by promoting the invasion of tumor cells [26]. MMP19 is highly expressed in astroglial tumors and promotes the invasion of glioma cells [27]. For CRC, a previous study reported that MMP19 may be involved in malignant transformation: it is expressed at low levels in normal mucosa and is upregulated during neoplastic progression [28], but its prognostic value has not been reported. Here, we provide new evidence that MMP19 plays a crucial role in CRC. High MMP19 expression was significantly correlated with lymph node metastases and intramural vascular invasion, which suggests that MMP19 may play a critical role in CRC invasion and metastases. Distant metastases and recurrence are two main causes of cancer-related death; thus, it was not surprising that high MMP19 expression was correlated with a poor prognosis in CRC. Similarly, increased MMP19 gene expression correlates with a worse prognosis and facilitates invasion in NSCLC [25]. Although we drew our conclusions from three independent databases, there were some limitations in our study. First, we only investigated the clinical significance of MMP19 in CRC using patient samples; no in vitro studies or animal models were used. Second, further study is needed to understand the mechanisms underlying the function of MMP19 in CRC progression.
Conclusion
In summary, our results demonstrate that MMP19 is upregulated in CRC and is a potential prognostic predictor, which provides additional information for predicting survival and developing therapeutic strategies. Our results warrant further studies on the detailed mechanisms by which MMP19 facilitates tumor progression in CRC.
Optimal subsite occupancy and design of a selective inhibitor of urokinase.
Human urokinase type plasminogen activator (u-PA) is a member of the chymotrypsin family of serine proteases that can play important roles in both health and disease. We have used substrate phage display techniques to characterize the specificity of this enzyme in detail and to identify peptides that are cleaved 840-5300 times more efficiently by u-PA than peptides containing the physiological target sequence of the enzyme. In addition, unlike peptides containing the physiological target sequence, the peptide substrates selected in this study were cleaved as much as 120 times more efficiently by u-PA than by tissue type plasminogen activator (t-PA), an intimately related enzyme. Analysis of the selected peptide substrates strongly suggested that the primary sequence SGRSA, from position P3 to P2', represents optimal subsite occupancy for substrates of u-PA. Insights gained in these investigations were used to design a variant of plasminogen activator inhibitor type 1, the primary physiological inhibitor of both u-PA and t-PA, that inhibited u-PA approximately 70 times more rapidly than it inhibited t-PA. These observations provide a solid foundation for the design of highly selective, high affinity inhibitors of u-PA and, consequently, may facilitate the development of novel therapeutic agents to inhibit the initiation and/or progression of selected human tumors.
Local activation and aggregation of platelets, followed by initiation of the blood coagulation cascade, assure that a fibrin clot will form rapidly in response to vascular injury (1). The presence of this thrombus, however, must be transient if the damaged tissue is to be remodeled and normal blood flow restored. The fibrinolytic system, which accomplishes the enzymatic degradation of fibrin, is therefore an essential component of the hemostatic system (1). The ultimate product of the fibrinolytic system is plasmin, a chymotrypsin family enzyme with relatively broad, trypsin-like primary specificity that is directly responsible for the efficient degradation of a fibrin clot (2). Production of this mature proteolytic enzyme from the inactive precursor, or zymogen, plasminogen is the rate-limiting step in the fibrinolytic cascade (2,3). Catalysis of this key, regulatory reaction is tightly controlled in vivo and is mediated by two enzymes present in human plasma, u-PA and t-PA (3-6).
u-PA and t-PA are very closely related members of the chymotrypsin gene family. These two proteases possess extremely high structural similarity (7,8), share the same primary physiological substrate (plasminogen) and inhibitor (plasminogen activator inhibitor, type 1) (3), and, unlike plasmin, exhibit remarkably stringent substrate specificity (9-11). Despite their striking similarities, the physiological roles of t-PA and u-PA are distinct (5,6), and many studies (5,6,12-18) suggest selective inhibition of either enzyme might have beneficial therapeutic effects. Mice lacking t-PA, for example, are resistant to specific excitotoxins that cause extensive neurodegeneration in wild type mice (13), and mice lacking u-PA exhibit defects in the proliferation and/or migration of smooth muscle cells in a model of restenosis following vascular injury (5,6).
A large body of experimental evidence from studies involving both model systems and human patients suggests that u-PA may play an important role in tumor biology and provides a compelling rationale to pursue the development of u-PA inhibitors. For example, anti-u-PA antibodies inhibit metastasis of HEp3 human carcinoma cells to chick embryo lymph nodes, heart, and lung (19), and similar studies demonstrated that these antibodies inhibit lung metastasis in mice following injection of B16 melanoma cells into the tail vein (20). Anti-u-PA antibodies also inhibit both local invasiveness and lung metastasis in nude mice bearing subcutaneous MDA-MB-231 breast carcinoma tumors (21). In addition, a recent study indicated that u-PA-deficient mice are resistant to the induction and/or progression of several tumor types in a two-stage, chemical carcinogenesis model (18). Finally, high levels of tumor-associated u-PA correlate strongly with both a shortened disease-free interval and poor survival in several different human cancers (22-24).
Because mice lacking either u-PA or t-PA do not develop thrombotic disorders, selective inhibition of either of these two enzymes seems unlikely to create thrombotic complications in vivo. On the other hand, mice lacking both u-PA and t-PA suffer severe thrombosis in many organs and tissues, resulting in a significantly reduced life expectancy (5,6). Nonselective inhibition of these two enzymes, therefore, seems almost certain to produce catastrophic consequences in the clinical setting. Consequently, significant interest exists in the development of inhibitors that are stringently specific for either t-PA or u-PA, which are expected to facilitate a detailed investigation of the precise roles of the two enzymes in several important pathological processes and may aid the development of novel therapeutic agents to combat these processes. Rational design of these selective inhibitors is greatly complicated, however, by the absence of obvious "lead compounds"; both their primary physiological substrate and inhibitors fail to discriminate between the two closely related proteases.
We have used substrate phage display (25,26) to elucidate optimal subsite occupancy of u-PA. Peptide substrates that match the consensus sequence for substrates of u-PA derived from these studies are cleaved by u-PA 840-5300 times more efficiently than control peptides containing the physiological target sequence present in plasminogen. In addition, unlike the plasminogen-derived control peptides, the selected peptides exhibit substantial selectivity for cleavage by u-PA versus t-PA. Information gained in these investigations was used to augment the u-PA/t-PA selectivity of PAI-1, the physiological inhibitor of both t-PA and u-PA (27,28); suggests potential lead compounds for the design of selective, small molecule inhibitors of u-PA; and provides new insights into the divergent evolution of molecular recognition by intimately related enzymes.
EXPERIMENTAL PROCEDURES

Construction of the phage vector fAFF1-tether C (fTC) and the random hexapeptide library fAFF-TC-LIB has been previously described (26). Control substrate phage fTC-PL, which contained the physiological target sequence for u-PA and t-PA, was constructed by hybridizing the single-stranded oligonucleotides 5′-TCGAGCGGTGGATCCGGTACTGGTCGTACTGGTCATGCTCTGGTAC-3′ and 5′-CGCCACCTAGGCCAGGACCAGCACAACAACCACGAGAC-3′ and then ligating the annealed, double-stranded products into the XhoI/KpnI-cut vector fTC. All constructs were first transformed into MC1061 by electroporation and then transferred into K91.
Measurement of Enzyme Concentrations-Concentrations of functional t-PA and u-PA were measured by active site titration with 4-methylumbelliferyl p-guanidinobenzoate (29) using a Perkin-Elmer LS 50B Luminescence Fluorometer as described previously (9,30). In addition, the enzymes were titrated with a standard PAI-1 preparation that had been previously titrated against a trypsin primary standard. Total enzyme concentrations were measured by enzyme-linked immunosorbent assay.
Phage Selection Using u-PA-Substrate phage display was originally developed by Matthews and Wells (25) using monovalent phage, and an alternative method that used multivalent phage was reported later by Smith and Navre (26). Multivalent substrate phage were screened with u-PA using reaction conditions identical to those previously reported for t-PA (31) except that digestion of the phage was performed using enzyme concentrations varying from 2 to 10 μg/ml and incubation times varying from 0.5 to 10 h.
Dot Blot Assay of Phage Proteolysis-Phage precipitation and dot blot analysis were performed as described previously (26,31). Individual phage stocks were prepared and digested with no enzyme, t-PA, u-PA, or u-PA in the presence of 1 mM amiloride, a specific inhibitor of u-PA, for periods of time varying from 15 min to 10 h. Individual reaction mixtures were spotted onto a nitrocellulose filter using a dot blotter apparatus (Bio-Rad). The filter was probed with mAb 3E-7 and developed using the Amersham Western ECL kit. Loss of positive staining indicates loss of antibody epitopes from the phage due to proteolytic cleavage of the randomized hexamer region.
Preparation and Sequencing of DNA from Phage Clones-DNA samples were prepared from interesting phage clones as described previously (31). Briefly, phage were precipitated from a 1-ml overnight culture by adding 200 μl of 20% polyethylene glycol in 2.5 M NaCl. The mixture was incubated on ice for 30 min, and the phage pellet was collected by microcentrifugation for 5 min. The phage were resuspended in 40 μl of lysis buffer (10 mM Tris-HCl, pH 7.6, 0.1 mM EDTA, 0.5% Triton X-100) and heated at 80°C for 15 min. Single-stranded DNA was purified by phenol extraction and ethanol precipitation and sequenced by the dideoxy method.
Kinetics of Cleavage of Synthetic Peptides by t-PA and u-PA-Peptides were synthesized and purified as described (9). Kinetic data were obtained by incubating various concentrations of peptide with a constant enzyme concentration to achieve between 5 and 20% cleavage of the peptide in each reaction. For assays with u-PA, enzyme concentration was either 815 or 635 nM. For assays with t-PA, enzyme concentration was 700 nM. Peptide concentrations were chosen where possible to surround Km and in all cases were between 0.5 and 32 mM. The buffer used in these assays has been described (9). Reactions were stopped by the addition of trifluoroacetic acid to 0.33% or by freezing on dry ice. Cleavage of the 13- and 14-residue peptides was monitored by reverse phase HPLC as described (9). The 4-6-residue peptides were acylated at their amino termini and amidated at their carboxyl termini. Cleavage of the 4-6-residue peptides was monitored by hydrophilic interaction HPLC chromatography (32) using a polyhydroxyaspartamine column from PolyLC (Columbia, MD). Buffer A was 50 mM triethylamine phosphate in 10% acetonitrile, and buffer B was 10 mM triethylamine phosphate in 80% acetonitrile. Peptides were eluted by a gradient that was varied from 100% buffer B to 100% buffer A during a 13-min interval. The percentage of cleaved peptide was calculated by dividing the area under the product peaks by the total area under substrate and product peaks. For all peptides containing multiple basic residues, mass spectral analysis of products confirmed that cleavage occurred at a single site and identified the scissile bond. Data were interpreted by Eadie-Hofstee analysis. Errors were determined as described (33) and were <25%.
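To make the Eadie-Hofstee step concrete, the short sketch below shows how Km and kcat could be extracted from initial-rate data using the linearisation v = Vmax − Km·(v/[S]). This is a minimal illustration, not the authors' analysis code; the substrate concentrations and rates are simulated, and only the 815 nM enzyme concentration echoes the assay conditions above.

```python
import numpy as np

def eadie_hofstee(substrate_mM, v0, enzyme_uM):
    """Estimate Km and kcat from initial rates.

    Eadie-Hofstee linearisation: v = Vmax - Km * (v / [S]), so regressing
    v against v/[S] gives slope -Km and intercept Vmax.
    """
    slope, intercept = np.polyfit(v0 / substrate_mM, v0, 1)
    km, vmax = -slope, intercept
    return km, vmax / enzyme_uM   # Km in mM; kcat in s^-1 when v0 is in uM/s

# Simulated Michaelis-Menten data (illustrative values only)
s = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])   # [S], mM
v = 2.0 * s / (3.0 + s)                               # v0, uM/s
km, kcat = eadie_hofstee(s, v, enzyme_uM=0.815)       # 815 nM u-PA
print(f"Km ~ {km:.2f} mM, kcat ~ {kcat:.2f} s^-1")
```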
Site-directed Mutagenesis and Construction of an Expression Vector Encoding a Recombinant Variant of PAI-1-The expression vector pPAIST7HS was derived from the plasmid pBR322 and contained a full-length cDNA encoding human PAI-1 that was transcribed from a T7 gene 10 promoter (34). The 300-base pair SalI/BamHI fragment of human PAI-1 was subcloned from pPAIST7HS into bacteriophage M13mp18. Single-stranded DNA produced by the recombinant M13mp18 constructs was used as a template for site-specific mutagenesis according to the method of Zoller and Smith (35) as modified by Kunkel (36). The mutagenic oligonucleotide had the sequence 5′-CCACAGCTGTCATAGGCAGCGGCAAAAGCGCCCCCGAGGAGATC-3′.
Following mutagenesis, single-stranded DNA corresponding to the entire 300-base pair SalI-BamHI fragment was fully sequenced to ensure the presence of the desired mutations and the absence of any additional mutation. The 300-base pair SalI-BamHI double-stranded DNA fragment from the mutated, replicative form DNA was used to replace the corresponding fragment in pPAIST7HS to yield a full-length cDNA encoding PAI-1/UK1, which contained the amino acid sequence GSGKSA from the P4 to P2′ position of the reactive center loop.
Expression and Purification of Recombinant Wild Type PAI-1 and the Variant PAI-1/UK1-Expression of wild type and the mutated variant of PAI-1 was accomplished in the E. coli strain BL21(DE3)pLysS (Novagen), which synthesizes T7 RNA polymerase in the presence of isopropyl-1-thio-β-D-galactopyranoside. Bacterial cultures were grown at 37°C with vigorous shaking to an A595 of 0.9-1.1, and isopropyl-1-thio-β-D-galactopyranoside was added to a final concentration of 1 mM to induce the synthesis of T7 RNA polymerase and the production of PAI-1 proteins. Cultures were grown for an additional 1-2 h at 37°C and then shifted to 30°C for 2-6 h.
Cells were pelleted by centrifugation at 8,000 × g for 20 min at 4°C and resuspended in 40 ml of cold start buffer (20 mM sodium acetate, 200 mM NaCl, and 0.01% Tween 20, pH 5.6). The cell suspension was disrupted in a French pressure cell (Aminco), and cellular debris was removed by ultracentrifugation for 25 min at 32,000 × g.
Purification of soluble, active PAI-1 was performed as described previously (37). PAI-1 containing supernatants were injected onto a XK-26 column (Pharmacia Biotech Inc.) packed with CM-50 Sephadex (Pharmacia). The column was washed with 5 column volumes of start buffer (20 mM sodium acetate, 200 mM NaCl, and 0.01% Tween 20, pH 5.6), and PAI-1 proteins were eluted using a 0.2-1.8 M linear gradient of NaCl in the same buffer. Peak fractions were collected, pooled, and concentrated using a Centriplus 30 concentrator (Amicon). Purified preparations were analyzed by activity measurements using standard, direct assays of t-PA, SDS-polyacrylamide gel electrophoresis, and measurement of optical density at 280 nm.
Measurement of Active PAI-1 in Purified Preparations-A primary standard of trypsin was prepared by active site titration using p-nitrophenyl guanidinobenzoate HCl as described previously (38). Concentrations of active molecules in purified preparations of wild type or mutated PAI-1s were determined by titration of standardized trypsin as described by Olson et al. (39) and by titration of standardized t-PA preparations.
Kinetic Analysis of the Inhibition of t-PA and u-PA by Recombinant PAI-1 and PAI-1/UK1-Second order rate constants (ki) for inhibition of t-PA or u-PA were determined using pseudo-first order (ki < 2 × 10⁶) or second order (ki > 2 × 10⁶) conditions. For each reaction, the concentrations of enzyme and inhibitor were chosen to yield several data points for which the residual enzymatic activity varied between 20 and 80% of the initial activity. Reaction conditions and data analysis for pseudo-first order reactions were as described previously (40-43).
For second order reactions, equimolar concentrations of u-PA and PAI-1 were mixed directly in microtiter plate wells and preincubated at room temperature for periods of time varying from 0 to 30 min. Following preincubation, the mixtures were quenched with an excess of neutralizing anti-PAI-1 antibody (generously provided by Dr. David Loskutoff), and residual enzymatic activity was measured using a standard, indirect chromogenic assay. These indirect, chromogenic assays were compared with control reactions containing no PAI-1 or to which PAI-1 was added after preincubation and the addition of anti-PAI-1 antibody, plasminogen, and Spec PL to the reaction mixture. Data were analyzed by plotting the reciprocal of the residual enzyme concentration versus the time of preincubation.
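Under these second order conditions with equimolar enzyme and inhibitor, the integrated rate law is 1/[E]t = 1/[E]0 + ki·t, so ki is simply the slope of the plot described above. A hedged sketch of that calculation, run on simulated data rather than the paper's measurements:

```python
import numpy as np

def second_order_ki(time_s, residual_fraction, e0_molar):
    """Slope of 1/[E] versus preincubation time gives ki (M^-1 s^-1)
    when enzyme and inhibitor start at equal concentrations."""
    slope, _ = np.polyfit(time_s, 1.0 / (residual_fraction * e0_molar), 1)
    return slope

# Simulated preincubation series: 10 nM enzyme + 10 nM inhibitor (illustrative)
t = np.array([0.0, 120.0, 300.0, 600.0, 1200.0])   # seconds
e0 = 10e-9                                         # 10 nM, assumed for the demo
frac = 1.0 / (1.0 + 6.2e6 * e0 * t)                # simulated residual activity
print(f"ki ~ {second_order_ki(t, frac, e0):.2e} M^-1 s^-1")
```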
RESULTS
Construction and Use of Substrate Phage Libraries-A polyvalent fd phage library that displayed random hexapeptide sequences and contained 2 × 10⁸ independent recombinants was prepared (25,31). Each member of this library displayed an N-terminal extension from phage coat protein III that contained a randomized region of six amino acids, a six-residue linker sequence (SSGGSG), and the epitopes for mAbs 179 and 3-E7. Because u-PA did not digest the phage coat protein III sequence, the antibody epitopes, or the flexible linker sequence, the loss of antibody epitopes from the phage surface upon incubation with u-PA required cleavage of the randomized peptide insert. Incubation of the library with u-PA, followed by removal of phage retaining the antibody epitopes, therefore, accomplished a large enrichment of phage clones whose random hexamer sequence could be cleaved by u-PA.
Analysis of Selected Phage Clones and Identification of a Consensus Sequence-Following five rounds of selection to enrich and amplify phage that display sequences that are readily cleaved by u-PA, 100 phage clones were identified as u-PA substrates. DNA sequencing of these clones revealed the presence of 91 distinct hexamer sequences among the selected phage (Table I). In the majority of these sequences, a single basic residue was present, and in nearly every case this residue was an arginine. An additional 22 phage contained two basic residues but only a single arginine. Alignment and analysis of these hexamer sequences suggested that the consensus sequence for optimal subsite occupancy for substrates of u-PA, from P3 to P2′, was SGR(S > R,K,A)X, where X represents a variety of amino acid residues but was most often alanine, glycine, serine, valine, or arginine. Analysis of these data was complicated by the fact that approximately 72% of the selected substrate phage contained an arginine in the first position of the randomized hexamer and therefore utilized the amino-terminal flanking residues, Ser-Gly, to occupy the P3 and P2 subsites. While these results left no doubt that the P3-P1 SGR sequence created by the fusion was a very favorable recognition site for u-PA, this use of flanking residues necessitated a particularly careful examination of the P3 and P2 preferences of u-PA. Consequently, we altered our experimental protocol in two ways to address this issue. First, we isolated an unusually large collection of substrate phage (91 distinct substrates) to ensure that a reasonable number of these (23) would not utilize the flanking Ser-Gly to fill the P3 and P2 subsites. This allowed a meaningful comparison of the consensus sequence derived from the entire library with that derived from the non-fusion phage and the demonstration of good agreement between the two consensus sequences. Second, we performed a previously described dot blot analysis (26,31) of the digestion of all 100 substrate phage by u-PA using a wide variety of stringencies of digestion. Although this semiquantitative assay cannot provide kinetic constants, it can provide an accurate rank ordering of the lability of the substrate phage clones.
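The alignment logic behind this kind of consensus analysis can be illustrated in a few lines of code. The sketch below tallies P3-P2′ occupancy after aligning each hexamer on its first basic residue and letting vector-derived flanking residues fill subsites the hexamer does not reach; the example hexamers, the flanking strings, and the first-R/K rule are illustrative assumptions, not the paper's actual procedure.

```python
from collections import Counter

def subsite_tallies(hexamers, flank_n="SG", flank_c="SS"):
    """Align each hexamer on its (assumed) P1 basic residue and tally
    which residues occupy P3-P2'. Flanking vector residues fill subsites
    that the randomized hexamer itself does not reach."""
    positions = ("P3", "P2", "P1", "P1'", "P2'")
    tallies = {pos: Counter() for pos in positions}
    for hexamer in hexamers:
        padded = flank_n + hexamer + flank_c
        basics = [i for i, aa in enumerate(hexamer) if aa in "RK"]
        p1 = len(flank_n) + basics[0]   # simplification: first R/K taken as P1
        for offset, pos in zip(range(-2, 3), positions):
            tallies[pos][padded[p1 + offset]] += 1
    return tallies

# Illustrative hexamers only (not the sequences of Table I); note how a
# P1 arginine early in the hexamer recruits the flanking Ser-Gly into P3/P2.
for pos, counts in subsite_tallies(["RSAVAG", "SGRSAG", "TSRSAA"]).items():
    print(pos, counts.most_common(2))
```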
Under the most stringent conditions examined, 11 of the 100 substrate phage, containing eight distinct randomized hexamer sequences, proved to be particularly labile u-PA substrates (Table II). All eight of the most labile substrate phage contained the P3-P1 SGR motif, demonstrating that this sequence is, in fact, a more labile u-PA site than related, selected sequences present in the library such as SSR, TAR, TSR, TTR, etc. This dot blot analysis also yielded additional information regarding the preferences of u-PA for the unprimed subsites. While analysis of the entire substrate phage library failed to reveal a clear consensus at P1Ј and P2Ј, the most labile substrate phage displayed an obvious preference at both of these positions. Five of the eight most labile phage contained a serine residue at P1Ј, and seven of these eight phage contained an alanine residue at P2Ј. These observations strongly suggest that the primary sequence SGRSA, from P3 to P2Ј, represents optimal subsite occupancy for substrates of u-PA.
Kinetic Analysis of the Cleavage of Peptides Containing Sequences Present in Selected Substrate Phage-Four peptides containing amino acid sequences present in the randomized hexamer region of the most labile phage were chosen for detailed kinetic analysis (Table III) and compared with the hydrolysis of a control peptide (I) containing the P3-P4′ sequence of plasminogen, a series of residues that fall within a disulfide-linked loop in the native protein. All four of the selected peptides were substantially improved substrates for u-PA, by factors of 840-5300, compared with the control, plasminogen peptide (Table III). These increases in catalytic efficiency were mediated primarily by increases in kcat, suggesting that optimized subsite interactions served to lower the energy of the transition state rather than the ground state. For example, compared with that of control peptide (I), the Km for cleavage of the most labile selected peptide (II) was reduced by a factor of 5.6; however, the kcat was increased by a factor of more than 940. In addition, peptide substrates that interacted optimally with the primary subsites of u-PA were selective for cleavage by u-PA relative to t-PA. The four selected peptides (II-V), for example, were cleaved 16-89 times more efficiently by u-PA than by t-PA, and improvements in both Km and kcat contributed to the preferential hydrolysis by u-PA.
Minimization of the Selective Peptide Substrates-The kinetic analysis described above was performed using substrate peptides that were 14 amino acids in length. To confirm that the specificity we observed was inherent in the selected hexapeptide sequences, we examined the kinetics of cleavage of short peptides containing only sequences found within selected hexapeptide sequences. Pentapeptide VII, for example, was cleaved by u-PA with a catalytic efficiency of 1200 M⁻¹ s⁻¹ and exhibited a u-PA/t-PA selectivity of 20. The behavior of pentamer VII in these assays, therefore, was very similar to that of peptide IV, a 14-mer that contains the same P3-P2′ sequence as the pentamer. These observations indicate that appropriate occupancy of the P3-P2′ subsites alone can create selective substrates for u-PA.
Effect of Lysine versus Arginine at P1-Differences at position 190 (chymotrypsin numbering system) between u-PA and t-PA suggest that u-PA may exhibit decreased discrimination between arginine and lysine at the P1 position of a substrate compared with t-PA (44). Consistent with this hypothesis and in contrast to the selected t-PA substrate library, the u-PA library did include members that contained a P1 lysine. This observation suggested that the u-PA/t-PA selectivity of a peptide substrate might be enhanced by placement of lysine in the P1 position, although this increased selectivity was likely to be accompanied by decreased reactivity toward u-PA. To test this hypothesis, we analyzed hydrolysis of a variant of u-PA selective peptide (VI) that contained a P1 lysine (peptide VIII). The P1 lysine mutation decreased the catalytic efficiency for cleavage of this peptide by a factor of 49 for t-PA and by a factor of 7 for u-PA. As predicted, then, the P1 lysine mutation did enhance the u-PA/t-PA selectivity of the peptide substrate by a factor of approximately 7. It is not surprising, therefore, that the most selective u-PA substrate, peptide IX, which is cleaved approximately 121 times more efficiently by u-PA than by t-PA, is derived from the randomized hexamer region of a substrate phage that contained a P1 lysine.
Importance of P3 and P4 for Discrimination between u-PA and t-PA-Recent investigations that explored optimal subsite occupancy for substrates of t-PA suggested that the P3 residue was the primary determinant of the ability of a substrate to discriminate between t-PA and u-PA and that this selectivity could be enhanced modestly by appropriate occupancy of P4 (11). These suggestions were based on evidence obtained from a statistical analysis of phage selected using a substrate subtraction protocol rather than by a kinetic analysis of peptide substrates. Consequently, to test these hypotheses, we synthesized variants of the most labile u-PA-selective substrate (peptide II) that contained mutations in the P3 and/or P4 positions and analyzed the hydrolysis of these peptides by u-PA and t-PA. In peptide X the P3 serine of peptide II was replaced by a tyrosine, and in peptide XI the P3 serine was replaced by arginine. As expected, these mutations substantially decreased the u-PA/t-PA selectivity of the peptide by a factor of 330 or 360, respectively, and actually converted the peptide into a t-PA-selective substrate. Moreover, mutation of both the P3 serine and P4 glycine of the most labile u-PA substrate to arginine and glutamine, respectively (peptide XII), decreased the u-PA/t-PA selectivity by a factor of 1200. These data confirm the proposed status of the P3 and P4 residues as specificity determinants for substrates of t-PA and u-PA and suggest a particularly prominent role of the P3 residue in this capacity.
Design and Characterization of a Variant of PAI-1 That Is Selective for u-PA-To test the prediction that information gained from the study of peptide substrates could facilitate the design of selective, high affinity inhibitors of urokinase, we sought to augment the u-PA/t-PA selectivity of the serpin PAI-1, the primary physiological inhibitor of both t-PA and u-PA. We used oligonucleotide-directed, site-specific mutagenesis to construct a variant of PAI-1 that contained the primary sequence found in the peptide substrate that was most selective for u-PA, GSGKS, from the P4-P1′ position of the reactive center loop. Kinetic analysis indicated that the PAI-1 variant inhibited u-PA approximately 70 times more rapidly than it inhibited t-PA, with second order rate constants for inhibition of u-PA and t-PA of 6.2 × 10⁶ M⁻¹ s⁻¹ and 9 × 10⁴ M⁻¹ s⁻¹, respectively (Table IV). In contrast, wild type PAI-1 inhibits u-PA and t-PA with second order rate constants of 1.9 × 10⁷ M⁻¹ s⁻¹ and 1.8 × 10⁶ M⁻¹ s⁻¹, respectively. As anticipated, therefore, the mutated serpin possessed a u-PA/t-PA selectivity that was approximately 7-fold greater than that of wild type PAI-1. Moreover, the 70-fold selectivity of the PAI-1 variant is consistent with the value of 120 observed for hydrolysis of the corresponding peptide substrate by the two enzymes (Tables III and IV).
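The selectivity figures follow directly from the quoted rate constants; a quick arithmetic check using the Table IV values:

```python
# Selectivity = ki(u-PA) / ki(t-PA), from the second order rate constants above
ki = {
    "PAI-1/UK1": {"u-PA": 6.2e6, "t-PA": 9.0e4},   # M^-1 s^-1
    "wild type": {"u-PA": 1.9e7, "t-PA": 1.8e6},
}
for serpin, k in ki.items():
    print(f"{serpin}: u-PA/t-PA selectivity ~ {k['u-PA'] / k['t-PA']:.0f}")
# PAI-1/UK1 ~ 69, wild type ~ 11: roughly a 6.5-fold gain, i.e. ~7-fold
```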
DISCUSSION
Substrate Phage Can Elucidate Specificity Differences between Closely Related Enzymes-u-PA and t-PA possess distinct but overlapping physiological and pathological roles, and the ability to selectively inhibit either enzyme with small molecules would allow these roles to be examined comprehensively both in vitro and in vivo. Normally, such inhibitor design would be based on knowledge of the sequences of endogenous protein substrates or inhibitors. This approach, however, is not possible with u-PA and t-PA because these enzymes share the same physiological substrate, plasminogen, and inhibitor, PAI-1. Furthermore, this similarity calls into question the hypothesis that highly selective inhibitors can be generated, since the specificities of the two enzymes appear so similar. We find, however, both in this study and in a previous study aimed at the design of t-PA selective substrates (11), that there are subtle but significant differences in optimal subsite occupancy between the two enzymes, and these distinctions can be elucidated by substrate phage display protocols.
Sequences Selected for Optimal Cleavage Do Not Resemble the Physiological Target Sequence-A key observation of this study is that the primary sequence SGRSA, from the P3-P2′ positions of a peptide substrate, affords highly labile subsite occupancy for urokinase. This sequence differs at P3, P1′, and P2′ from the target sequence found in plasminogen (PGRVV) and is cleaved by u-PA greater than 5300 times more efficiently. This major discrepancy, in both primary sequence and lability, between the physiological target sequence and the consensus sequence derived using substrate phage display protocols suggests that a physiological target sequence is not necessarily a reasonable lead compound for the design of specific, small molecule substrates or inhibitors of highly selective serine proteases.
A major contribution to the discrepancy between the physiological and consensus target sequences of u-PA almost certainly arises from the highly conserved mechanism of zymogen activation of chymotrypsin family enzymes (45). Following activation cleavage of a chymotrypsinogen-like zymogen, the P1′ and P2′ residues insert into the activation pocket, where they form a number of conserved hydrophobic interactions as well as a new, buried salt bridge with the aspartic acid residue adjacent to the active site serine (45-47). Because these interactions substantially stabilize the active conformation of the mature enzyme, this key role after activation cleavage places severe functional constraints on the P1′ and P2′ residues of a chymotrypsinogen-like zymogen and consequently prevents the two residues from evolving simply to interact optimally with the activating enzyme. Consistent with this hypothesis, the consensus and physiological target sequences for u-PA agree well on the unprimed side of the scissile bond; however, the two target sequences diverge dramatically at the P1′ and P2′ subsites.
Additional factors are also likely to contribute to the observed discrepancy between the consensus and physiological target sequences for u-PA. For example, modeling studies reported by Lamba, Huber, Bode and co-workers (8) suggest the S1′ and/or the S2′ pockets utilized by u-PA when hydrolyzing plasminogen may actually differ from those used when hydrolyzing peptide substrates. Moreover, as the enzyme diverged from a trypsin-like precursor, u-PA may have evolved a strong dependence for efficient catalysis upon productive interactions with substrates at secondary sites that diminished the contribution of optimal interactions with primary subsites in the active site cleft. Although the location, role, and even the existence of such secondary contacts between u-PA and plasminogen remain obscure at the present time, previous studies of the interaction of u-PA and t-PA with PAI-1 have demonstrated very clearly that these two enzymes are capable of using specific, secondary contacts efficiently both to enhance selectivity and to dampen the influence of optimal primary subsite interactions (42,48-50). Although the reactive center loop of PAI-1 has evolved to match optimal subsite occupancy for urokinase very closely, in the absence of productive contact with a single, strong secondary site of interaction between the two proteins, PAI-1 becomes a poor inhibitor of u-PA (50).
Implications Regarding the Possibility of Additional Physiological Substrates for u-PA-The identification of synthetic peptides that are cleaved up to 120 times more efficiently by u-PA than by t-PA raises the possibility that similar u-PA-selective (or t-PA-selective) physiological substrates may exist that are currently not appreciated. Differences in the phenotypes exhibited by mice lacking either of the two enzymes are consistent with this possibility (5,6). This issue remains uncertain, however, because selective expression of t-PA or u-PA in particular microenvironments could also account for these distinct phenotypes.
Importance of the P3 Residue in Discriminating between u-PA and t-PA-By demonstrating that mutation of the P3 residue alone could alter the relative u-PA/t-PA selectivity of a peptide substrate by a factor of greater than 300 (Table III), this study provided strong support for the hypothesis that the P3 residue was the primary determinant of the ability of a substrate to discriminate between u-PA and t-PA. We have previously reported that occupancy of P3 by arginine or large aromatic or hydrophobic residues favored cleavage by t-PA (11), and this investigation showed that a P3 serine residue favored cleavage by u-PA. In addition, this study demonstrated that more modest alterations of specificity could be achieved by selective occupancy of the P4 and P1 subsites. These data indicated that PAI-1, which contains a P3 serine, has evolved to match optimal subsite occupancy of u-PA more closely than that of t-PA. This observation may explain why PAI-1 inhibits u-PA more rapidly than it inhibits t-PA (Table IV) and suggests that, during the evolution of the fibrinolytic system, there may have been a greater need to suppress the activity of u-PA in the circulation than to regulate t-PA activity. Consistent with this hypothesis, the circulating, single chain form of u-PA is a true zymogen, while t-PA is secreted into the circulation as an active, single chain enzyme.
Substrate Phage Display Can Aid Inhibitor Design-Another implication of these studies is that information gained from the application of substrate phage display libraries can lead directly to the design of specific inhibitors. Although hydrolysis of the selective, small peptide substrates by u-PA is characterized by Km values in the 0.6-3 mM range, it has been routinely observed that the introduction of a transition state bond geometry adjacent to the P1 residue of a protease substrate can create either a reversible inhibitor whose affinity for the target protease is enhanced by 3-6 orders of magnitude or an irreversible inhibitor with an impressive second order rate constant for inhibition of the target protease (>10⁵ M⁻¹ s⁻¹) (for a review, see Ref. 51). Similar results using the substrates identified in this study would create highly selective, small molecule u-PA inhibitors, with affinities in the low nanomolar range, that might be further improved by subsequent, systematic chemical modification.
Conclusion-The ability to identify subtle but significant specificity differences between enzymes that share the same physiological substrates and inhibitors, as demonstrated in this study, addresses a fundamental challenge for both basic enzymology and rational drug design. Advances in this area will significantly enhance understanding of the molecular determinants and mechanisms of specific catalysis and may facilitate the design of highly selective and therapeutically valuable new enzymes.
Washed microbiota transplantation improves renal function in patients with renal dysfunction: a retrospective cohort study
Background Changes in gut microbiota composition are a hallmark of chronic kidney disease (CKD), and interventions targeting the gut microbiota present a potent approach for CKD treatment. This study aimed to evaluate the efficacy and safety of washed microbiota transplantation (WMT), a modified faecal microbiota transplantation method, on the renal function of patients with renal dysfunction. Methods A comparative analysis of gut microbiota profiles was conducted in patients with renal dysfunction and healthy controls. Furthermore, the efficacy of WMT on renal parameters in patients with renal dysfunction was evaluated, and the changes in gut microbiota and urinary metabolites after WMT treatment were analysed. Results Principal coordinate analysis revealed a significant difference in microbial community structure between patients with renal dysfunction and healthy controls (P = 0.01). Patients with renal dysfunction who underwent WMT exhibited significant improvement in serum creatinine, estimated glomerular filtration rate, and blood urea nitrogen (all P < 0.05) compared with those who did not undergo WMT. The incidence of adverse events associated with WMT treatment was low (2.91%). After WMT, the Shannon index of gut microbiota and the abundance of several probiotic bacteria significantly increased in patients with renal dysfunction, aligning their gut microbiome profiles more closely with those of healthy donors (all P < 0.05). Additionally, the urine of patients after WMT demonstrated relatively higher levels of three toxic metabolites, namely hippuric acid, cinnamoylglycine, and indole (all P < 0.05). Conclusions WMT is a safe and effective method for improving renal function in patients with renal dysfunction by modulating the gut microbiota and promoting toxic metabolite excretion.
Background
Chronic kidney disease (CKD), affecting approximately 10% of the global population [1,2], is expected to become the fifth leading cause of death by 2040 [3]. CKD results in a progressive decline in kidney function culminating in end-stage renal disease (ESRD), requiring renal replacement therapy (RRT) for patient survival. The current count of over 2.5 million patients with CKD undergoing RRT is predicted to double, reaching 5.4 million by 2030 [4]. Regrettably, existing therapies offer limited efficacy and only slow disease progression [5]. Consequently, an urgent imperative exists to develop novel approaches that can arrest or reverse the decline in renal function.
Accumulating evidence underscores the involvement of gut microbiota in kidney disease pathophysiology, and a gut-kidney axis has been proposed [6,7]. Profound disparities in gut microbiome composition between patients with CKD and healthy controls have been documented [8,9]. Additionally, animal studies have demonstrated that probiotics, as gut microbiota modulators, can significantly improve renal function in CKD mice [10,11]. However, in human patients with CKD, probiotics can only delay the decline in renal function rather than effect a cure or reversal [10,12]. Given the complexity of bacteria-host interactions, a single-species microbiota-targeted intervention might prove insufficient to improve the outcomes of all patients with CKD [13].
Faecal microbiota transplantation (FMT), involving the transfer of multispecies gut microbiota from a healthy donor to a recipient, has proven effective in treating conditions such as Clostridioides difficile infection, inflammatory bowel disease, and metabolic disorders [14]. Recent clinical studies have shown FMT's potential benefits for hypertension, systemic lupus erythematosus, and hyperuricaemia [15-17], which are considered causative factors of CKD. Furthermore, CKD mice treated with healthy-donor gut microbiota exhibited less severe kidney histopathology and lower serum creatinine (SCr) levels compared with those treated with gut microbiota from patients with ESRD [18]. While individual case reports exist [19], no cohort study has addressed whether FMT can improve renal function in patients with renal dysfunction.
Challenges including intricate sample preparation and the high incidence of adverse events (AEs) restrict FMT's application [20]. Washed microbiota transplantation (WMT), using an automated purification system distinct from traditional FMT, significantly reduces AEs [21]. This study evaluated WMT's efficacy and safety in improving renal function among patients with renal dysfunction.
Study design and patients
This retrospective, single-centre, cohort study adhered to the Declaration of Helsinki and obtained approval from the Ethics Committee of the First Affiliated Hospital of Guangdong Pharmaceutical University (approval number: 2021-123). Written informed consent was obtained from all patients, except in cases where a legal representative consented on behalf of those unable to do so.
The study encompassed consecutive adult inpatients (≥ 18 years of age) who underwent WMT and attended at least one follow-up visit at the Department of Gastroenterology, First Affiliated Hospital of Guangdong Pharmaceutical University from 1 January 2017 to 30 June 2021. Additionally, a control group of patients with renal dysfunction, who did not undergo WMT within the same timeframe, was recruited to assess the effect of WMT on renal parameters. The control group was nearly 1:1 matched for sex and age. The exclusion criteria were as follows: (1) acute gastrointestinal infection within 1 month; (2) antibiotic usage within 3 months (except for those who underwent WMT for antibiotic-associated diarrhoea); (3) pregnancy; (4) ongoing RRT (renal transplantation or dialysis) or substantial renal-affecting medication usage (e.g., diuretics or glucocorticoids); and (5) missing medical data. Sample size estimation was performed using online software (Power and Sample Size Calculators; HyLown Consulting LLC, Atlanta, GA, USA).
Donor selection and WMT procedure
Healthy donors were initially screened using a questionnaire followed by blood and stool tests to rule out communicable diseases, as previously described [15].
A total of 500 mL of 0.9% saline (NaCl) and 100 g of stool sample were homogenised and microfiltered through an automated microbiota purification system (GenFMTer; FMT Medical, Nanjing, China) to prepare the washed microbiota suspension. The faecal microbiota suspension was centrifuged (1100 × g for 3 min at room temperature), and the precipitate was washed with 0.9% NaCl. This process was repeated twice more, each time involving centrifugation and washing. Eventually, 100 mL of 0.9% NaCl was added to resuspend the microbiota precipitate, yielding the final washed microbiota suspension [15].
The WMT procedure involved administering the washed microbiota suspension (120 mL per day for 3 consecutive days) to patients via a transendoscopic enteral tube (for the lower gastrointestinal tract) or a nasojejunal tube (for the upper gastrointestinal tract), according to each patient's specific conditions and preference. Patients received microbial suspensions from healthy donors, allocated at random.
Data collection
Electronic medical records provided the following clinical information: demographic details, body mass index, smoking and alcohol habits, history of comorbidities (e.g., hypertension and type 2 diabetes), history of RRT, medication usage, indication for WMT (organic or functional disease), route of WMT delivery (lower or upper gastrointestinal tract), AEs associated with WMT, and laboratory parameters, including SCr, blood urea nitrogen (BUN), serum uric acid (UA), haemoglobin, serum sodium, serum potassium, serum calcium, serum phosphorus, triglycerides, total cholesterol, and low-density lipoprotein cholesterol (LDL-c).
Definitions
The estimated glomerular filtration rate (eGFR) was calculated as follows: eGFR (mL/min/1.73 m²) = 186 × SCr^(−1.154) × age^(−0.203) × (0.742 if female). Normal renal function was defined as eGFR of ≥ 90 mL/min/1.73 m², while renal dysfunction was defined as eGFR of < 90 mL/min/1.73 m² (CKD stages 2-5) [22]. Alcoholism was defined as weekly alcohol consumption of > 210 g for males and > 140 g for females [23]. Organic diseases encompassed conditions resulting in structural changes to the organs or tissues (e.g., inflammatory bowel disease and chronic liver disease), while functional diseases referred to those lacking structural changes (e.g., functional bowel disorders and gut dysbiosis). WMT-related AEs, including abdominal pain, diarrhoea, and fever, were assessed by physicians based on clinical judgment. The effect of WMT on renal parameters was determined as follows: Δrenal parameter = renal parameter after WMT − renal parameter at baseline.
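As a worked illustration, these definitions translate directly into code. The sketch below assumes SCr is expressed in mg/dL, as in the abbreviated MDRD equation on which this formula is based; it is illustrative, not the study's analysis script.

```python
def egfr_mdrd(scr_mg_dl: float, age_years: float, female: bool) -> float:
    """eGFR (mL/min/1.73 m^2) by the abbreviated MDRD equation used here:
    186 x SCr^-1.154 x age^-0.203 x (0.742 if female)."""
    egfr = 186.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    return egfr * 0.742 if female else egfr

def has_renal_dysfunction(egfr: float) -> bool:
    """Renal dysfunction as defined in this study: eGFR < 90 (CKD stages 2-5)."""
    return egfr < 90.0

# Example: a 60-year-old woman with SCr of 1.1 mg/dL (illustrative values)
e = egfr_mdrd(1.1, 60, female=True)
print(f"eGFR = {e:.1f} mL/min/1.73 m^2 -> renal dysfunction: {has_renal_dysfunction(e)}")
```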
Sample collection
Patient stool, urine, and blood samples were collected 2 days before each WMT session (baseline and approximately 1 month, 2 months, and 6 months after the first WMT). Stool samples from healthy donors used for WMT were also collected for sequencing. The stool samples were contained within stool collection tubes with a deoxyribonucleic acid (DNA) stabiliser (Invitek, Germany). All samples were stored at −80 ℃ until sequencing.
Microbiome analysis
DNA extraction and sequencing were conducted by Majorbio Bio-Pharm Technology Co. Ltd. (Shanghai, China), as previously described [15]. Briefly, DNA was extracted from each stool sample using the E.Z.N.A.® Soil DNA Kit (Omega Bio-Tek, Norcross, GA, USA). DNA concentration was assessed using a NanoDrop 2000 spectrophotometer (Thermo Fisher Scientific, Wilmington, DE, USA). Amplification of bacterial 16S ribosomal ribonucleic acid (rRNA) gene V3-V4 regions was achieved through the 338F and 806R primer sets, and amplicon integrity was verified via agarose gel electrophoresis. Paired-end sequencing was performed using the Illumina MiSeq platform. Raw sequencing reads were deposited in the National Centre for Biotechnology Information Sequence Read Archive under accession number PRJNA790000.
Paired-end sequences were combined using FLASH (version 1.2.11), and subsequent quality filtering was performed using fastp (version 0.19.6). The remaining sequencing data underwent DADA2-based denoising to generate amplicon sequence variants (ASVs) in QIIME2 (version 2020.2). Taxonomic assignment for the ASVs was performed using QIIME2 and the SILVA 16S rRNA database. Sequencing data analyses were performed using the Majorbio Cloud Platform (www.majorbio.com).
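For readers without access to that platform, the downstream diversity statistics reported in this study can be sketched with scikit-bio. The count table, sample sizes, and group labels below are simulated placeholders, and ANOSIM is shown only as one common way to attach an R statistic and P value to such community comparisons; this is not the study's actual pipeline.

```python
import numpy as np
from skbio.diversity import alpha_diversity, beta_diversity
from skbio.stats.distance import anosim
from skbio.stats.ordination import pcoa

# Simulated genus-level count table: rows are samples, columns are genera
rng = np.random.default_rng(1)
counts = rng.integers(0, 500, size=(40, 120))
ids = [f"s{i}" for i in range(40)]
grouping = ["patient"] * 20 + ["donor"] * 20

shannon = alpha_diversity("shannon", counts, ids)   # alpha-diversity per sample
bc = beta_diversity("braycurtis", counts, ids)      # beta-diversity distances
ordination = pcoa(bc)                               # principal coordinates
print(ordination.proportion_explained[:2])          # variance on PCo1/PCo2
print(anosim(bc, grouping, permutations=999))       # R statistic and P value
```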
Metabolomics analysis
For liquid chromatography-mass spectrometry (LC-MS), frozen urine samples were thawed on ice and vortexed. Each urine sample (100 μL) was combined with methanol (300 μL) and 1 μg/mL of L-2-chlorophenylalanine (Bidepharm, Shanghai, China) as an internal standard for protein precipitation. The mixture was sonicated in an ice-water bath for 10 min, followed by incubation at −20 ℃ for 1 h and centrifugation at 14,000 × g at 4 ℃ for 15 min. The supernatant (100 μL) was transferred to a glass vial for LC-MS analysis. A quality control sample was prepared by combining 20 μL of supernatant from each sample.
LC-MS analysis employed a Q Exactive Plus mass spectrometer (Thermo Fisher Scientific), with all samples analysed in positive and negative ionisation modes. The positive mode mobile phase comprised water with 0.1% formic acid (A) and acetonitrile (B), while the negative mode mobile phase comprised water with 5 mM acetic acid (A) and acetonitrile (B). The column temperature was maintained at 35 ℃, with an injection volume of 3 μL. The gradient elution program was run as follows: 0 min, 1% B; 8 min, 99% B; and 10.1 min, 1% B, at a flow rate of 0.4 mL/min. Electrospray ionisation source parameters included sheath gas flow at 45 L/min, auxiliary gas flow at 15 L/min, sweep gas flow at 0 L/min, spray voltage at 4000 V (for positive mode) or −3000 V (for negative mode), and capillary temperature at 400 ℃.
Thermo Fisher Scientific Compound Discoverer (version 3.1) facilitated metabolite annotation of LC-MS data, referencing the BioCyc, Human Metabolome, Kyoto Encyclopaedia of Genes and Genomes, MassBank, and National Institute of Standards and Technology databases. Metabolomics analyses and related graphs were generated using MetaboAnalyst 5.0 online tools (www.metaboanalyst.ca). Based on partial least squares discriminant analysis (PLS-DA) results, variable importance in projection (VIP) scores were calculated. Metabolites with VIP scores > 1.0 in the PLS-DA model and P < 0.05 in the Wilcoxon rank-sum test were identified as differential metabolites.
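For readers who want to reproduce the dual VIP/P-value criterion outside MetaboAnalyst, the sketch below computes VIP scores from a PLS model fitted with scikit-learn and applies the same thresholds. The data are simulated, and using PLSRegression on a binary class label as a stand-in for PLS-DA is an assumption of this illustration, not the study's software.

```python
import numpy as np
from scipy.stats import ranksums
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls):
    """Variable importance in projection for a fitted PLSRegression model."""
    t = pls.x_scores_                  # scores, (n_samples, n_components)
    w = pls.x_weights_                 # weights, (n_features, n_components)
    q = pls.y_loadings_                # y loadings, (n_targets, n_components)
    ssy = (t ** 2).sum(axis=0) * (q ** 2).sum(axis=0)  # Y variance per component
    w_norm = (w / np.linalg.norm(w, axis=0)) ** 2
    return np.sqrt(w.shape[0] * (w_norm @ ssy) / ssy.sum())

# Simulated intensities: 26 samples (13 pre-WMT, 13 post-WMT) x 200 metabolites
rng = np.random.default_rng(0)
X = np.log(rng.lognormal(size=(26, 200)))
y = np.repeat([0.0, 1.0], 13)                        # 0 = pre, 1 = post
pls = PLSRegression(n_components=2).fit(X, y)
vip = vip_scores(pls)
p = np.array([ranksums(X[y == 0, j], X[y == 1, j]).pvalue
              for j in range(X.shape[1])])
differential = (vip > 1.0) & (p < 0.05)              # the study's dual criterion
print(f"{differential.sum()} metabolites flagged as differential")
```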
Statistical analysis
Statistical analysis was performed using SPSS software (version 22.0; IBM, Armonk, NY, USA) and Prism (version 8; GraphPad, San Diego, CA, USA). Continuous data are presented as the mean and standard deviation for normally distributed variables and as the median and interquartile range for non-normally distributed variables. Categorical data are presented as frequencies and percentages. Between-group comparisons of continuous variables were performed using the Student's t-test and the Wilcoxon rank-sum test, while categorical variables were analysed using the chi-square test and Fisher's exact test. For one-sample comparisons (between time points), the one-sample t-test or Wilcoxon signed-rank test was used as appropriate. Statistical significance was determined by a two-tailed P-value of < 0.05.
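A minimal SciPy rendering of this testing scheme, with illustrative numbers only, might look as follows:

```python
import numpy as np
from scipy import stats

a = np.array([1.10, 0.95, 1.30, 1.05, 1.20])   # e.g. SCr in the WMT group
b = np.array([1.25, 1.15, 1.40, 1.35, 1.30])   # e.g. SCr in the control group

# Between-group comparison: t-test if normal, Wilcoxon rank-sum otherwise
print(stats.ttest_ind(a, b))
print(stats.ranksums(a, b))

# Within-patient change from baseline (one-sample comparison between time points)
delta = np.array([-0.10, -0.05, -0.12, 0.02, -0.08])
print(stats.ttest_1samp(delta, popmean=0.0))
print(stats.wilcoxon(delta))

# Categorical data: chi-square test, or Fisher's exact test for sparse 2x2 tables
print(stats.fisher_exact(np.array([[6, 200], [2, 204]])))
```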
Demographic characteristics of patients and healthy donors
Initially, 527 patients who underwent WMT were enrolled, and 253 met the final analysis criteria. Of these patients, 86 had renal dysfunction while 168 did not. Among those with renal dysfunction, 76 were in CKD G2, nine in CKD G3, and one in CKD G4. A control group comprising 86 sex- and age-matched patients with renal dysfunction who did not undergo WMT was also included. Additionally, 25 healthy donors passed the donor screening. The demographic and clinical characteristics of patients and healthy donors are summarised in Additional file 3: Table S1.
Gut microbiota profiles in patients with renal dysfunction and healthy donors
Gut microbiota profiles were compared between patients with renal dysfunction and healthy donors. The phylum-level relative abundances of gut microbes in patients with renal dysfunction and healthy donors are presented in Additional file 1: Fig. S1a. Although no differences were observed in genus-level richness and diversity (Additional file 1: Fig. S1b, Fig. 1b), principal coordinate analysis (PCoA) and nonmetric multidimensional scaling (NMDS) analysis based on β-diversity showed a significantly different microbial community structure between the two groups (Fig. 1c, Additional file 1: Fig. S1c). Compared to healthy donors, patients with renal dysfunction had notable changes in genus-level relative abundances (Fig. 1d, Additional file 1: Fig. S1d). This encompassed reduced relative abundances of Eubacterium coprostanoligenes, Anaerostipes, Monoglobus, and Butyricicoccus (all P < 0.05).
The effects of WMT on renal function in patients without renal dysfunction were also assessed. No significant effects of WMT on renal parameters were observed in these patients, except for a decrease in serum UA after the third WMT (Fig. 3).
Clinical factors associated with the effects of WMT on renal function
Subsequently, potential factors influencing the effects of WMT on renal function were assessed. Among the patients with renal dysfunction, 60 and 26 underwent WMT for functional and organic diseases, respectively. However, no significant differences were observed in the effects of WMT on renal function parameters (SCr, eGFR, BUN, and UA) between these two groups (Fig. 5a). Given hypertensive nephropathy as the primary cause of renal dysfunction, a comparison was drawn between the effects of WMT on renal function parameters in patients with renal dysfunction caused by hypertensive nephropathy (n = 31) and those resulting from other aetiologies (n = 55). However, few significant differences in renal function parameters were observed between patients with hypertensive nephropathy and those with other aetiologies (Fig. 5b).
Effects of WMT on renal disease-related parameters in patients with renal dysfunction
Given that patients with renal dysfunction experience a wide array of complications, including electrolyte disturbances, dyslipidaemia, and anaemia, the impact of WMT on renal disease-related parameters in patients with renal dysfunction was also analysed. The total cholesterol, LDL-c, and haemoglobin levels demonstrated signs of improvement after WMT, while other parameters did not exhibit significant changes after treatment (Additional file 5: Table S3).
AEs of WMT
As safety remains a primary concern in WMT, WMT-related AEs were examined. Among 86 patients with renal dysfunction undergoing 206 WMT procedures, the AE incidence was 2.91%. The most prevalent WMT-related AE was diarrhoea (two WMT procedures, 0.97%), followed by bloating (one WMT procedure, 0.49%), fever (one WMT procedure, 0.49%), vomiting (one WMT procedure, 0.49%), and anal pain (one WMT procedure, 0.49%). Notably, the bloating experienced by one patient resolved spontaneously, while AEs in the remaining five patients improved after symptomatic treatment. No serious AEs were observed.
Gut microbiota profiles in patients with renal dysfunction before and after WMT
Gut microbiota profiles of patients with renal dysfunction were compared before and after WMT to further investigate the potential mechanism by which WMT improves renal function. A total of 26 stool samples (collected at baseline [n = 13], 1 month [n = 9], 2 months [n = 2], and 6 months [n = 2] after the first WMT) were included for analysis. After WMT, the Shannon index (Fig. 6a) at the genus level was significantly higher and the Simpson index was significantly lower (0.24 ± 0.22 vs. 0.09 ± 0.04, P = 0.004; Additional file 2: Fig. S2b), while no significant differences were observed in the abundance-based coverage estimator and Chao indices (Additional file 2: Fig. S2b). Genus-level PCoA (R = 0.139, P = 0.001; Fig. 6b) and NMDS analysis (stress: 0.264, R = 0.139, P = 0.001; Additional file 2: Fig. S2c) demonstrated that the gut microbiota profile of patients with renal dysfunction after WMT tended to resemble that of healthy donors. Notably, several gut genera, including Eubacterium coprostanoligenes, Anaerostipes, Monoglobus, and Dorea, which had initially been significantly reduced in patients with renal dysfunction, were significantly enriched after WMT. Simultaneously, other genera, including Hungatella, were significantly decreased after WMT (Fig. 6c, Additional file 2: Fig. S2d). As presented in Fig. 6d, the relative abundances of several genera correlated with renal parameters in patients with renal dysfunction. For instance, the Eubacterium coprostanoligenes group, Senegalimassilia, and Coriobacteriales incertae sedis abundances were positively correlated with eGFR levels.
Urine metabolic profiles in patients with renal dysfunction before and after WMT
Urine metabolomic profiles from 13 patients with renal dysfunction before and after WMT (with available samples at baseline [n = 13], 1 month [n = 12], 2 months [n = 8], and 6 months [n = 2] after the first WMT) were subjected to metabolomics analysis. As demonstrated by the distinct separation in the PLS-DA score plot (Fig. 7a), points representing pre- and post-WMT stages were clearly separated. Based on the VIP scores derived from the PLS-DA results, the top 15 metabolites are presented in Fig. 7b. Moreover, a heatmap visualised the abundance of the top 25 metabolites ranked by VIP score before and after WMT (Fig. 7c). Among these, 16 metabolites with VIP scores > 1.0 and P < 0.05 were identified as differential metabolites (Additional file 6: Table S4). More importantly, the relative abundances of three toxic metabolites associated with CKD progression, namely hippuric acid, cinnamoylglycine, and indole [24-27], were elevated in the urine of patients after WMT (all P < 0.05). Metabolite set enrichment analysis using the Small Molecule Pathway Database revealed that pathways such as "homocysteine degradation", "sulphate/sulphite metabolism", "methionine metabolism" and "glycine and serine metabolism" were notably altered in patients with renal dysfunction after WMT (Fig. 7d).
Discussion
This study investigated the efficacy, safety, and underlying mechanism of WMT in enhancing renal function among patients with renal dysfunction. The findings revealed that WMT resulted in a significant improvement in renal function for patients with renal dysfunction, while not significantly affecting those without renal dysfunction. In addition, WMT exhibited favourable tolerability and safety, with a low AE incidence (2.91%). After WMT administration, an increase in gut microbiota diversity and in the abundance of specific probiotic bacteria was observed in patients with renal dysfunction. Furthermore, their gut microbiota profiles demonstrated a close resemblance to those of healthy donors, and enhanced removal of toxic metabolites through the urine was evident. This suggests that WMT might improve renal function through gut microbiota regulation and improved toxin excretion (Fig. 8). To the best of our knowledge, this is the first clinical study demonstrating the efficacy and safety of WMT in improving renal function in humans.
Current research highlights that patients with CKD have an altered intestinal microbiota [28]. Consistent with previous studies [29,30], this study reported that the β-diversity of the microbial community was significantly different between patients with renal dysfunction and healthy donors. However, unlike studies finding marked α-diversity variations in mild CKD (CKD stages 1 and 2) compared with patients without CKD, no significant differences in gut microbiota richness or diversity were observed in our study. This aligns with another study suggesting comparable α-diversity in these two patient groups [31]. Furthermore, the study uncovered substantial reductions in the relative abundances of several genera, such as Eubacterium coprostanoligenes, Anaerostipes, Monoglobus, and Butyricicoccus, in patients with renal dysfunction compared with healthy donors, which is consistent with previous reports [29,32-34].
Recent studies have shed new light on the pathogenetic roles of the gut microbiota in kidney diseases, with interventions targeting it (e.g., diet, probiotics, and FMT) holding promise for CKD treatment [13]. Notably, Zhu et al. observed that Lactobacillus casei Zhang administration ameliorated gut dysbiosis and slowed disease progression, yet failed to arrest or reverse renal function decline [10]. Likewise, Wang et al. demonstrated that healthy donor gut microbiota administration effectively lowered SCr and urea levels, mitigating kidney pathology in CKD mice compared with those receiving microbiota from patients with ESRD, thereby suggesting the potential for FMT to reverse kidney disease progression [18]. In our study, WMT targeting the gut microbiota not only arrested but also reversed renal function decline among patients with renal dysfunction.

Several mechanisms could explain these findings. First, patients with CKD exhibit dysbiosis, a change in microbiota composition and structure, with decreased probiotic bacteria and increased pathogenic bacteria [35,36]. After WMT, the abundances of several probiotic genera, such as Dorea and Anaerostipes, often reduced in kidney disease [37,38], increased, while the abundance of potential pathogens such as Hungatella, which is significantly increased in patients with CKD [39], markedly decreased in patients with renal dysfunction after receiving WMT. Second, CKD-associated harmful microbiota generates trimethylamine-N-oxide, implicated in uremic toxin accumulation by activating the renin-angiotensin-aldosterone system [40]. Our study evidenced decreased toxic microbiota abundance after WMT, coupled with increased urinary toxin excretion. Therefore, WMT might promote toxin excretion by reducing trimethylamine-N-oxide production, subsequently improving the renin-angiotensin-aldosterone system [19]. Third, uraemia alters the gut biochemical environment, resulting in intestinal mucosal injury (leaky gut), common in CKD. This promotes lipopolysaccharide translocation and serum proinflammatory cytokine production, such as interleukin (IL)-6 and tumour necrosis factor (TNF)-α, exacerbating kidney injury [41,42]. FMT has been shown to restore intestinal barrier function, lowering serum lipopolysaccharide, IL-6, and TNF-α levels [43], suggesting that WMT may improve renal function by enhancing intestinal barrier integrity and reducing systemic inflammation.
Patients with renal dysfunction who underwent WMT through the lower gastrointestinal tract showed a greater improvement in renal function than those treated through the upper route. Similarly, in studies of other conditions [44] and hypertension [15], colonic FMT demonstrated superiority over nasointestinal FMT. There are two possible explanations for these results. First, location-specific microbes tend to colonise homologous gut regions, suggesting that microbes from the large intestine are more likely to colonise the large intestine than the small intestine [45]. Thus, large-intestine-derived microbes in faecal suspension, when delivered to the large intestine via the lower gastrointestinal tract, might improve microbiota colonisation. Second, patients who received colonic WMT underwent bowel preparation, which potentially facilitated microbiota colonisation, thereby enhancing the therapeutic effect. Electrolyte abnormalities, dyslipidaemia, and anaemia are common systemic complications of CKD [46]. This study suggested a trend of improvement in blood lipids (total cholesterol and LDL-c) and haemoglobin among patients with renal dysfunction after WMT, indicating the potential of WMT to ameliorate CKD-related metabolic abnormalities and anaemia. Similar observations have been made in clinical studies where FMT increased insulin sensitivity in patients with metabolic syndrome and increased haemoglobin in those with anaemia of chronic disease by modulating the intestinal microbiota composition and metabolism [47,48]. However, whether WMT can improve other CKD-related parameters and complications, such as mineral bone disorder and endocrine dysfunction, remains to be investigated.
This study observed a significant reduction in the abundances of Eubacterium coprostanoligenes, Anaerostipes, and Monoglobus in faecal samples from patients with renal dysfunction, consistent with findings in patients with immunoglobulin A nephropathy and renal failure [18, 33]. Furthermore, the abundances of Eubacterium coprostanoligenes, Senegalimassilia, and Coriobacteriales incertae sedis were positively correlated with eGFR levels, indicating a potential protective role against renal disease progression. Interestingly, WMT led to an increase in the abundance of these five genera in patients with renal dysfunction. Further investigation is warranted to assess the therapeutic potential of these genera in CKD management.
Several limitations of our study warrant consideration. First, its retrospective design and small sample size resulted in a limited number of samples from patients with renal dysfunction. Additionally, the use of DNA-stabilising buffer in stool samples posed challenges for metabolomics analysis. Second, several potential confounders that might influence renal disease progression, such as protein, water, and salt intake, medication use, and the underlying cause of renal dysfunction, were incompletely recorded. Third, the relatively short follow-up duration, with only 40% and 15% of the patients completing 3-month and 6-month follow-up after WMT, respectively, precludes assessment of long-term outcomes in patients with renal dysfunction. Future prospective studies with larger cohorts and longer follow-up are therefore warranted.
Conclusions
In conclusion, WMT proves both safe and effective in improving renal function among patients with renal dysfunction by modulating the gut microbiota and promoting toxic metabolite excretion. These findings suggest that targeting the gut microbiota using WMT offers a promising novel approach for treating CKD.
Fig. 1
Fig. 1 Gut microbiota profiles of patients with renal dysfunction and healthy donors. a Study design; b Shannon's diversity index at the genus level; c Principal coordinate analysis of microbiota composition at the genus level; d Wilcoxon rank-sum test bar plot of relative abundances of the top 20 differential genera. * P < 0.05; ** P < 0.01; *** P < 0.001
Fig. 2
Fig. 2 Effects of WMT on renal parameters in patients with renal dysfunction. a Changes in the levels of SCr, eGFR, BUN, and UA in patients with renal dysfunction before and after WMT. b Comparison of the changes in renal parameters between patients with renal dysfunction who did and did not undergo WMT. △renal parameter = renal parameter after WMT − renal parameter at baseline. BUN, blood urea nitrogen; eGFR, estimated glomerular filtration rate; SCr, serum creatinine; UA, uric acid; WMT, washed microbiota transplantation. * P < 0.05; ** P < 0.01; *** P < 0.001; ns, no significance
Fig. 4
Fig. 4 Association between WMT delivery routes and the effects of WMT on renal function. The effects of WMT on SCr (a), eGFR (b), BUN (c), and UA (d) in patients with renal dysfunction who underwent WMT through the upper or lower gastrointestinal tract. △renal parameter = renal parameter after WMT − renal parameter at baseline. BUN, blood urea nitrogen; eGFR, estimated glomerular filtration rate; SCr, serum creatinine; UA, uric acid; WMT, washed microbiota transplantation. * P < 0.05
Fig. 5
Fig. 5 Clinical factors associated with the effects of WMT on renal function. a Effects of WMT on renal parameters in patients with renal dysfunction who underwent WMT for organic or functional disease; b Effects of WMT on renal parameters in patients with renal dysfunction caused by hypertensive nephropathy or other aetiologies. △renal parameter = renal parameter after WMT − renal parameter at baseline. BUN, blood urea nitrogen; eGFR, estimated glomerular filtration rate; SCr, serum creatinine; UA, uric acid; WMT, washed microbiota transplantation. * P < 0.05
Fig. 6
Fig. 6 Gut microbiota profiles in patients with renal dysfunction before and after WMT. a Shannon's diversity index at the genus level; b Principal coordinate analysis (PCoA) of microbiota composition at the genus level; c Wilcoxon rank-sum test bar plot of relative abundances of the top 15 differential genera; d Heatmap of the correlations of genus-level abundances and renal parameters. BUN, blood urea nitrogen; eGFR, estimated glomerular filtration rate; SCr, serum creatinine; UA, uric acid; WMT, washed microbiota transplantation. * P < 0.05; ** P < 0.01; *** P < 0.001
Fig. 7
Fig. 7 Urine metabolic profiles in patients with renal dysfunction before and after WMT. a Partial least squares discriminant analysis (PLS-DA) score plots of the metabolites; b Important metabolites identified by PLS-DA based on variable importance in projection (VIP) scores; c Heatmap of the abundances of the top 25 metabolites based on the VIP scores; d Metabolite set enrichment analysis
Hierarchical mapping of Brazilian Savanna (Cerrado) physiognomies based on deep learning
Abstract. The Brazilian Savanna, also known as Cerrado, is considered a global hotspot for biodiversity conservation. The detailed mapping of vegetation types, called physiognomies, is still a challenge due to their high spectral similarity and spatial variability. There are three major ecosystem groups (forest, savanna, and grassland), which can be hierarchically subdivided into 25 detailed physiognomies, according to a well-known classification system. We used an adapted U-net architecture to process a WorldView-2 image with 2-m spatial resolution to hierarchically classify the physiognomies of a Cerrado protected area based on deep learning techniques. Several spectral channels were tested as input datasets to classify the three major ecosystem groups (first level of classification). The dataset composed of RGB bands plus 2-band enhanced vegetation index (EVI2) achieved the best performance and was used to perform the hierarchical classification. In the first level of classification, the overall accuracy was 92.8%. On the other hand, for the savanna and grassland detailed physiognomies (second level of classification), 86.1% and 85.0% were reached, respectively. As the first work that intended to classify Cerrado physiognomies in this level of detail using deep learning, our accuracy rates outperformed others that applied traditional machine learning algorithms for this task.
Introduction
The ecosystems of savanna cover ∼20% of the Earth's terrestrial area. Especially in tropical regions, they are rich in biodiversity 1 and water resources 2 and play an important role in carbon stock due to their content of above- and below-ground biomass. 3 The Brazilian Savanna, also known as Cerrado, is considered a global hotspot for biodiversity conservation, containing around 4800 endemic species. 1,4 In addition, its water resources feed the three largest watersheds in South America: the Amazon, Prata, and São Francisco. The Cerrado comprises 24% of the Brazilian territory, being the second largest biome in the country. However, despite its ecological relevance, only 8.6% of its native vegetation is in protected areas, i.e., specific regions established to protect biodiversity, water bodies, and other environmental resources. 5 Approximately 47% of the Cerrado native vegetation has already been converted to other land uses, especially pasture (29%) and annual agriculture (9%). 6 Moreover, from 2008 to 2019, the deforestation rates in the Cerrado were higher than in the Amazon biome in 9 of these 12 years. 7 This heavy loss of native vegetation has severe environmental consequences, such as vegetation fragmentation, habitat loss, 8 and reduction of water yield and carbon stocks. 9,10 In this scenario, accurate mapping of Cerrado vegetation is essential to support policies against deforestation and, consequently, to maintain the provision of ecosystem services, since these maps are crucial to assess biodiversity, improve carbon stock estimation within the biome, and guide conservation policies.
The use of remote sensing imagery to perform Cerrado vegetation mapping is still a challenge due to the high spatial variability and spectral similarity among its vegetation types, also known as physiognomies. According to the well-known classification system proposed by Ribeiro and Walter, 11 there are 25 physiognomies, which vary in vegetation structure, tree density, edaphic conditions, and floristic composition. These physiognomies can be grouped into three major ecosystem groups: grasslands, savannas, and forests. Therefore, this classification system has an intrinsic hierarchical structure, where the previous identification of the three major ecosystem groups in a first level of classification may improve the identification of more detailed physiognomies in subsequent levels. Nevertheless, only a few studies have explored the hierarchical aspect of the classification system to map Cerrado vegetation. 12,13 Neves et al. 12 used geographic object-based image analysis (GEOBIA) techniques and the random forest (RF) algorithm to compare hierarchical and nonhierarchical approaches to classify seven Cerrado physiognomies. Although the authors observed the superiority of the hierarchical approach over the nonhierarchical one, the accuracy for some physiognomies was still low. More recently, Ribeiro et al. 13 used GEOBIA and RF and also included a spatial contextual ruleset to represent other environmental factors (e.g., soil type, slope, and elevation) to improve the Cerrado vegetation classification. They classified 13 classes, including 11 vegetation types, with an overall accuracy (OA) of 87.6%. However, their semiautomatic methodology still relied on some subjective tasks, such as the selection of image segmentation parameters.
Previous initiatives have mapped the Cerrado vegetation using medium spatial resolution imagery. 15,16,17,18 The TerraClass Cerrado project 6 utilized Landsat images and region growing image segmentation followed by visual interpretation to map land use and land cover in the entire Cerrado biome in 2013. The forest and nonforest (grasslands and savannas) classes from the TerraClass map obtained accuracies between 60% and 65%. The MapBiomas Project 14,19 classified the entire Cerrado from 1985 to 2017 using Landsat images (30 m) and the RF algorithm to identify forest, savanna, and grassland. Other studies classified more detailed vegetation classes. 16,17,18 Jacon et al. 16 used a hyperspectral image (30-m spatial resolution) to perform automatic classification of seven physiognomies based on time series and multiple discriminant analysis. Ferreira et al. 17 and Schwieder et al. 18 used Landsat images to classify five and seven vegetation classes, respectively. Ferreira et al. 17 employed spectral linear mixture models and Mahalanobis distance, whereas Schwieder et al. 18 utilized the support vector machine (SVM) method and phenological metrics extracted from dense time series.
Regardless of the classification methods, studies have shown that detailed Cerrado vegetation mapping based on Landsat-like imagery is a challenging task, since the Cerrado biome is composed of heterogeneous and seasonal natural vegetation types. In addition, some works 12,15,20 performed a semiautomatic classification using high spatial resolution images, such as WorldView-2 (2-m spatial resolution) 12,15 and RapidEye imagery (5-m spatial resolution). 13,20 Despite the use of a more detailed spatial resolution, the misclassification in transition areas remained an issue. Each physiognomy has a unique biodiversity and is responsible for a specific amount of carbon stocked above and below the ground. 10,11,17 To improve the discrimination of physiognomies and generate a detailed vegetation map, a large variety of machine learning techniques have been employed. Convolutional neural networks (CNNs) are able to perform end-to-end classification, learning features from an input dataset and presenting increasing complexity through the layers of the network. 21 The results achieved with such methods often outperform those obtained with traditional machine learning algorithms, such as RF or SVM. 22 For savanna vegetation, some efforts have already been made with deep learning to delineate tree crowns. 23,24 Nogueira et al. 25 were the first to employ a deep learning-based method to identify vegetation patterns, which include different tree heights, tree cover and shrub, and herbaceous vegetation. Using RapidEye imagery, entire regular image patches were designated as only one class (forest, savanna, or grassland), resulting in a considerable mixture of classes in a single patch. A semantic segmentation (also known as pixelwise classification) of the three major ecosystem groups was performed by Neves et al., 26 using a modified U-net architecture and eight spectral bands of the WorldView-2 satellite image. Compared with the classification approach performed by Nogueira et al., 25 the semantic segmentation results in a better class delineation.
The U-net 27,28 belongs to the group of fully convolutional neural networks (FCNNs 29 ). Compared with more traditional CNNs such as LeNet 30 and AlexNet, 31 which predict a single class for each image patch, FCNNs are tailored to the task of semantic segmentation. In particular, they take an image patch with an arbitrary number of channels as input and predict a label map, usually of the same size as the input. The U-net is composed of a multilayer convolutional encoder, which successively reduces the spatial resolution and increases the number of filters per kernel, and a multilayer convolutional decoder, which upscales the features to the original spatial resolution. They further use skip-connections between encoder and decoder layers of the same spatial resolution to preserve low-level details, required for the precise prediction of object boundaries.
In deep learning methods, originally developed in the computer vision field, the analysis of the contribution of different spectral bands to improve the network accuracy is not yet well explored. The spectral behavior of the physiognomies and their respective major groups relies on the information contained in different wavelengths, represented here by satellite spectral bands. However, the majority of works with deep learning approaches used only red, green, and blue channels 22,32 or included the near-infrared (NIR) one. 25,33 In addition, very few initiatives have applied some hierarchical behavior in classification tasks. 34,35 Therefore, the objective of this work is threefold: (1) to hierarchically classify the Brazilian Savanna physiognomies based on deep learning techniques according to the classification system proposed by Ribeiro and Walter; 11 (2) to evaluate different combinations of spectral bands taken as input dataset in the deep learning classification; and (3) to evaluate the deep learning classification performance in relation to the different training sample selection methods. To the best of our knowledge, this is the first work that produced a Brazilian Savanna map in this level of detail based on deep learning techniques.
Materials and Methods
This study was performed according to the flowchart presented in Fig. 1. In phase A of the methodology [Fig. 1(a)], we investigated several combinations of spectral bands as input to perform a semantic segmentation of the three major Cerrado ecosystem groups (grassland, savanna, and forest). Figure 1(b) shows the steps required to perform the semantic segmentation approach using an adapted U-net CNN architecture. 28 Next, we applied the best input dataset resulting from the first phase to perform the hierarchical Cerrado vegetation mapping [Fig. 1(c)]. All processing procedures are described in detail in the following sections.
Study Site
As study site, a Brazilian protected area was chosen to ensure the analysis of native Cerrado vegetation. The Brasília National Park (BNP) (Fig. 2), located in the Federal District, Brazil, comprises ∼423 km 2 of native Cerrado vegetation, which encompasses the major physiognomies found in the Cerrado biome. 17,36 It contains several endangered species 37 and a dam that is responsible for 25% of the Federal District's water supply. This protected area was also used as study site in several other works, 12,15,17,18,36 which attests to its representativeness and facilitates a comparison among the results.
According to the existing physiognomies in the study site and the classification system proposed by Ribeiro and Walter, 11 we differentiated two hierarchical levels of classes. In the first level, three major ecosystem groups (also known as formations) were classified: forest, savanna, and grassland. In the forest formations, there is a predominance of arboreal species, forming continuous or discontinuous canopy. In the second level, forest was maintained as gallery forest (Mata de Galeria), since it is the only forest physiognomy with significant presence in this area.
In savanna formations, the presence of continuous canopy is uncommon, and there are trees and shrubs scattered over grasses. The areas identified as savannas in the first level were subdivided into woodland savanna (Cerrado Denso), typical savanna (Cerrado Típico), Rupestrian savanna (Cerrado Rupestre), shrub savanna (Cerrado Ralo), and Vereda in the second level. In grasslands, there are predominantly herbaceous species and some shrubs. Four subclasses were differentiated in the second level: shrub grassland (Campo Sujo), open grassland (Campo Limpo), Rupestrian grassland (Campo Rupestre), and humid open grassland (Campo Limpo Úmido). The humid open grassland is a subtype of open grassland, but it was considered an independent class due to its significant presence in the study site. The Cerrado physiognomies hierarchy and their individual definitions and characteristics are presented in Table 1. In addition, their patterns in a WorldView-2 image true color composite can be observed in Fig. 3.
Remote Sensing Data, Preprocessing, and Feature Extraction
The WorldView-2 image (tile ID 103001003373A600), with spatial resolution of 2 m and acquired on July 22, 2014, was utilized in this work. Although the Cerrado vegetation is strongly influenced by seasonality, the chosen image is from the dry season. According to Jacon et al., 16 the spectral separability of the physiognomies increases during the dry season. This image has eight spectral bands: coastal (400 to 450 nm), blue (450 to 510 nm), green (510 to 580 nm), yellow (585 to 625 nm), red (630 to 690 nm), red-edge (705 to 745 nm), NIR1 (770 to 895 nm), and NIR2 (860 to 1040 nm). Initially represented in digital numbers, the image was converted to surface reflectance using the fast line-of-sight atmospheric analysis of hypercubes (FLAASH) algorithm 38 available in the ENVI 5.2 software.
Besides the spectral bands, other features were included as input data in the experiments: the two-band enhanced vegetation index (EVI2) 39 and three components of the LSMM 40 (vegetation, soil, and shade components). The EVI2 is given as

EVI2 = 2.5 × (ρNIR − ρR) / (ρNIR + 2.4 × ρR + 1),   (1)

where ρNIR is the reflectance in the NIR band and ρR is the reflectance in the red band.
To create the LSMM, 10 endmembers (pure pixels) were selected for each component, which is calculated as

r_i = Σ_(j=1..N) (a_ij × x_j) + e_i,   (2)

where r_i is the spectral reflectance mean for the i'th band of a pixel with N components; j is the number of components; i is the number of bands; a_ij is the spectral reflectance of the j'th component of a pixel for the i'th band; x_j is the proportional value of the j'th component of the pixel; and e_i is the error for the i'th band. As we did not have a detailed Cerrado vegetation map and FCNNs require classified patches (rather than points) for training, the entire WorldView-2 image was classified through visual interpretation and then used as reference data. This was performed by specialists in remote sensing with previous experience in mapping Brazilian Savanna vegetation. Considering our intention to differentiate natural vegetation types (Fig. 3), the other minority areas were masked in the reference data (built-up areas, water bodies, bare soil, and burned areas).
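For readers reproducing the feature extraction, Eq. (1) translates directly into a few lines of NumPy. The sketch below is a minimal illustration, not the original processing chain: the `image` placeholder, the assumed band order, and the reflectance scaling to [0, 1] are assumptions.

```python
import numpy as np

def evi2(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Two-band enhanced vegetation index of Eq. (1).

    Both inputs are surface-reflectance arrays scaled to [0, 1].
    """
    return 2.5 * (nir - red) / (nir + 2.4 * red + 1.0)

# Stack RGB + EVI2 into a 4-channel network input. `image` is a placeholder
# (H, W, 8) WorldView-2 reflectance array; the band order assumed here is
# [coastal, blue, green, yellow, red, red-edge, NIR1, NIR2].
image = np.random.rand(160, 160, 8).astype(np.float32)
rgb = image[..., [4, 2, 1]]                 # red, green, blue channels
index = evi2(image[..., 6], image[..., 4])  # NIR1 and red bands
rgb_evi2 = np.dstack([rgb, index])          # (H, W, 4) input patch
```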
Network Architecture
In this work, a variation of the U-net architecture, 27 proposed by Kumar, 28 was used in all tasks of pixelwise classification (semantic segmentation). The architecture 28 mainly follows the design choices of Ronneberger et al. 27 However, the architecture was modified to use zero-padding, instead of unpadded convolution, to preserve the spatial size along the network. As a further modification, the upsampling is based on transposed convolutions with a stride of two along both spatial dimensions.
Network parameters such as the number of layers and number of filters per layer are shown in Fig. 4. The input layer (gray color) represents the size of the sample patches (160 × 160). The depth (N) is the number of bands. The output layer has the same size as the input layer, but its depth is the number of classes C. Every other layer is represented according to the legend; a 2 × 2 max-pooling layer, for instance, is illustrated in pink. The numbers in brackets are the image sizes in each layer, followed by the number of filters. As in the original U-net, skip-connections are used to concatenate information of high spatial resolution (but low complexity) with information of low spatial resolution (but high complexity).
While the last network layer for semantic segmentation is usually modeled by a softmax function, here we chose the sigmoid function because it presented higher OAs in the preliminary tests. This allows the model to predict independent probabilities per class and per pixel. Final class predictions are obtained by choosing the respective classes with the highest probabilities. The network was implemented in a Python environment, using Keras 41 with TensorFlow 42 as backend. An NVIDIA GeForce RTX 2070 Super (8 GB) GPU was used.
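To make the design choices concrete, the sketch below builds a reduced U-net of this kind in Keras, with zero-padded ('same') convolutions, transposed convolutions with stride 2 for upsampling, skip-connections, and a sigmoid output. The depth and filter counts are deliberately smaller than those of Fig. 4 and are assumptions for brevity, not the exact configuration of Ref. 28.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # Two 3x3 convolutions with zero-padding ('same') to keep the spatial size.
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(n_bands: int, n_classes: int, base_filters: int = 32) -> Model:
    inputs = layers.Input(shape=(160, 160, n_bands))

    # Encoder: each level halves the spatial size and doubles the filters.
    c1 = conv_block(inputs, base_filters)
    p1 = layers.MaxPooling2D(2)(c1)
    c2 = conv_block(p1, base_filters * 2)
    p2 = layers.MaxPooling2D(2)(c2)

    # Bottleneck.
    b = conv_block(p2, base_filters * 4)

    # Decoder: transposed convolutions with stride 2 upsample the features;
    # skip-connections concatenate encoder features of the same resolution.
    u2 = layers.Conv2DTranspose(base_filters * 2, 2, strides=2, padding="same")(b)
    c3 = conv_block(layers.concatenate([u2, c2]), base_filters * 2)
    u1 = layers.Conv2DTranspose(base_filters, 2, strides=2, padding="same")(c3)
    c4 = conv_block(layers.concatenate([u1, c1]), base_filters)

    # Sigmoid output: independent per-pixel, per-class probabilities.
    outputs = layers.Conv2D(n_classes, 1, activation="sigmoid")(c4)
    return Model(inputs, outputs)

model = build_unet(n_bands=4, n_classes=3)  # e.g., RGB + EVI2, first level
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```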
Analysis of Spectral Input Data
To test the network performance according to the spectral data used as input, eight datasets were created. The first one is composed of the red, green, and blue bands, and it was considered the baseline dataset, since it is the simplest one and presents the spectral bands commonly used in deep learning approaches. Then, we added one type of information (e.g., NIR spectral bands or EVI2) to each of the following datasets to evaluate how it could improve the model performance. An additional dataset using only the LSMM components was also used. Since the LSMM vegetation component is highly correlated to the EVI2, we did not include a dataset using them together. The datasets include:
• RGB (red, green, and blue bands)
• RGB + EVI2
• RGB + NIR1 + NIR2
• RGB + NIR1 + NIR2 + RedEdge
• eight band (all WorldView-2 spectral bands)
• RGB + LSMM
• LSMM (three LSMM components: vegetation, soil, and shade)
All datasets were divided into regions A, B, and C (Fig. 5), which contain roughly similar distributions of the three classes of the first level of classification. Thereafter, the datasets and the reference data were cropped into nonoverlapping and adjacent tiles of 160 × 160 pixels to be used as samples. Tiles with any no-data value were excluded from further processing (i.e., pixels originally covering built-up areas, water bodies, bare soil, and burned areas). To increase the amount of samples, six data augmentation techniques were employed, as sketched below: transposition, horizontal and vertical flips, and three rotations of 90 deg, 180 deg, and 270 deg (clockwise direction).
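The six augmentations are simple array operations; the important detail is that the image patch and its reference patch must receive the same transform to stay aligned. The helper below is a hypothetical sketch, not code from the original pipeline.

```python
import numpy as np

def augment_pair(img: np.ndarray, ref: np.ndarray):
    """Yield the six augmented versions of an (H, W, C) image patch and its
    (H, W) reference patch: transposition, flips, and rotations."""
    ops = [
        lambda a: a.swapaxes(0, 1),   # transposition (swap rows and columns)
        np.fliplr,                    # horizontal flip
        np.flipud,                    # vertical flip
        lambda a: np.rot90(a, k=3),   # 90 deg clockwise
        lambda a: np.rot90(a, k=2),   # 180 deg
        lambda a: np.rot90(a, k=1),   # 270 deg clockwise (90 deg ccw)
    ]
    for op in ops:
        yield op(img), op(ref)
```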
The samples selected from regions A, B, and C were combined as follows: 70% of the samples from two regions (e.g., A and B) were randomly selected for training, and 30% for validation. The resulting network was then tested on the remaining region (e.g., C). This experiment was repeated three times, i.e., until the three regions had a semantic segmentation resulting from the cross-validation approach. Table 2 shows the number of samples used in each experiment. For the training and validation sets, those numbers include samples generated through the data augmentation procedure.
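This region-based protocol amounts to a leave-one-region-out loop. The sketch below assumes a hypothetical `samples` mapping from region labels to (image patch, reference patch) pairs; the names are illustrative only.

```python
import random

# Hypothetical container: samples[region] holds that region's
# (image_patch, reference_patch) pairs after tiling.
samples = {"A": [], "B": [], "C": []}
rng = random.Random(0)

for test_region in ("A", "B", "C"):
    pooled = [s for r in samples if r != test_region for s in samples[r]]
    rng.shuffle(pooled)
    split = int(0.7 * len(pooled))          # 70% training, 30% validation
    train_set, val_set = pooled[:split], pooled[split:]
    test_set = samples[test_region]         # network tested on held-out region
    # ...train the U-net on train_set/val_set, then evaluate on test_set...
```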
During training, the early stopping criterion (also known as "patience" in Keras) was set to 50, i.e., if the validation accuracy did not increase for 50 epochs, the training process was halted to avoid overfitting. To reduce the misclassification at the tile borders, the resulting image was created through a sliding window approach with steps of 20 pixels. All procedures [Fig. 1(a)] were executed for each one of the eight datasets to classify the three first-level classes (forest, savanna, and grassland). The outputs of this phase are thematic maps, accuracy measures, and the best set of input data to be utilized in the next processing phase to achieve the hierarchical classification.
Fig. 5 Regions A, B, and C used to generate sample patches in the spectral input data analysis, according to Table 2.
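The patience criterion maps directly onto Keras's EarlyStopping callback, and the sliding-window inference can be sketched as below. Averaging the overlapping tile probabilities is one plausible way to merge the windows; the text does not state the exact merging rule, so that choice, like the helper names, is an assumption.

```python
import numpy as np
import tensorflow as tf

# Stop training if validation accuracy has not improved for 50 epochs.
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_accuracy", patience=50)
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=1000, callbacks=[early_stop])

def sliding_window_predict(model, image, tile=160, step=20, n_classes=3):
    """Classify a large image by averaging overlapping tile predictions,
    which softens the misclassification at tile borders."""
    h, w, _ = image.shape
    probs = np.zeros((h, w, n_classes), dtype=np.float32)
    counts = np.zeros((h, w, 1), dtype=np.float32)
    for top in range(0, h - tile + 1, step):
        for left in range(0, w - tile + 1, step):
            patch = image[np.newaxis, top:top + tile, left:left + tile]
            probs[top:top + tile, left:left + tile] += model.predict(patch, verbose=0)[0]
            counts[top:top + tile, left:left + tile] += 1.0
    return np.argmax(probs / np.maximum(counts, 1.0), axis=-1)
```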
Hierarchical Semantic Segmentation
Using the input dataset that yielded the best performance in the previous processing phase, we carried out the semantic segmentation approach to hierarchically classify the Cerrado physiognomies. Differently from the first level, the classes of the second level are unbalanced, i.e., they do not present similar distributions across the three image regions (Fig. 5). As the unbalanced class distribution can create artifacts, a different approach for sample generation was employed.
Initially, the WorldView-2 image was partitioned into so-called superpixels 43 using the simple linear iterative clustering (SLIC) algorithm, implemented in the scikit-image Python package. 44 SLIC is an adaptation of the k-means algorithm, 45 which computes a weighted distance measure combining color (in the CIELAB color space) and spatial proximity. As input for SLIC, we used the red, green, and blue bands, since the CIELAB color space is defined by the lightness (color brightness). It is also possible to control the superpixels' compactness. If its value is large, spatial proximity is more relevant, and therefore superpixels are more compact (close to a square in shape). However, when the compactness value is small, they adhere more to image boundaries and have less regular size and shape. 46 In this study, a compactness value equal to 400 was used. This value depends on the range of the data values and was chosen empirically to create superpixels that adhere well to the patterns of interest.
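A minimal sketch of the superpixel step, assuming the scikit-image implementation, is shown below. The target number of superpixels is not reported in the text, so `n_segments` is a placeholder, as is the random input image.

```python
import numpy as np
from skimage.segmentation import slic

# rgb: (H, W, 3) red/green/blue composite; placeholder data for illustration.
rgb = np.random.rand(512, 512, 3)

# With a three-channel input, slic converts to CIELAB by default and trades
# off color similarity against spatial proximity via `compactness`.
segments = slic(rgb, n_segments=2000, compactness=400, start_label=1)
```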
Figure 6 shows how sample patches are generated. For each superpixel, a class was assigned based on the majority of the corresponding pixel classes in the reference image. Thereafter, superpixel centroids were calculated and used as the center point of each sample of 160 × 160 pixels. Each sample corresponds to the pair composed of one patch of the reference image and one patch of the WorldView-2 image. For each class of interest, 1000 centroids were randomly selected to generate sample patches. The sample patches may contain transition areas between physiognomies, which is a positive aspect, because it enables the network to learn the context in which each physiognomy occurs. Similar to the previous experiment (Sec. 2.4), all samples containing any no-data value were excluded. Then, the six data augmentation techniques mentioned before were applied to the remaining training and validation samples.
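The centroid-based sampling can be sketched as follows; the helper below is hypothetical, with scikit-image's regionprops providing per-superpixel pixel coordinates and centroids, and `reference` assumed to hold non-negative integer class codes.

```python
import random
import numpy as np
from skimage.measure import regionprops

PATCH, HALF = 160, 80

def centroid_samples(image, reference, segments, per_class=1000, seed=0):
    """Cut PATCH x PATCH (image, reference) pairs centred on superpixel
    centroids; each superpixel gets the majority class of its pixels."""
    centroids_by_class = {}
    for region in regionprops(segments):
        rows, cols = region.coords[:, 0], region.coords[:, 1]
        majority = int(np.bincount(reference[rows, cols]).argmax())
        cy, cx = int(round(region.centroid[0])), int(round(region.centroid[1]))
        centroids_by_class.setdefault(majority, []).append((cy, cx))

    rng = random.Random(seed)
    h, w = reference.shape
    samples = []
    for centroids in centroids_by_class.values():
        rng.shuffle(centroids)
        for cy, cx in centroids[:per_class]:
            top, left = cy - HALF, cx - HALF
            if top >= 0 and left >= 0 and top + PATCH <= h and left + PATCH <= w:
                samples.append((image[top:top + PATCH, left:left + PATCH],
                                reference[top:top + PATCH, left:left + PATCH]))
    return samples
```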
The complete sample set was randomly split: 70% and 30% were assigned for training and validation, respectively. The sliding window approach was also employed to create the results. To use the same sample generation approach in the entire hierarchical process, the semantic segmentation was repeated for the first level (forest, grassland, and savanna). Subsequently, the semantic segmentation approach was employed for each resulting savanna and grassland map. The final Cerrado physiognomies map (and the respective accuracy metrics) is composed of the forest map (gallery forest), the savanna map (shrub savanna, typical savanna, woodland savanna, Rupestrian savanna, and Vereda), and the grassland map (open grassland, shrub grassland, Rupestrian grassland, and humid open grassland). These last two were generated in the second level of classification. These methodological steps are represented in Figs. 1(b) and 1(c).
The Vereda physiognomy has a minority presence in the BNP, so its area is not large enough to be included in the deep learning classification. Therefore, this physiognomy was included in the first level of classification (as part of the overall savanna class), but it was manually identified in the second level. Consequently, it is present in the final map but not considered in the confusion matrices and accuracy metrics. Another relevant detail concerns the generation of samples for the savanna and grassland physiognomies in the second level of classification. When generating the samples of the four grassland types, for instance, any other class present in the sample patch (e.g., forest or savanna) was considered as others, a temporary class. If the pixels corresponding to others had simply been excluded, the network would be unable to learn patterns of transition between grassland physiognomies and the other classes, such as forest and savanna. As the output of the network is a probability for each pixel and each class, whenever the network classified a pixel as others, the second highest probability was considered, according to the example of Fig. 7.
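The fallback from others to the runner-up class can be expressed in a few lines of NumPy; the channel index and placeholder probabilities below are assumptions for illustration.

```python
import numpy as np

# `probs` holds the network's sigmoid outputs with shape (H, W, C), where
# channel OTHERS is the temporary "others" class.
OTHERS = 4
probs = np.random.rand(160, 160, 5).astype(np.float32)  # placeholder output

pred = probs.argmax(axis=-1)

# Wherever "others" wins, fall back to the class with the second highest
# probability, so every pixel receives a valid physiognomy label.
masked = probs.copy()
masked[..., OTHERS] = -np.inf          # exclude "others" from the argmax
second_best = masked.argmax(axis=-1)
pred = np.where(pred == OTHERS, second_best, pred)
```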
Accuracy Assessment
The obtained semantic segmentation maps were then compared with the respective reference data, and confusion matrices were generated. Since we are performing a hierarchical classification, misclassifications in the first level directly influence the results of the second level, i.e., if a pixel of shrub savanna was classified as grassland in the first level, it will still be considered as an error in the confusion matrix of the second level. Based on each confusion matrix, the following metrics were computed: OA, recall, precision, and F1-score. The OA corresponds to the percentage of pixels with the respective labels assigned correctly. Precision, also known as user's accuracy, is the proportion of pixels predicted for a class that actually belong to that class; it is the complement of the commission error. Recall, also known as producer's accuracy, is the proportion of pixels of a particular class successfully identified; it is the complement of the omission error. The F1-score is the harmonic mean of precision and recall, computed for each class.
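These metrics follow directly from the confusion matrix. A small sketch, taking rows as reference classes and columns as predictions (the toy matrix below is invented for illustration):

```python
import numpy as np

def accuracy_metrics(cm: np.ndarray):
    """OA plus per-class precision, recall, and F1-score from a confusion
    matrix `cm` with rows = reference classes, columns = predictions."""
    tp = np.diag(cm).astype(float)
    precision = tp / cm.sum(axis=0)    # complement of commission error
    recall = tp / cm.sum(axis=1)       # complement of omission error
    f1 = 2 * precision * recall / (precision + recall)
    oa = tp.sum() / cm.sum()
    return oa, precision, recall, f1

# Example with a toy 3-class matrix (forest, savanna, grassland).
cm = np.array([[90, 5, 5],
               [4, 88, 8],
               [2, 10, 88]])
oa, precision, recall, f1 = accuracy_metrics(cm)
```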
Assessment of the Spectral Input Information
Table 3 presents the accuracies of the training and validation steps for the assessment of the spectral input information, as well as the number of epochs needed to stabilize the network for each experiment. It took about 5 h to train each network. Due to the patience parameter of 50, all training runs stopped before 100 epochs, and stabilization occurred between epochs 5 and 93.
In general, all accuracy values were higher than 87.5%. The highest accuracies in training were reached using the RGB + NIR1 + NIR2 + RedEdge dataset in experiments 1 and 3, whereas, in experiment 2, the highest accuracy was obtained for RGB + EVI2. In the validation step, the highest accuracies for experiments 1 and 2 were reached using RGB + EVI2, whereas the highest accuracy in experiment 3 was observed with the RGB + NIR1 + NIR2 + RedEdge dataset.
For the test step, the OAs and F1-scores per class are presented in Table 4. The OA varied from 87.4%, using RGB + LSMM, to 89.3% with the RGB + EVI2 dataset. It could be expected that the eight-band dataset would achieve the highest performance, since it contains more bands and, consequently, most of the spectral information. However, it obtained the second worst OA value of 87.6%. Despite presenting the lowest OA, the RGB + LSMM dataset had the highest F1-score (0.91) for the forest class. This is also reflected in the class delineation in the mapping result. For savanna and grassland, the highest F1-scores (0.92 and 0.84, respectively) were achieved with the dataset with the best OA, RGB + EVI2.
To analyze the results of Table 4 in more detail, Fig. 8 shows selected patches of the WorldView-2 image, the reference, and the thematic maps using the RGB + EVI2 and the RGB + LSMM datasets. At this scale, the misclassified areas between grassland and savanna (G × S), grassland and forest (G × F), and savanna and forest (S × F) are highlighted in different colors. Despite the small difference of 1.9 percentage points between the best (RGB + EVI2) and the worst (RGB + LSMM) datasets, the resulting maps show significant dissimilarities.
In all maps, the major areas of misclassification occur between grassland and savanna (G × S), followed by savanna and forest (S × F). There are only a few areas of confusion between grassland and forest (G × F). This behavior is expected, since classification confusion occurs mainly in transition areas, considering an increasing scale of vegetation density in the existing physiognomies (i.e., G × S and S × F). In addition, the higher forest F1-score with RGB + LSMM, already observed in Table 4, is also reflected in the maps. In Fig. 8, we notice a better delineation of forest areas when using this dataset, even better than in the RGB + EVI2 dataset results.
Detailed Physiognomies Mapping
For the hierarchical classification, we used the input dataset composed of the RGB + EVI2 bands, since it achieved the best performance in the assessment of the spectral input information for the first level, especially for savanna and grassland. For the first level of classification, the accuracy during training was 97.9%, achieved after 147 epochs. The confusion matrix for the validation step is presented in Table 5. The matrix is presented in terms of number of pixels, and the OA was 92.8%. Forest obtained the highest F1-score of 0.95, and the other two classes achieved F1-scores higher than 0.91. The grassland recall (0.89) was the only metric lower than 0.90, as 10.5% of the grassland pixels were misclassified as savanna. In general, our deep learning approach yielded a very accurate classification of the three classes. The confusion between grassland and savanna, which presented the highest error rate, occurs mainly along the class borders, where it is indeed difficult to define when one class becomes another, even in field campaigns. In a hierarchical classification, results of the first level directly affect the performance of the second level. Considering the detailed reference of savanna physiognomies and the resulting classification of the first level, 92.7% of the woodland savanna was correctly included in the savanna class, while 6.4% was misclassified as forest (Table 6). The typical savanna had the highest percentage of area classified as savanna (98.2%). The Rupestrian savanna showed the highest percentage of misclassification: 88.3% of its area was classified as savanna, whereas 10.5% was classified as grassland. Pixels of savanna classified as forest or grassland were included as errors in the recall of the second level of classification for savanna physiognomies (Table 7).
Similar to Table 6, typical savanna showed the best performance in the classification of the detailed physiognomies, with an F1-score of 0.91 (Table 7). The other three savanna physiognomies achieved F1-scores from 0.84 (shrub savanna) to 0.88 (Rupestrian savanna). Most of the misclassified pixels of woodland and shrub savannas were labeled as typical savanna. That was an expected behavior, since shrub, typical, and woodland savanna (in this order) compose an increasing scale of vegetation density and biomass. Regarding Rupestrian savanna, the network was able to identify its pattern among the savanna physiognomies, resulting in a precision of 0.91. However, the confusion of 10.5% of this physiognomy with grassland in the previous level of classification led to a recall of only 0.84. During the training step, the accuracy for savanna physiognomies was 96.6% after 190 epochs. In the validation step, the OA was 95.6%. However, when errors from the first level of classification are propagated to the second level, the OA becomes 86.1%. Considering the reference of the detailed grassland physiognomies, almost all (99.4%) open grassland was correctly classified as grassland in the first level (Table 8). Rupestrian, shrub, and humid open grassland had 94.1%, 85.5%, and 79.4% of their areas classified as grassland, respectively. For the shrub and humid open grassland, 14.2% and 18.0%, respectively, were misclassified as savanna in the first level. In the hierarchical classification system proposed by Ribeiro and Walter, 11 humid open grassland is a subtype of open grassland. However, it was considered an independent class in this work, as it presents a pattern very different from the traditional open grassland. Moreover, preliminary experiments showed that separating these two classes increased the open grassland's OA by more than 2 percentage points.
During the training step, the OA of the classification of the detailed grassland physiognomies was 96.3% after 171 epochs. In the validation, the network achieved an OA of 95.6%. After the inclusion of the errors from the first level of the classification, the grassland physiognomies OA decreased to 85.0%. The resulting confusion matrix is presented in Table 9. The F1-scores varied from 0.86 (humid open grassland) to 0.94 (open grassland). Shrub and Rupestrian grassland had F1-scores of 0.89 and 0.93, respectively. The largest amount of misclassified pixels of shrub grassland was classified as open grassland. For open, Rupestrian, and humid open grassland, the largest amount of misclassified pixels was classified as shrub grassland, the majority grassland physiognomy in the BNP.
The results for the savanna and grassland physiognomies are presented in Fig. 9. This figure also includes gallery forest, generated in the first level of classification. Therefore, a detailed mapping of all physiognomies in the BNP is created. In addition to the reference and the predicted images, the image representing the differences between both is also presented to clearly show correct and incorrect results. To better visualize the development of the hierarchical classification, two zoomed regions are also presented. Just like in Tables 7 and 9, the errors from the first level of classification are carried over to the second level, so no misclassified area from the first level can become correctly classified [represented in dark blue, Fig. 9(c)] in the second level.
Assessment of Spectral Input Information and Sample Generation
The majority of research in optical RS that applies deep learning techniques uses images with the three most common channels (spectral bands): RGB (red, green, and blue). 22,32 Others also include an NIR channel. 25,33 These works rarely use all available satellite bands, sometimes because of the increase in processing time, or because the network architecture and the algorithms are prepared to use only three input layers. In the RS field, several types of extra spectral information, such as the yellow or red-edge bands (available in WorldView-2), may be used. Zhang et al. 47 tested the efficiency of using datasets containing four and eight bands of WorldView-2 and WorldView-3 images. The authors achieved better accuracies with the eight-band dataset, although they classified only different urban targets (e.g., buildings and roads). While more bands in general improve the results, in this study it was observed that the use of some bands, such as the yellow band (present in the eight-band dataset), did not improve the classification of vegetation patterns. Thus, an increase in the number of bands is not necessarily directly related to an increase in OA. Deep learning networks learn features from the training data to identify the desired classes. For this reason, it is believed that it is not necessary to give the network supplementary information, commonly called handcrafted features (e.g., vegetation indices, fractions of the LSMM), in addition to the satellite spectral bands. 48 However, our results showed the opposite: by combining a handcrafted feature (vegetation index) and original data, we obtained the best OA. The EVI2 enhanced the information regarding vegetation biomass in a way that does not occur when using only the red and NIR bands. Thus, the OA of the RGB + EVI2 dataset was higher than the OA of the other datasets containing red and NIR bands (eight band, RGB + NIR1 + NIR2, and RGB + NIR1 + NIR2 + RedEdge). A better performance when including vegetation indices was also observed by Kerkech et al., 49 although the authors used different indices and tested them in a different domain (crop disease detection).
On the other hand, the inclusion of the three fractions (vegetation, soil, and shade) of the LSMM as input did not increase the OA. However, the vegetation and shade LSMM fractions highlighted and represented well the conditions of dense and high vegetation as well as the presence of shade (due to differences in tree heights), improving the forest delineation. This resulted in the highest F1-score for this class using the RGB + LSMM dataset. The extraction of LSMM fractions from high spatial resolution images is still useful, since the mixture of vegetation targets is still present in 2 × 2 m pixels. 50,51 In a pixel of woodland savanna, for instance, the proportions of the vegetation and shade fractions are mostly higher than in a pixel of shrub savanna. These fractions are widely used to classify vegetation in the Cerrado 14,15,17,52,53 with traditional machine learning techniques. In the deep learning field, new methodologies have been tested to generate the LSMM fractions as outputs of CNNs, 54 but they had never been tested as input layers in a semantic segmentation deep learning approach.
Regarding the generation of samples for network training and validation, we employed two procedures. The first, used in the analysis of spectral input data, splits the image into three parts, two of which generated the training and validation samples, while the last one was used to test the network. In the second procedure, employed in the hierarchical classification, the samples were generated based on superpixel centroids. The first procedure is commonly used, since it is able to detect patterns of the same class in different regions of the image. 55,56 However, in a case with several classes with different occurrences across the study site, the second procedure may be more appropriate to ensure the classification represents all classes and contexts. In this study, the OA with the first procedure was 89.3%, rising to 92.8% with the second one, both for the first level of classification and using the RGB + EVI2 dataset.
Fu et al. 57 used an approach similar to the second procedure. They generated the samples using object centroids but with a multiresolution segmentation algorithm to classify general classes (e.g., water, road, and building). The accuracies of this approach with a CNN were 5% to 10% higher than the accuracies achieved with GEOBIA. Superpixel segmentation algorithms, such as SLIC, create more uniform objects, whereas traditional segmentation algorithms (e.g., multiresolution segmentation) generate objects with different sizes and shapes. Using the latter to create the network samples could potentially generate patches with irregular proportions of the classes due to the centroid positions: 58 in segments with irregular shapes, the centroids are not always located inside the segment. An elongated and curved segment of gallery forest, for example, could generate a centroid outside that class and fail to represent that pattern.
Hierarchical Classification
The high OAs achieved in our results are assumed to be mainly related to two aspects: the deep learning methodology and the high spatial resolution of the WorldView-2 imagery. Using coarser spatial resolution and the same three classes of the first level, previous works 6,14,59 achieved lower OAs with traditional machine learning techniques. Although these works identified only three classes, it is worth noting that each class has a high intraclass variability. Therefore, the grassland class, for instance, contains the patterns (i.e., spectral behavior) of many different types of grasslands (four, in the BNP). Still using coarser spatial resolution (30 m), Ferreira et al. 17 and Schwieder et al. 18 improved the detail of the vegetation classes. Schwieder et al. 18 had recalls of 86% and 64% for gallery forest and shrub grassland, respectively, whereas we achieved 95% and 83%.
In Ref. 17, the classes shrub Cerrado and woodland Cerrado are equivalent to our shrub grassland and shrub savanna classes, respectively. While they correctly classified 75% of both, we obtained recalls of 83% and 81% for shrub grassland and shrub savanna. Thus, the handcrafted features used by traditional machine learning algorithms alone are probably not enough to acquire all the information needed about the vegetation classes. Improving the spatial resolution of the input imagery is also not enough to classify the Cerrado vegetation. Keeping the RF algorithm and switching to the same spatial resolution used in this work (2 m), Girolamo Neto 15 and Neves et al. 12 obtained 88.9% and 88.2%, respectively, when differentiating the classes forest, savanna, and grassland. When dealing with the physiognomies, they achieved recalls of 39.15% and 39.25% for shrub savanna and 63.32% and 72.51% for shrub grassland, respectively, whereas we achieved 81% and 83% for these two classes.
Each detailed physiognomy has its own floristic composition, vegetation density, and edaphic factors. The woodland savanna, due to its dense vegetation, is often confused with forests, whereas the shrub savanna, with a more sparse vegetation, is confused with shrub grassland, a grassland physiognomy. 12,15,16 The confusion between forest and grasslands is rather rare. Although such a confusion is surprising, it may be related to the presence of humid open grassland areas in the BNP. This physiognomy is located predominantly close to the gallery forests 11 and, consequently, the misclassified areas occur at the boundaries between these two classes.
The physiognomies, according to the classification system used in this work, 11 present an increasing scale of density and, consequently, biomass. Under these circumstances, the most common errors occur in transition areas between the physiognomies. The majority of works that classified detailed physiognomies using traditional machine learning techniques performed the validation using independent random points. 12,15,16,52 As we performed a semantic segmentation approach, the validation samples (as well as the training samples) were independent, entirely classified patches (160 × 160 pixels). Thus, our approach generates a more robust evaluation of the delineation of the physiognomies and, consequently, of the misclassification in transition areas.
The use of a hierarchical classification approach was intended to minimize the confusion between savanna and grassland physiognomies or between any of them and forest patterns. Compared with the few other works that also used hierarchical approaches, 12,13 ours presented superior accuracy rates. Thus, we demonstrated the potential of applying deep learning techniques to open problems in the RS field.
It is important to note, though, that in many applications, and especially in RS, one of the main limitations of deep learning is the requirement of a large amount of training samples to achieve acceptable results. 60 The amount of samples used in this work (see Secs. 2.4 and 2.5) was satisfactory to differentiate the physiognomies in the BNP. Despite covering a proportionally small area of the Cerrado biome, the BNP is a preserved area representative of the Cerrado ecosystems and contains the major physiognomies of the biome. 17 However, due to the physiognomy heterogeneity across the Cerrado biome, 61 the application of our methodology to the entire biome would require the inclusion of more samples during the training phase.
Conclusions
This study proposed and evaluated a new methodology based on deep learning techniques to hierarchically classify the Cerrado physiognomies, according to the Ribeiro and Walter 11 classification system. The use of deep learning techniques enabled the creation of maps with higher detail and accuracy than other techniques such as GEOBIA and traditional machine learning. Although it does not completely prevent misclassifications, the use of a hierarchical approach reduced the confusion between detailed classes. Testing several datasets as input to the networks showed that the best dataset was composed of the RGB bands plus EVI2.
Each Cerrado physiognomy has a different amount of biomass above and below ground and a unique biodiversity. Consequently, the proper identification and delineation of the Cerrado physiognomies are fundamental to truly understand the role of savanna biomes in the global carbon cycle. In addition, Cerrado vegetation maps can be used as a proxy in biodiversity studies, since a high rate of endemism can be found in its physiognomies. The greatest limitation when mapping the physiognomies of the Cerrado, the second largest biome of a continental-size country such as Brazil, is the lack of reference data.
The difficulty in differentiating the physiognomy patterns, associated with the biome extension, results in few available vegetation maps, usually covering only three classes (forest, savanna, and grassland) and with a spatial resolution of around 30 m. This limitation is even more problematic when dealing with a semantic segmentation approach, since reference points are not enough and entirely classified patches are required as samples for network training. In addition, we should point out that using a high spatial resolution image also has some limitations for providing a ground truth regarding the Cerrado vegetation types. Due to the huge spatial variability of this vegetation, even in field campaigns it is hard to determine where one physiognomy ends and another starts.
Under these circumstances, we performed the hierarchical classification methodology using deep learning in a relevant protected area, the BNP, and it was quite efficient in classifying the three major groups of physiognomies, in the first level, and 10 detailed physiognomies, in the second level. As the Cerrado contains several particularities across the biome, the reproduction of the method for the entire biome would require the availability of high spatial resolution images and reference data at the same spatial resolution to generate more samples for training and validation.
For future work, we suggest the inclusion of additional satellite data to consider other aspects of the physiognomies in the mapping (e.g., satellite image time series to include the physiognomies' seasonality in the analysis). As investigations of time series (without considering the spatial context) with well-known techniques have already been carried out and may not be enough, we also suggest keeping the high spatial resolution and applying deep learning architectures that are appropriate for time series data, such as long short-term memory networks. The inclusion of active RS data, such as radar and LiDAR (light detection and ranging) data, can also provide additional information, especially on the vegetation structure, to differentiate the physiognomy patterns.
Fig. 2
Fig. 2 Location of the BNP image (true color composite) in the Brazilian savanna.
Fig. 1
Fig. 1 Methodological flowchart presenting: (a) the analysis of spectral input data, which generates the most accurate input dataset to be used in the next part; (b) the semantic segmentation approach; and (c) the hierarchical mapping methodology, where the semantic segmentation icon is used to indicate every time the semantic segmentation is performed.
Fig. 3
Fig. 3 Patterns of the physiognomies in the WorldView-2 image (true color composite).
Fig. 4
Fig. 4 Modified U-net architecture (adapted from Ref. 28). The N, in the input size, is the number of bands, while the C, in the output size, corresponds to the number of classes.
Fig. 6
Fig. 6 Generation of sample patches. RP1 and WP1 are the reference and WorldView-2 patches of sample 1, respectively, and RP2 and WP2 are the reference and WorldView-2 patches of sample 2.
Fig. 7
Fig. 7 Example of savanna classification (second level of classification) demonstrating: (a) minority edges predicted as others (in black); and (b) replacement of the others class by the second highest probability of the network output.
Fig. 8
Fig. 8 Patches of: (a) the WorldView-2 image; (b) the reference data; (c) resulting thematic map using the RGB + EVI2 dataset; and (d) resulting thematic map using the RGB + LSMM dataset. G × S are the misclassified areas between grassland and savanna; S × F, between savanna and forest; and G × F, between grassland and forest.
Fig. 9
Fig. 9 Result of the detailed physiognomies mapping, showing: (a) predicted classified image; (b) reference image; (c) difference between predicted and reference images; and (d) zoom of two regions to show the result in more detail.
Table 1
Detailed description of the physiognomies, adapted from the Ribeiro and Walter classification system. 11
Table 2
Regions and number of samples used for training, validation, and testing procedures in each cross-validation experiment.
Table 3
Training and validation accuracies for all datasets in the three experiments (see description in Table 2). The highest values in training and validation for the three experiments are given in bold.
Table 4
OAs (%) and class F1-scores for all input datasets. The highest values are given in bold.
Table 5
Confusion matrix (in number of pixels), precision, recall, and F1-score (highlighted in bold) for the first level of classification, using the RGB + EVI2 dataset. OA = 92.8%.
Table 6
Analysis of the result of the second level of classification for savanna physiognomies regarding the first level resulting map (%).
Table 7
Confusion matrix (in number of pixels) for the savanna physiognomies, in the second level of classification (OA = 86.1%). Precision, recall, and F1-score are highlighted in bold.
Table 8
Analysis of the result of the second level of classification for grassland physiognomies regarding the first level resulting map (%).
Table 9
Confusion matrix (in number of pixels) for the grassland physiognomies, in the second level of classification (OA = 85.0%). Precision, recall, and F1-score are highlighted in bold.
Dissecting the Role of PCDH19 in Clustering Epilepsy by Exploiting Patient-Specific Models of Neurogenesis
PCDH19-related epilepsy is a rare genetic disease caused by the defective function of PCDH19, a calcium-dependent cell–cell adhesion protein of the cadherin superfamily. This disorder is characterized by a heterogeneous phenotypic spectrum, with partial and generalized febrile convulsions that gradually increase in frequency. Developmental regression may occur during disease progression. Patients may present with intellectual disability (ID), behavioral problems, motor and language delay, and low motor tone. In most cases, seizures are resistant to treatment, but their frequency decreases with age, and some patients may even become seizure-free. ID generally persists after seizure remission, making neurological abnormalities the main clinical issue in affected individuals. An effective treatment is lacking. In vitro studies using patient-derived induced pluripotent stem cells (iPSCs) reported accelerated neural differentiation as a major endophenotype associated with PCDH19 mutations. By using this in vitro model system, we show that accelerated in vitro neurogenesis is associated with a defect in the cell division plane at the neural progenitor stage. We also provide evidence that altered PCDH19 function affects proper mitotic spindle orientation. Our findings identify an altered equilibrium between symmetric versus asymmetric cell division as a previously unrecognized mechanism contributing to the pathogenesis of this rare epileptic encephalopathy.
Introduction
PCDH19-related epilepsy (MIM 300088) is a rare, genetic, drug-resistant developmental and epileptic encephalopathy, generally affecting females, and characterized by early-onset intractable seizures (9 months, on average). The disorder, first reported fifty years ago by Juberg and Hellman [1], represents one of the most common monogenic forms of epilepsy in the pediatric population [2][3][4][5], and recent molecular epidemiologic studies indicate PCDH19 as the second most clinically relevant gene in epilepsy after SCN1A [6-9]. The disease is characterized by a wide phenotypic spectrum, ranging from benign focal epilepsy with normal cognitive function to severe generalized or multifocal epilepsy resembling Dravet syndrome, but with a more favorable prognosis [6,10,11].
PCDH19 is localized on the long arm of chromosome X (Xq22.3) and encodes a membrane calcium-dependent cell–cell adhesion glycoprotein of the protocadherin family [11][12][13]. PCDH19 is characterized by six extracellular cadherin repeats with conserved calcium-binding sequences, a transmembrane domain, and an intracellular region with two conserved motifs (CM1 and CM2) at the C terminus [13]. Like the classical cadherins, the main function of protocadherins is to enhance cell aggregation in a homophilic fashion. Dissecting human diseases is limited by the unavailability of relevant human tissues; somatic cell reprogramming overcomes this limitation by allowing the derivation of patient-specific iPSCs, which can be differentiated into disease-pertinent cell types and used as an informative model system before, during, and following differentiation.
Here, we generated iPSCs reprogrammed from patient fibroblasts as an in vitro model for PCDH19-CE. Specifically, by modeling cortical neurons derived from iPSCs obtained from a male patient with a postzygotic pathogenic variant in PCDH19, we show that the accelerated differentiation of PCDH19-mutated iPSCs is associated with an altered orientation of cell division. We also provide evidence that cells expressing a mutated PCDH19 allele have an aberrant mitotic spindle.
Derivation of iPSCs
The studies were conducted in compliance with the Code of Ethics of the World Medical Association (Declaration of Helsinki), and with national legislation and institutional guidelines (local institutional ethical committee, Ref. 1702_OPBG_2018, date of approval 11 February 2019). PCDH19mut iPSCs were derived from primary skin fibroblasts of an affected male individual (c.1352C>T, p.Pro451Leu; [25]), and control (CTRL) iPSCs were derived from a male, age-matched, healthy individual. An additional CTRL iPSC line was purchased from System Biosciences. All experiments were performed using the two control lines. Since no difference was observed between the CTRL lines in any experiment, the figures report only one CTRL iPSC line.
The first iPSC colonies appeared at around day 18, and the Essential 8 medium was changed to mTeSR1 (85850, STEMCELL Technologies, Vancouver, BC, Canada). iPSC colonies were picked under a sterile hood with an EVOS microscopy system (Thermo Fisher Scientific, Waltham, MA, USA), transferred into a 24-well plate pre-coated with Matrigel, and cultured in mTeSR1 medium for expansion. The criteria used to select the best cell clones were based on colony morphology (characterized by rounded and sharp colony edges).
Alkaline Phosphatase Assay
Alkaline phosphatase (ALP) staining was performed, following the manufacturer's instructions (86R-1, Merck KGaA, Darmstadt, Germany). Cells were incubated at RT for 30 min with a solution based on naphthol AS-BI and fast red violet LB (86R-1KT, Sigma Aldrich, St. Louis, MO, USA). The cells were photographed using a Leica DM1000 (Leica Microsystems, Wetzlar, Germany) equipped with Leica LAS X software (Leica Microsystems, Wetzlar, Germany).
Genome Integrity Assay
DNA extraction was performed using NucleoSpin Tissue (740952, Macherey-Nagel, Düren, Germany) following the user's manual. To perform a genetic analysis of the obtained iPSC clones, we used a qPCR-based kit (hPSC Genetic Analysis Kit 07550, STEMCELL Technologies, Vancouver, BC, Canada). The assay is able to detect the most common karyotypic abnormalities reported in human iPSCs (Chr 1q, Chr 4p, Chr 8q, Chr 10p, Chr 12p, Chr 17q, Chr 18q, Chr 20q, Chr Xp). The assay uses a double-quenched probe with a 5-carboxyfluorescein (5-FAM) dye and was used following the manufacturer's instructions. Data were analyzed with the supplier's Genetic Analysis Application (STEMCELL Technologies, Vancouver, BC, Canada).
Maintenance of iPSCs
iPSCs were grown in feeder-free conditions using Matrigel in mTeSR1. When the iPSCs were 70-80% confluent, they were passaged 1:4, transferred to new wells, and incubated at 37 °C, 5% CO2; the medium was changed every day and the cells were split every three days.
Brain Organoids Generation, Culturing, and Analysis
Brain organoids were generated from iPSCs using culture media for the establishment of human iPSC-derived cerebral organoids (STEMDIFF Cerebral Organoid Kit 08570, STEMCELL Technologies, Vancouver, BC, Canada) based on the formulation published in [36,37]. Cerebral organoid formation was initiated through an intermediate embryoid body (EB) formation step, followed by the induction and expansion of neuroepithelia. iPSCs were seeded at 9000 cells/well in a 96-well, ultra-low attachment plate (7007, Corning, New York, NY, USA) in EB formation medium. On the 5th day, the medium was changed to brain organoid induction medium. On the 7th day, each organoid was embedded into a Matrigel dome at the center of the well in a 24-well plate (FALCON), in the brain organoid expansion medium. From the 10th day, the brain organoids were maintained in maturation medium. For imaging, the IncuCyte S3 time-lapse microscopy system (Sartorius; Essen BioScience, Ann Arbor, MI, USA) was used to image the wells every 6 h. Imaging was performed for 8 days (from the 8th to the 16th day) at 37 °C. Phase images were acquired for every experiment. Analysis parameters for software module processing definitions were optimized individually for each experiment according to the workflow outlined in the manufacturer's manual. The optimized processing definitions were subsequently used for image analysis, focusing on the organoid object total area parameter. Microplate graphs were generated using the time plot feature in the graph/export menu of the IncuCyte software. The raw data of organoid growth were exported to Microsoft Excel, and GraphPad Prism was used to calculate mean values ± SEM and perform ad hoc statistical analyses.

Immunofluorescence

Primary antibodies were diluted in blocking solution and incubated for at least 1 h at RT. Prior to and following the 1 h incubation period with the corresponding secondary antibody (Alexa Fluor, 1:500, Thermo Fisher Scientific, Waltham, MA, USA), cells were washed two times in PBS, for 5 min each. Nuclei were counter-stained with Hoechst 33342 (1:10,000, H3570, Merck KGaA, Darmstadt, Germany) and coverslips were mounted using PBS/glycerol (1:1). All images were acquired using the Leica SP8 confocal microscope (Leica Microsystems, Wetzlar, Germany) in combination with the LAS-X software (Leica Microsystems, Wetzlar, Germany). Confocal z-stacks were imported into LAS-X 3D software (Leica Microsystems, Wetzlar, Germany) to obtain their 3D surface rendering. Metaphase counts were performed using NDP.view2 image viewing software (Hamamatsu Photonics, Hamamatsu City, Shizuoka, Japan) on immunofluorescence for βIII-tubulin. Stained samples were previously acquired over their entire surface with a NanoZoomer S60 digital scanner (Hamamatsu Photonics, Hamamatsu City, Shizuoka, Japan).
Neurite Length Assay
Cortical neurons were plated at a density of 5000 cells/well in a 96-well plate (92696, TPP) pre-coated with Matrigel, and neurite length was measured using the Incucyte System (Essen BioScience, Ann Arbor, MI, USA) with the Neurite Analysis application for Neurolight-labeled cells. The cells were infected with a lentiviral-based vector encoding an orange fluorescent protein (Incucyte Neurolight Lentivirus 4807, Essen BioScience, Ann Arbor, MI, USA). Well plates were imaged every 2 h in the IncuCyte S3 time-lapse microscopy system (Essen BioScience, Ann Arbor, MI, USA). Imaging was performed for 3 days (from the 10th to the 13th day of neural differentiation) and for 6 days at the end of the differentiation (from the 35th to the 41st day) at 37 °C. Phase-contrast and fluorescent images were acquired for every experiment. Analysis parameters for NeuroTrack software module processing definitions were optimized individually for each experiment according to the workflow outlined in the manufacturer's manual. The optimized processing definitions were subsequently used for real-time image analysis. Microplate graphs were generated using the time plot feature in the graph/export menu of the IncuCyte Zoom software (Essen BioScience, Ann Arbor, MI, USA). Raw neurite length data were exported to Microsoft Excel and GraphPad Prism to calculate mean values ± SEM and perform ad hoc statistical analyses.
RNA Isolation and Reverse Transcriptase-Polymerase Chain Reaction Analysis
Total RNA was extracted from iPSCs or from iPSC-derived cortical neurons with the single-step acid phenol method using TRIzol (15596018; Thermo Fisher Scientific, Waltham, MA, USA) according to the manufacturer's instructions. Each RNA sample was treated with recombinant DNase I (AM2235, Thermo Fisher Scientific, Waltham, MA, USA) and quantified by NanoDrop 2000 (Thermo Fisher Scientific Life Sciences, Waltham, MA, USA). The reverse transcription reaction was performed in 20 µL, starting from 1 µg of total RNA, and cDNA was generated using the ImProm-II Reverse Transcription System (A3800; Promega, Madison, WI, USA) or Superscript II reverse transcriptase (18064; Thermo Fisher Scientific, Waltham, MA, USA) with random hexamers. Three independent reverse-transcription quantitative real-time PCR (RT-qPCR) reactions were performed for each sample.
Quantitative Real-Time PCR
Total RNA (0.2-1 µg) from iPSCs or from iPSCs differentiating into cortical neurons was used for RT-qPCR using M-MLV reverse transcriptase (Thermo Fisher Scientific, Waltham, MA, USA). Five percent of the reaction was used as template, together with the primers specific to the analyzed list of genes (Table 1). qPCR analysis was performed using Power SYBR Green PCR Master Mix (4367659, Applied Biosystems, Waltham, MA, USA) and the 7900HT Fast Real-Time PCR System (Applied Biosystems, Waltham, MA, USA), according to the manufacturer's instructions. The ∆∆Ct method was used to calculate the fold change in gene expression: for each sample, ∆Ct values were obtained by normalizing the Ct value of the gene of interest to that of the housekeeping gene (TBP), and fold increase versus the control sample was calculated with the 2^−∆∆Ct formula (a minimal numerical sketch of this calculation is given after Table 1). Expression levels were represented in arbitrary units, calculated as a relative fold increase compared with the control sample, which was arbitrarily set to 1. Quantitative RT-PCRs were repeated in triplicate from at least two independent experiments.

Table 1. Primers used for RT-qPCR experiments. The annealing temperature for all primers is 62 °C.
(Table 1 columns: gene, forward primer, reverse primer; primer sequences omitted.)
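To make the quantification step concrete, below is a minimal numerical sketch of the 2^−∆∆Ct calculation described above, assuming TBP as the housekeeping gene (as in the study); the Ct values and the helper function name are invented for illustration.

```python
# Minimal sketch of the delta-delta-Ct fold-change calculation.
# Ct values below are invented for illustration; TBP is the
# housekeeping gene used in the study.
def fold_change(ct_gene_sample, ct_tbp_sample, ct_gene_ctrl, ct_tbp_ctrl):
    """Return the 2^-(ddCt) fold change of a gene versus the control sample."""
    d_ct_sample = ct_gene_sample - ct_tbp_sample   # normalize to TBP
    d_ct_ctrl = ct_gene_ctrl - ct_tbp_ctrl
    dd_ct = d_ct_sample - d_ct_ctrl
    return 2 ** (-dd_ct)

# Example: a gene in a PCDH19mut sample vs. the CTRL sample
# (the CTRL fold change is 1 by construction).
print(fold_change(ct_gene_sample=24.0, ct_tbp_sample=21.0,
                  ct_gene_ctrl=25.5, ct_tbp_ctrl=21.5))  # 2.0 (two-fold increase)
```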
PCDH19 mRNA Silencing
To silence PCDH19, a heterogeneous mixture of small interfering RNAs (siRNAs) was used to obtain highly specific and effective gene silencing with a low risk of off-target effects. In particular, 5 × 10^5 CTRL iPSCs were electroporated with 500 ng of endoribonuclease-prepared siRNA (esiRNA) for human PCDH19 (MISSION esiRNA EHU032881, Sigma Aldrich, St. Louis, MO, USA), using the Nucleofection kit P2 solution with the 4D-Nucleofector System. Cells were plated in 6-well plates containing cover glasses pre-coated with Matrigel. Twenty-four hours after nucleofection, cover glasses were fixed for an immunofluorescence assay with an anti-PCDH19 antibody (HPA001461, Sigma Aldrich, St. Louis, MO, USA), and the remaining cells were collected for protein extraction for Western blot analyses.
Calcium Imaging
Calcium studies were carried out by plating cells on glass-bottom microwell dishes (81156, Ibidi, Martinsried, Planegg, Germany) and incubating them in Tyrode's solution (in mM: 129 NaCl, 5 KCl, 2 CaCl2, 1 MgCl2, 25 HEPES, 30 glucose, pH 7.4) supplemented with 5 mM Fluo-4 (F10489, Thermo Fisher Scientific, Waltham, MA, USA) for 15 min at RT and 5% CO2 in the dark. Fluorescence microscopy was performed using a Leica SP8X resonant scanner confocal microscope, acquiring time-series frames at 2 frames/s. After a baseline interval, ionomycin (I24222, Thermo Fisher Scientific, Waltham, MA, USA) diluted in Tyrode's solution was added (adapted from [38]). Following the addition of ionomycin (20 µM), the maximum peak in fluorescence was recorded, and 30 s later EGTA (60 mM; SLBR7504V, Sigma Aldrich, St. Louis, MO, USA) was supplemented to the medium to measure the minimum fluorescence intensity. For each biological replicate, 10-20 cells were measured. Traces in the graphs represent the normalized average fluorescence intensity change over time. For quantification, the area under the curve (AUC) of the whole Fluo-4 fluorescence peak was determined using GraphPad Prism (San Diego, CA, USA).
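As an illustration of the quantification, the following sketch computes a baseline-normalized trace and its AUC with NumPy; the trace, sampling rate, and baseline window are invented, and the study itself performed this step in GraphPad Prism.

```python
import numpy as np

# Synthetic Fluo-4 trace sampled at 2 frames/s: a flat baseline with an
# ionomycin-evoked peak. All values are invented for illustration.
fs = 2.0                                   # frames per second
t = np.arange(0, 300, 1 / fs)              # a 5 min recording
trace = 100 + 400 * np.exp(-((t - 200) ** 2) / 800)

# Normalize to baseline fluorescence (deltaF/F0), using a pre-stimulus window.
f0 = trace[t < 100].mean()
dff = (trace - f0) / f0

# Trapezoidal area under the whole fluorescence peak, analogous to the
# AUC the authors computed in GraphPad Prism.
auc = float(np.sum((dff[1:] + dff[:-1]) / 2) * (1 / fs))
print(f"F0 = {f0:.1f}, peak dF/F = {dff.max():.2f}, AUC = {auc:.1f}")
```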
Statistical Analyses
Data were expressed as the mean and standard error of the mean (mean ± SEM), where the normality of the distribution could be verified. For all experiments, multiple technical and biological replicates were utilized (the number of biological replicates is indicated with n). Detailed information regarding the number of replicates for each experiment can be found in the respective figure legend. Where n = 3, we verified that the statistical test could be applied and that the distribution of the data could be assessed. Significance was assessed using parametric tests (Student's t test, ANOVA) for normally distributed data and non-parametric tests (Mann-Whitney U test, Kruskal-Wallis) when a normal distribution could not be verified. A p value < 0.05 was considered to indicate significance. Data were analyzed using GraphPad Prism software (Prism 8.0.2, GraphPad Software, San Diego, CA, USA) and Microsoft Excel (Microsoft, Redmond, WA, USA).
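A minimal sketch of this decision rule using SciPy is shown below; the two samples are invented, and the use of the Shapiro-Wilk test for the normality check is an assumption, since the paper does not name the normality test it applied.

```python
from scipy import stats

# Invented measurements for two groups (e.g., CTRL vs. PCDH19mut).
ctrl = [9.8, 10.4, 9.1, 10.9, 9.6]
mut = [12.3, 13.1, 11.8, 12.9, 13.4]

# Use a parametric test if both samples look normal, otherwise fall
# back to the non-parametric alternative, as described above.
normal = (stats.shapiro(ctrl).pvalue > 0.05 and
          stats.shapiro(mut).pvalue > 0.05)
if normal:
    result = stats.ttest_ind(ctrl, mut)      # Student's t test
else:
    result = stats.mannwhitneyu(ctrl, mut)   # Mann-Whitney U test
print(f"parametric={normal}, p={result.pvalue:.4f}")
```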
Derivation of PCDH19-CE iPSCs
Three pluripotent iPSC clones were obtained from primary skin fibroblasts of an affected mosaic male individual (c.1352C>T, p.Pro451Leu; [26]). Sanger sequencing confirmed the occurrence of the hemizygous PCDH19 variant after reprogramming (Figure S1a) and excluded the occurrence of any functionally relevant variant throughout the PCDH19-coding sequence in control iPSCs. Clones were selected based on ESC-like morphology (i.e., rounded colonies with defined edges) (Figure 1a), and pluripotency was confirmed by an enzymatic ALP assay (Figure 1b), positivity for pluripotency markers (Figure S1b), and differentiation into lineages belonging to the three germ layers (endoderm, mesoderm, and ectoderm).
Immunofluorescence assays for the pluripotency markers OCT4, SOX2, SSEA4, and TRA1-60 validated the full reprogramming of the colonies (Figure 1d,e). The genomic integrity of the selected clones was verified by qPCR assay to exclude the structural rearrangements that frequently occur during reprogramming (Figure 1c). Finally, the ability of the lines to differentiate into cells belonging to the three germ layers was verified by assessing the protein expression of endoderm (SOX17), mesoderm (TBXT), and ectoderm (NCAM) markers by immunostaining (Figure 2a,b), confirming their pluripotency potential.
PCDH19-iPSCs Show Accelerated Differentiation In Vitro
To investigate the role of PCDH19 during neurogenesis, iPSCs were differentiated into cortical neurons. The neuronal differentiation of the generated iPSC lines was assessed in vitro by immunofluorescence analysis. Since all iPSC clones obtained from the mosaic male patient were hemizygous for the PCDH19 variant, CTRL and PCDH19mut iPSCs were cultured as a mixed culture (1:1 ratio) to recreate the mosaic condition. Immunofluorescence for the neuronal marker βIII-tubulin showed an increased number of neurons in the mixed culture compared to what was observed in the individual lines at each timepoint (Figure 3a). The quantification of βIII-tubulin-positive cells confirmed that, at the end of differentiation (T30), the mixed culture had a significantly increased signal compared to the parental cultures (Figure 3b). These findings were confirmed by independent experiments assessing the expression levels of TUBB3 mRNA by RT-qPCR assays, which consistently showed increased expression levels in mixed cultures (although not statistically significant, Figure 3c).
Since an accelerated differentiation was observed in patient-derived neuronal cultures, morphological analyses of the obtained neurons were performed. To this end, we monitored neurite growth at the stage of neural rosette formation, when the first differences among the neural cultures appear, through live cell imaging analyses of neurite length from the 10th to the 13th day. Neurite length, measured using the Incucyte System (Essen BioScience, Ann Arbor, MI, USA), documented that PCDH19mut and mixed neurons had significantly longer neurites when compared to CTRL neurons (**** p < 0.0001 and * p < 0.05, respectively) (Figure 4a). To accomplish a detailed analysis of neurites, we monitored neurite growth at the end of neuronal differentiation and quantified neurite length by performing a live cell imaging experiment from the 35th to the 41st day, documenting that length was significantly increased in PCDH19mut and mixed neurons compared to CTRL neurons (Figure 4b).

(Figure 3c legend: bar graph showing RT-qPCR analyses for TUBB3 in PCDH19mut and mixed-culture-derived cortical neurons compared to CTRL cortical neurons; data are presented as mean ± SEM, normalized to control, n = 3; * p < 0.05, ordinary one-way ANOVA parametric test. Figure 4 legend: data are presented as mean ± SEM, n = 3; **** p < 0.0001, ordinary one-way ANOVA parametric test; (c) graphical representation of intracellular Ca2+ flux in CTRL, PCDH19mut and mixed cortical neurons, before (basal level), following stimulation with 20 µM ionomycin (at 3 min, black arrow) and following addition of EGTA to the medium (3'30"); four biological replicates (n = 4), 10 cells analyzed per replicate.)
To evaluate neuronal activity, we measured intracellular calcium (Ca2+) levels, which are known to play a fundamental role in synaptic activity. In basal conditions, Ca2+ levels were lower in PCDH19mut cortical neurons and, following ionomycin, the maximal peak in intracellular calcium was decreased compared to CTRL. We also observed a spontaneous influx of Ca2+ in PCDH19mut neurons, even before ionomycin stimulation. However, in the mixed culture, Ca2+ influx was similar to that of the CTRL culture (Figure 4c).
PCDH19-iPSCs Present an Altered Neural Rosette Organization
The rosette morphology and organization were analyzed by performing immunofluorescence assays using anti-PCDH19 and anti-βIII-tubulin antibodies. In CTRL cultures, the neural rosettes appeared around the 25th day of in vitro cortical neuronal differentiation, while they were distinguishable earlier (around the 10th day) in PCDH19mut cultures, showing a disorganized structure with reduced lumen. In the mixed CTRL + PCDH19mut culture, rosettes were not easily distinguishable, due to the high number of differentiated neuronal cells (Figure 5a).
To understand whether this defective organization was due to an enhanced predisposition of the patient's iPSCs toward neuronal differentiation, we performed RT-PCR analyses to quantify the levels of genes differentially expressed during neurogenesis. Specifically, we investigated the expression levels of NCAD, as a marker of neuronal cells, and of microtubule-associated protein 2 (MAP2) and TUBB3 (encoding βIII-tubulin), as markers of mature neurons. Firstly, we evaluated NCAD levels to assess the predisposition of iPSCs to spontaneously differentiate into neuronal cells. In the patient's iPSCs, cultured with and without CTRL iPSCs, the expression levels of this neuronal cadherin were increased compared to CTRL iPSCs, suggesting a predisposition towards neural differentiation (Figure S2). In support of this observation, the levels of MAP2 and TUBB3 mRNA were increased in PCDH19mut iPSCs compared to CTRL iPSCs, especially when cultured in the mosaic condition, indicating that the patient's iPSCs expressed genes involved in neuronal differentiation even without the induction of neurogenesis with defined culture media and specific growth factors (Figure S2).
In vivo neurogenesis is tightly regulated by the fine equilibrium between the symmetric and asymmetric cell division of neural progenitors; at this stage of differentiation, the orientation of the mitotic spindle of dividing progenitor cells drives the fate of the daughter cells [31]. Specifically, cells that undergo symmetric division, with the mitotic spindle oriented perpendicular to the axis toward the center of the rosette, will give rise to two daughter progenitor cells. On the contrary, cells undergoing asymmetric division will generate a neural progenitor and a cell that will undertake neuronal differentiation, migrating away from the lumen of the rosette and resembling the migrating neurons that populate the layers of the cerebral cortex. For this reason, we analyzed and estimated the number of symmetric vs. asymmetric divisions close to the center of the neural rosettes and documented a significant increase in the percentage of asymmetric divisions in PCDH19mut and in mixed cultures compared to CTRL cultures (Figure 5b). 3D imaging allowed for appreciation of the asymmetric positioning of dividing cells in PCDH19mut and in mixed cultures, while those of CTRL cultures were dividing symmetrically.
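The paper does not spell out how symmetric and asymmetric divisions were scored, so the following is only a plausible geometric sketch, assuming 2D coordinates for the two spindle poles and the rosette center; the function name, angular threshold, and coordinates are all invented for illustration.

```python
import math

def division_type(spindle_p1, spindle_p2, rosette_center, threshold_deg=30.0):
    """Classify a division as symmetric or asymmetric from 2D coordinates.

    The spindle axis runs between the two spindle poles; the reference
    axis points from the division site toward the rosette center. A
    spindle roughly perpendicular to this radial axis keeps both
    daughters at the lumen (symmetric); a spindle aligned with it sends
    one daughter away from the lumen (asymmetric).
    """
    mx = ((spindle_p1[0] + spindle_p2[0]) / 2, (spindle_p1[1] + spindle_p2[1]) / 2)
    spindle = (spindle_p2[0] - spindle_p1[0], spindle_p2[1] - spindle_p1[1])
    radial = (rosette_center[0] - mx[0], rosette_center[1] - mx[1])
    dot = spindle[0] * radial[0] + spindle[1] * radial[1]
    norm = math.hypot(*spindle) * math.hypot(*radial)
    # Angle between the spindle axis and the radial axis, in [0, 90] degrees.
    angle = math.degrees(math.acos(max(-1.0, min(1.0, abs(dot) / norm))))
    return "symmetric" if angle > 90.0 - threshold_deg else "asymmetric"

# Example: a spindle perpendicular to the radial direction -> symmetric.
print(division_type((0, -1), (0, 1), (10, 0)))
```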
Mutated PCDH19 Affects Mitotic Spindle Formation
Since the precocious differentiation of neurons is often related to alterations in the plane of cell division, we decided to investigate the effect of mutated PCDH19 on the mitotic spindle. Importantly, we observed altered mitotic structures in the patient's and mixed iPSC populations. To further examine the impact of defective PCDH19 function on mitotic spindle organization, confocal analysis was directed to assess the possible cooperation between PCDH19 and a protein involved in the nucleation and polar orientation of microtubules. Therefore, we performed immunostaining assays for PCDH19 and γ-tubulin, which revealed the colocalization of these proteins (Figure 6a).
To evaluate possible perturbations of mitosis and microtubule spindle configuration in iPSCs derived from the affected subject, we performed immunostaining experiments using anti-γ- and anti-β-tubulin to reveal the mitotic spindle in dividing cells. We observed that proliferating PCDH19mut iPSCs cultured with CTRL iPSCs presented multiple mitotic spindle structures during cell division (Figure 6b). Altered metaphases accounted for 17% of the total metaphases in the mixed culture, a significantly higher proportion compared to that observed in PCDH19mut iPSCs (7%) and CTRL iPSCs (4%) (Figure 6c). These results are consistent with the role of PCDH19 in regulating the plane of cell division, a process of great relevance during neurogenesis.
Silencing of PCDH19 mRNA Alters Cell Division
To further understand the effect of PCDH19 loss of function, we performed Western blot assays to assess the reduction of the PCDH19 protein in silenced iPSCs. PCDH19 silencing was attained using a heterogeneous mixture of siRNAs targeting the PCDH19 mRNA sequence (specifically, from exon 3 to 6) in CTRL iPSCs plated at 500,000 cells. PCDH19 levels, detected using the anti-PCDH19 antibody, were reduced by 65.3% in silenced cells compared with CTRL cells (Figure 6d).
While documenting reduced levels of PCDH19 in immunofluorescence assays, we observed that the silenced cultures presented cells with altered mitotic spindle organization. To further investigate this aspect, the organization of the mitotic spindle was analyzed in dividing cells after PCDH19 silencing, which documented the presence of multipolar and aberrant mitotic spindle structures (Figure 6e). Data quantification showed that altered metaphases were significantly increased in silenced cells (esiRNA-PCDH19) when compared with those nucleofected with an empty vector (Mock) (Figure 6f).
To evaluate if PCDH19 loss of function leads to altered metaphases with centrosome hyperamplification, we immunostained iPSCs with an anti-centriolin antibody. Confocal images showed that PCDH19 colocalized not only with γ-tubulin but also with centriolin in CTRL, patient's and mixed iPSCs (Figure S3a).
PCDH19mut iPSCs-Derived Brain Organoid Growth Is Decreased
To further model the PCDH19 neural phenotypes, we generated 3D cultures of brain organoids, and their size was assessed as a function of days in vitro (Figure 7a). At day 8, the total brain organoid area was indistinguishable among the CTRL, PCDH19mut and mixed conditions. However, by day 16, affected organoids (PCDH19mut and CTRL + PCDH19mut) were noticeably smaller than CTRL organoids (Figure 7b).

Figure 7. Brain organoids derived from CTRL, PCDH19mut and mixed iPSCs. (a) Brightfield images showing brain organoids at early stages of formation (from 8 to 13 days in vitro); (b) graph representing the analysis of brain organoid size from the 8th day to the 16th day. Three organoids for each sample were analyzed. Data are presented as mean ± SEM, n = 3. ** p < 0.005, according to ordinary one-way ANOVA parametric test.
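As a sketch of the summary step behind Figure 7b (computing mean ± SEM of organoid area per group and timepoint, which the authors did in GraphPad Prism), here is a pandas version; the column names and area values are invented for illustration.

```python
import pandas as pd

# Hypothetical IncuCyte export: one row per organoid per timepoint
# (three organoids per group, as in the study). Values are invented.
rows = []
areas = {
    ("CTRL", 8): [2.0e5, 2.2e5, 2.1e5],
    ("CTRL", 16): [9.1e5, 9.8e5, 9.4e5],
    ("PCDH19mut", 8): [2.1e5, 1.9e5, 2.0e5],
    ("PCDH19mut", 16): [5.8e5, 6.3e5, 6.0e5],
}
for (group, day), values in areas.items():
    rows += [{"group": group, "day": day, "area_um2": a} for a in values]
data = pd.DataFrame(rows)

# Mean +/- SEM of organoid area per group and timepoint, mirroring the
# summary statistics computed in GraphPad Prism.
summary = (data.groupby(["group", "day"])["area_um2"]
               .agg(mean="mean", sem="sem")
               .reset_index())
print(summary)
```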
Discussion
PCDH19 encodes a cell-surface-exposed adhesion molecule belonging to non-clustered protocadherins, expressed predominantly in the developing and adult brain. It is involved in calcium-dependent cell-to-cell adhesion, and studies in mice highlighted the role of PCDH19 in determining cell adhesion affinities during cortical development [21]. Moreover, PCDH19 is involved in regulating neurogenesis, since its loss of function leads to impaired neuronal migration and the accelerated development of cortical neurons [18,19,22,23]. Mutations or partial deletion of PCDH19 lead to an X-linked form of childhood clustering epilepsy, which is usually resistant to antiepileptic drugs [39,40], with early onset seizures, initially associated with fever. Unlike classical X-linked diseases, this disorder affects females and mosaic males. Mutations mainly affect exon 1, and include missense and nonsense changes, and small frameshift indels, affecting highly conserved amino acids, all in the extracellular domain of the protein [4]. PCDH19 mutations are thought to cause a loss of function of the protein [6,10]. With more than 175 mutations reported to date, PCDH19 is now clinically considered as the second major disease gene implicated in epilepsy, after SCN1A [4]. In addition to PCDH19, the defective function of other δ-PCDHs has been associated with neurological disease. For example, PCDH10 mutations have been linked to autism [41,42], PCDH12 mutations have been associated with schizophrenia [42] and microcephaly and seizures [43], and PCDH17 mutations have been involved in the pathogenesis of schizophrenia [14]. These recent findings confirm a crucial role of nonclustered protocadherins in brain development.
PCDH19-CE is characterized by a variable phenotypic spectrum that ranges from benign focal epilepsy with normal intelligence to severe generalized/multifocal epilepsy resembling Dravet syndrome. Some individuals with autism-like behavioral problems have also been reported [2,7,[44][45][46]. Brain imaging with nuclear magnetic resonance is typically described as normal at the onset of disease, but recent reports highlight the presence of acquired microcephaly [47], as well as structural lesions such as cortical dysplasia, abnormal cortical sulcation, blurring of the grey-white matter interface, and clustering of dysplastic pyramidal neurons [19,39,48].
Since PCDH19-CE is poorly understood and an efficacious treatment is lacking, a better understanding of the pathophysiology of this disorder is required. A promising and informative in vitro model system is based on the modeling of cortical neuronal pathophysiology using patient-derived iPSCs. To achieve this goal, we reprogrammed the patient's fibroblasts into iPSCs. Multiple clones were characterized for their pluripotency and differentiated into cortical neurons. Our results demonstrate that iPSCs obtained from PCDH19 patients can undergo neurogenesis and are able to differentiate into cortical neurons, despite some alterations, as previously documented [22]. Differently from control iPSCs, whose neural rosettes are visible at day 15 of differentiation, neural rosettes appear earlier in PCDH19mut iPSCs (day 5). In line with an accelerated neurogenesis in PCDH19mut cultures, neurites showed an increased length when compared with CTRL neurons. To further characterize the model, a mixed iPSC population was cultured to obtain a mosaic condition, recapitulating symptomatic mosaic males and heterozygous females and allowing characterization of the neuronal phenotype associated with cellular interference, the pathogenetic mechanism believed to underlie PCDH19-CE [6]. By comparing the growth and differentiation of individual and mixed control and patient iPSC cultures, we showed that accelerated differentiation occurs in PCDH19mut iPSCs. Interestingly, neuronal differentiation was significantly accelerated in mixed cultures. In line with these data, we observed that, during the last days of neural differentiation, the neurite length of the mixed cultures was significantly increased compared to CTRL cultures. Overall, these data unveil that increased neurogenesis occurs earlier in PCDH19mut cultures (with the appearance of precocious neural rosettes and an increased neurite length) and is even more accelerated in the mixed cultures (as observed for the TUBB3 levels and neurite length).
Before becoming adult neurons, stem cells undergo several maturation steps, in which they completely change their transcriptome. To unveil the molecular phenotype of our iPSC model system, we focused on the expression of a set of genes that are expressed early during neurogenesis in vivo. NCAD is an adhesion glycoprotein expressed in neuroepithelial cells, with a critical role in determining neural progenitor cell fate, establishing whether they undergo symmetric or asymmetric division; interestingly, it cis-interacts with PCDH19 to reinforce cell adhesion [17,24]. Since NCAD is often used as a marker of the neuronal lineage, the fact that its expression is increased in PCDH19mut and mixed iPSC cultures indicates that these cells have a spontaneous tendency to differentiate toward neurons. This finding, in addition to the initial increase in neurite length and the early appearance of neural rosettes, suggests that the patient's iPSCs are more predisposed to neural differentiation. Consistently, mRNA analysis of MAP2 and TUBB3, as markers of mature neurons, revealed expression profiles in PCDH19mut and mixed iPSCs in line with an accelerated differentiation when compared to CTRL iPSCs. These results consistently support the role of PCDH19 loss of function in promoting neuronal differentiation, augmented in the mixed condition (thus supporting a non-cell-autonomous mechanism for this phenotype).
To investigate the functionality of the cortical neurons, we performed analyses of intracellular Ca2+ influx following ionomycin stimulation and observed that PCDH19mut cortical neurons show a spontaneous intracellular Ca2+ influx before ionomycin stimulation. These results suggest an increased excitability of PCDH19mut neurons and are in accordance with the increased excitability of hippocampal neurons in which PCDH19 is downregulated [49]. Surprisingly, the Ca2+ imaging profile of mixed neurons is similar to that of CTRL neurons, although, following the addition of EGTA, the decrease in intracellular Ca2+ is delayed. This unexpected finding requires further study.
Previous works used iPSCs as a model system to understand the localization of PCDH19 in stem cells and during cortical neurogenesis [50]. In iPSCs, PCDH19 is localized at one pole of the cell, and is possibly responsible for informing the position of one cell relative to the neighbouring cells. In addition, during cell division, PCDH19 is positioned at the two poles of the mitotic spindle, suggesting its involvement in the orientation of the spindle and the regulation of the type of cell division. Since in vivo neurogenesis is tightly regulated by the equilibrium between the symmetric and asymmetric cell division of neural progenitor cells (NPCs) in relation to their apical-basal polarity [32], and PCDH19 regulates this polarity [22], it is possible that PCDH19 mutations alter the equilibrium between symmetric and asymmetric division in a cell-autonomous manner, thus affecting neurogenesis. To focus on this, we investigated the orientation of mitoses in neural rosettes, structures that develop during cortical neuronal differentiation through the re-organization of the cells and are composed of NPCs positioned around a lumen, resembling the neural tube during in vivo neurogenesis. It is known that PCDH19 is highly expressed at the stage of neural rosettes and located at the centre of these structures, thus defining the proliferative zone [22,50]. We observed that the number of asymmetric divisions is significantly higher in PCDH19mut rosettes than in control and in mixed cultures (Figure 5b), suggesting that PCDH19 plays a role in informing the correct positioning of the mitotic spindle. PCDH19 dysfunction may, therefore, cause an imbalance between symmetric and asymmetric divisions in the proliferative zone of the neural rosette, leading to accelerated neural differentiation at the expense of the NPC pool. Consistent with this model, the authors of [51] recently showed that PCDH19 knock-down, through in utero intraventricular injection, results in a reduction in intermediate progenitor numbers in the developing mouse cortex. Moreover, an additional break-of-symmetry event, which is most likely what occurs in PCDH19mut and mixed cultures, leads to the appearance of the first neurite, thus expediting precocious neurogenesis.
Since PCDH19 is localized at the pole of the mitotic spindle in dividing cells [50] and its mutations alter the orientation of dividing cells during neurogenesis, we hypothesized that it interacts with proteins of the centrosome complex. To verify this hypothesis, we performed immunofluorescence assays for PCDH19, γ-tubulin, and centriolin in CTRL, PCDH19mut and mixed iPSC cultures. The colocalization of PCDH19 with both proteins of the centrosome was observed, and a significant alteration of mitotic spindle structures, with a consequent centrosome hyperamplification, was documented. To understand whether PCDH19 regulates the mitotic spindle formation, we also performed silencing experiments in CTRL iPSCs, documenting an increase in cells displaying multiple mitotic spindles, thus reinforcing the hypothesis that PCDH19 plays a role in controlling cell division.
Since in vitro neurogenesis was accelerated in mixed iPSC cultures, we decided to assess the effects of PCDH19 on brain development using a 3D in vitro model. To accomplish this, we generated iPSC-derived brain organoids and monitored their growth up to day 16. On the 8th day, the PCDH19mut organoid area was indistinguishable from that of CTRL organoids, but by the 13th day, the mutant organoids appeared smaller than CTRL organoids. In particular, an analysis of the total area showed that, from the 14th day, PCDH19mut organoid growth was significantly decreased. The fact that the WT cells show an initially slow proliferation, followed by a dramatic increase in organoid area after day 14, may reflect an initial period in which the cells need to reorganize their transcriptional machinery under the influence of specific growth factors before undergoing neural 3D differentiation, while the PCDH19mut and CTRL + PCDH19mut cells have an innate predisposition towards neural differentiation. These results suggest that brain organoids from PCDH19-CE patients are reduced in size, modeling the increased neurogenesis observed in 2D cultures. Further analyses are needed to better characterize the cellular structure of the brain organoids.
Together, our findings provide evidence that PCDH19 regulates neurogenesis by controlling mitotic spindle organization and that its mutations lead to accelerated in vitro differentiation, in line with previous studies [22]. The accelerated neurogenesis is further evinced by the increased mRNA levels of genes relevant for neural differentiation at the iPSC stage (which suggests an increased susceptibility of PCDH19mut cells to differentiate) and by the precocious appearance of rosettes. Of note, the increased percentage of asymmetric cell divisions of NPCs and the altered mitotic cells in mixed iPSCs likely represent the molecular events explaining the observed increased neural differentiation rate, leading to the observed decreased brain organoid size (Figure 8). In line with this hypothesis, mice with increased numbers of centrosomes present neural stem cell disorientation, decreased numbers of progenitor cells and premature neuronal differentiation [52]. In accordance with our findings, cortical dysplasia is a recurrent finding in PCDH19-CE [19,39,48], and acquired microcephaly has also been reported [48]. In the future, it would be important to deepen the morpho-functional phenotype of the brain organoids derived from PCDH19mut iPSCs using isogenic cell lines, to more precisely investigate the processes related to neuronal differentiation and function [36,53-55]. The generation and characterization of model systems is a required step for the development of effective pharmacological treatments. Among in vitro disease models, iPSC-derived neurons appear to be well-suited for use in drug-screening strategies aiming to develop targeted therapeutic approaches, and represent an informative experimental tool to understand pathogenesis. Here, we characterized the neuronal phenotype of PCDH19-CE in terms of cell morphology, intracellular Ca2+ flux, cell division and organoid formation/organization, and the collected data suggest that drugs acting on microtubule polymerization dynamics might be considered to slow down the cell cycle and counteract the premature differentiation of neural progenitor cells.

(Figure 8 legend, partial: schematic of a neural rosette showing, relative to the center of the rosette (CR) (corresponding to the basal zone), post-mitotic neurons (in green) migrating away from the lumen and the radial glia (or progenitor cells) toward the apical zone. Boxes show the orientation of the metaphase plane in CTRL, PCDH19mut and mixed (CTRL + PCDH19mut) dividing cells. In CTRL rosettes, two cells are dividing, one symmetrically and the other asymmetrically, generating two neural progenitor cells and one progenitor plus one post-mitotic neuron, respectively (in total, three progenitors and one neuron). In PCDH19mut rosettes, two cells undergo asymmetric division, each generating one progenitor and one post-mitotic neuron (in total, two progenitor cells and two neurons). Mutated PCDH19 is responsible for a "cell-autonomous" effect, which induces an increase in asymmetric divisions, resulting in increased neurogenesis and decreased brain organoid size. In mixed iPSC-derived rosettes, two cells divide forming a multiple mitotic structure, resulting in increased neurogenesis (in total, two progenitor cells and four neurons).)
In addition to the cell-autonomous effect in PCDH19mut cells, the coexistence of mutated and wild-type PCDH19 results in a "non-cell-autonomous" effect that induces the formation of altered dividing cells, leading to an exacerbation of the increased neurogenesis phenotype.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/jcm10132754/s1. Figure S1: Characterization of PCDH19mut iPSCs. (a) Sanger sequencing in PCDH19mut iPSCs. The electropherogram shows the pathogenic mutation (c.1352C>T) in iPSC clones derived from a mosaic male patient. The arrow indicates the position of the mutation; (b) relative expression levels of OCT4 and SOX2 mRNA in PCDH19mut iPSCs compared to CTRL iPSCs. Data are presented as the mean ± SEM (normalized to control), n = 4. A one-way ANOVA test was used for statistical analysis of the obtained data. Figure S2: RT-qPCR results in CTRL, patient's and mixed iPSCs for several markers expressed during neurogenesis. Bar graphs show mRNA levels of NCAD, MAP2 and TUBB3. Data are normalized to control and presented as the mean ± SEM, n = 3. * p < 0.05, according to ordinary one-way ANOVA parametric test. Figure S3: (a) Confocal micrographs of immunofluorescence images for PCDH19 (red) and centriolin (green) in CTRL, PCDH19mut and mixed iPSCs. Scale bar = 10 µm; (b) confocal micrographs showing PCDH19 (red) and centriolin (green) colocalization during normal and altered mitoses in CTRL and silenced iPSCs, respectively. Scale bar = 5 µm; (c) confocal micrographs of immunofluorescence for γ-tubulin (red) and centriolin (green) showing normal and altered mitoses in CTRL and silenced iPSCs, respectively. Scale bar = 5 µm.

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Cost comparison of osteopathic manipulative treatment for patients with chronic low back pain
Abstract
Context: Chronic low back pain (cLBP) is the second leading cause of disability in the United States, with significant physical and financial implications. Development of a multifaceted treatment plan that is cost effective and optimizes patients' ability to function on a daily basis is critical. To date, there have been no published prospective studies comparing the cost of osteopathic manipulative treatment to that of standard care for patients with cLBP. Objectives: To contrast the cost for standard of care treatment (SCT) for cLBP with standard of care plus osteopathic manipulative treatment (SCT + OMT). Methods: This prospective, observational study was conducted over the course of 4 months with two groups of patients with a diagnosis of cLBP. Once consent was obtained, patients were assigned to the SCT or the SCT + OMT group based on the specialty practice of their physician. At enrollment and after 4 months of treatment, all patients in both groups completed two questionnaires: the 11-point pain intensity numerical rating scale (PI-NRS) and the Roland Morris Disability Questionnaire (RMDQ). Cost data were collected from the electronic medical record of each patient enrolled in the study. Chi-square tests for independence using Yates' correction for continuity (χ²Yates) were performed to compare the results for each group.
Results: There was a total of 146 patients: 71 (48.6%) in the SCT + OMT group and 75 (51.4%) in the SCT group. The results showed no significant differences between the mean total costs for the SCT + OMT ($831.48 ± $553.59) and SCT ($997.90 ± $1,053.22) groups. However, the utilization of interventional therapies (2; 2.8%) and radiology (4; 5.6%) services was significantly less for the SCT + OMT group than the utilization of interventional (31; 41.3%) and radiology (17; 22.7%) therapies was for the SCT group (p<0.001). Additionally, the patients in the SCT + OMT group were prescribed fewer opioid medications (15; 21.1%) than the SCT (37; 49.3%) patients (p<0.001). Patients in the SCT group were approximately 14.7 times more likely to have received interventional therapies than patients in the SCT + OMT group. Likewise, the patients in the SCT group were approximately four times more likely to have received radiological services. Paired t tests comparing the mean pre- and 4-month self-reported pain severity scores on the RMDQ for 68 SCT + OMT patients (9.91 ± 5.88 vs. 6.40 ± 5.24) and 66 SCT patients (11.44 ± 6.10 vs. 8.52 ± 6.14) found highly significant decreases in pain for both groups (p<0.001).
Conclusions: The mean total costs for the SCT and SCT + OMT patients were statistically comparable across 4 months of treatment. SCT + OMT was comparable to SCT alone in reducing pain and improving function in patients with chronic low back pain; however, there was less utilization of opioid analgesics, physical therapy, interventional therapies, radiologic, and diagnostic services for patients in the SCT + OMT group.
Chronic low back pain (cLBP) is characterized by pain lasting longer than 3 months [1] and is accompanied by physical disabilities and psychological distress [2]. CLBP is the second leading cause of disability in the United States, impacting more than 32 million Americans with an annual economic burden of $177 billion [3]; it is also now the leading cause of disability worldwide [4]. Disability and costs associated with cLBP are projected to increase in the future [4]. Although cLBP affects people of all incomes, age groups, and ethnicities [4], several features increase an individual's risk for low back pain, including being over 30 years of age, having a BMI ≥ 30, pregnancy, decreased physical activity, and psychosocial factors [5].
Considering the physical and financial implications of cLBP, development of an appropriate treatment plan is crucial. Treatment for low back pain can be multifaceted [6]. Pharmacologic therapy ranges from over-the-counter medications such as acetaminophen or ibuprofen to nonsteroidal anti-inflammatories, muscle relaxants, and opiates [7]. Nonpharmacologic treatments are also available, including physical therapy, acupuncture, exercise therapy, behavioral therapy, and osteopathic manipulative treatment (OMT) [8]. More invasive treatments such as steroid injections, facet injections, and epidural injections may also be utilized in the management of low back pain [8]. All of these treatment options have an associated cost, as well as an impact on patients' ability to function on a daily basis, but as of this writing, there were no previously published prospective studies comparing the treatment options for cost.
We designed this study to compare the differences in cost associated with standard of care treatment (SCT) vs. SCT + OMT in patients with cLBP. We hypothesized that the costs associated with SCT + OMT would be lower than costs related to SCT. A secondary outcome of this study was to examine utilization of treatments and the comparative clinical effectiveness.
Methods
This study was approved by the Institutional Review Boards at Rowan University School of Osteopathic Medicine (RowanSOM) and Michigan State University and funded by a grant from the American Osteopathic Association. It was registered at ClinicalTrials.gov (No. NCT03532230).
Sites
This study was conducted through RowanSOM in southern New Jersey, Pain Associates in southern New Jersey, and Michigan State University. The NeuroMusculoskeletal Institute at RowanSOM houses the clinical department of osteopathic manipulative medicine (OMM), was the site of SCT + OMT, and partnered with Pain Associates to provide SCT. Michigan State University has a clinical department of OMM that provided SCT + OMT and a pain management clinic that provided care for the SCT group. These two sites were selected to control for possible regional differences in treatments for back pain. The study was intended to encompass multiple sites across the country, and other sites were invited to participate; recruitment occurred at a biannual Educational Council of Osteopathic Principles meeting. The requirements for inclusion were an OMM clinic and a partnership with a local pain management clinic. Unfortunately, no other sites were able to meet the deadline for participation, so the sites in Michigan and New Jersey were the only participants.
Patients
Beginning on March 19, 2019, patients between the ages of 18 and 84 years who had been diagnosed with cLBP lasting more than 3 months (confirmed with ICD-10 codes M54.5, M54.16, G57.01, G57.02, M48.062, M47.816, M48.07, M54.41, and M54.42) and who were being treated at RowanSOM, Pain Associates, and Michigan State University were invited to participate in this prospective, observational study. Patients with a history or presence of diabetic neuropathy, congenital lumbar or sacral abnormalities, lumbar fracture, multiple myeloma, metastatic bone disease, spinal surgery, or low back pain lasting less than 3 months were excluded from the study. Patients were categorized into SCT or SCT + OMT based on the specialty certification of the physician treating the patient and that physician's management plan. Those in the SCT group were seen by physicians specializing in physical medicine and rehabilitation who do not provide osteopathic manipulative treatments to patients. Patients in the SCT + OMT group were seen by physicians specializing in physical medicine and rehabilitation (J.B., R.J.) who perform OMT regularly or who referred patients to colleagues in the same practice who specialized in OMT (D.C., J.B.).
The duration of the entire study was 18 months (June 2018-December 2019), but each patient only participated in the study for approximately 4 months. Written informed consent was obtained from each participant prior to the start of any study-related activities.
Treatment
All patients in both groups received SCT for cLBP, which may have included prescription medications (i.e., opiate pain relievers, nonsteroidal anti-inflammatories, and muscle relaxants); steroid injections; and referrals for imaging, physical therapy, psychotherapy, spinal cord stimulators, spinal pump insertions, and other therapies. The SCT + OMT group additionally received OMT for cLBP. No specific OMT techniques were required for the study, but data on the number of OMT sessions per patient were collected.
Outcomes measures
The 11-point Pain Intensity Numerical Rating Scale (PI-NRS) [9] and the Roland Morris Disability Questionnaire (RMDQ) [10] were administered at the time of enrollment and after 4 treatment months for both groups. These self-reported instruments were specifically developed to measure changes in chronic pain and have been widely used to investigate the severity of chronic back pain. Both instruments have demonstrated strong convergent validity with respect to other measures of back pain and have high test-retest reliabilities [11]. The PI-NRS rating scale ranges from 0 (no pain) to 10 (worst possible pain), whereas the RMDQ is scored by counting the number of items checked by the patient as "Yes." The primary outcome of this study was the mean total cost per patient from each group for 4 months of treatment. The cost data were collected from the electronic medical record of each patient enrolled in the study. Because the costs for comparable treatments varied with respect to different socioeconomic and geographical regions in the United States, the mean total healthcare cost per patient was calculated based on codes for 2018 Medicare fee schedules listed in the current procedural terminology (CPT) for office visits, OMT, interventional procedures, and radiological/diagnostic testing [12]. The costs were also standardized according to the 2018 Medicare fees for physical therapy sessions for the patients at both sites. The total morphine milligram equivalents for the various prescribed pain medications were calculated and summed to permit comparisons across the three centers, as well as at admission and after 4 months of treatment.
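For illustration, the summation of morphine milligram equivalents described above can be sketched as follows. This is a minimal example, not the study's actual pipeline: the conversion factors are drawn from published CDC oral MME tables, and the prescription format is hypothetical.

```python
# Illustrative MME summation; conversion factors follow published CDC oral
# MME tables (verify against current guidance before any clinical use).
MME_CONVERSION = {   # oral MME factor per mg of drug
    "morphine": 1.0,
    "oxycodone": 1.5,
    "hydrocodone": 1.0,
    "tramadol": 0.1,
}

def total_daily_mme(prescriptions):
    """Each prescription is (drug, mg_per_dose, doses_per_day)."""
    return sum(mg * n * MME_CONVERSION[drug] for drug, mg, n in prescriptions)

# Example: 10 mg oxycodone twice daily + 50 mg tramadol three times daily
print(total_daily_mme([("oxycodone", 10, 2), ("tramadol", 50, 3)]))  # 45.0
```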
Data analysis
Before performing any statistical analyses, the distributional characteristics of all variables were examined, and skewness indices were calculated for each continuous variable, such as age, morphine milligram equivalents, and mean cost (US dollars) for each type of treatment. The skewness indices ranged from a low of 0.26 for age to 5.70 for interventional therapies; the latter was especially high because one SCT patient was billed $52,700 for the insertion of a spinal pump after having been admitted to the study. Because the majority of the skewness indices were >2.0, square root and log10 transformations were employed not only to decrease the levels of skewness, but also to achieve or closely approximate normality, as confirmed by Kolmogorov-Smirnov (K-S) tests. The transformations reduced all of the skewness indices to <2.0, and even the K-S test value of 0.12 for interventional procedures was not significant (p<0.20), indicating that the transformed interventional costs were normally distributed. Therefore, the transformed cost data were considered appropriately distributed for parametric analyses, such as independent t tests. To determine whether the magnitudes of the mean decreases in pain differed between the two groups, two analyses of covariance (ANCOVA) were performed using baseline scores as the covariate and type of group as the main effect for both the RMDQ and PI-NRS scores.
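As a concrete illustration of these distributional checks, the following sketch computes skewness, applies the transformations, and runs a K-S test for normality. It uses SciPy on simulated cost-like data; the study did not publish its analysis code, so this is an assumed reconstruction of the general procedure rather than the authors' own.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
costs = rng.lognormal(mean=6.0, sigma=1.0, size=146)  # skewed, cost-like data

print("skewness (raw):", round(stats.skew(costs), 2))

# Square-root and log10 transformations, as described above
sqrt_costs = np.sqrt(costs)
log_costs = np.log10(costs + 1)  # +1 guards against log10(0) for zero costs

# K-S test against a standard normal after standardizing the transformed data
# (strictly, estimating the parameters first makes this a Lilliefors-type test)
z = (log_costs - log_costs.mean()) / log_costs.std(ddof=1)
d_stat, p = stats.kstest(z, "norm")
print(f"skewness (log10): {stats.skew(log_costs):.2f}, "
      f"K-S D = {d_stat:.3f}, p = {p:.3f}")
```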
Chi-square (χ²Yates) tests for independence using Yates' correction for continuity were performed to compare the percentage use of different treatments between groups, and phi (φ) correlations were calculated to estimate the effect sizes of the proportional differences. Before comparing the grand total mean treatment costs and the specific mean costs for each treatment modality, correlations of each treatment modality with sex and age were calculated to determine whether either or both variables had to be controlled for in subsequent statistical analyses.
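A minimal sketch of this pair of computations is shown below. The 2×2 table is reconstructed from the interventional-therapy percentages reported later in the Results (not taken from the study's raw data), and the phi coefficient is computed from the uncorrected chi-square, as is conventional.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Counts reconstructed from reported percentages (2.8% of 71; 41.3% of 75);
# rows = group, columns = received interventional therapy (yes, no)
table = np.array([[2, 69],    # SCT + OMT
                  [31, 44]])  # SCT

chi2_yates, p, dof, _ = chi2_contingency(table, correction=True)

# Phi effect size is conventionally based on the uncorrected chi-square
chi2_raw, _, _, _ = chi2_contingency(table, correction=False)
phi = np.sqrt(chi2_raw / table.sum())
print(f"chi2(Yates) = {chi2_yates:.2f}, p = {p:.4f}, phi = {phi:.2f}")
```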
Results
This study recruited 146 total patients: 71 patients (48.6%) in the SCT + OMT group and 75 patients (51.4%) in the SCT group (Table 1). The percentages of patients recruited to each group were comparable between the two sites in New Jersey (SCT + OMT, 35 [49.3%]) and Michigan (SCT + OMT, 37 [49.3%]). With respect to patient sex, the percentage of women in the SCT + OMT group (59; 83.1%) was significantly higher than in the SCT group (49; 65.3%; p<0.05). However, the φ correlation of sex with type of group was only 0.20, indicating a small effect size based on Cohen's interpretive guidelines [13], under which φ correlations of 0.10, 0.30, and 0.50 are described as small, medium, and large, respectively [11]. The mean (± standard deviation [SD]) age of the patients in the SCT + OMT group was 58.1 ± 13.0 years (range, 29-80 years), and the mean ± SD age of the patients in the SCT group was 53.77 ± 16.22 years (range, 23-83 years); these two mean ages were comparable (t[144] = 1.78). The mean difference in ages between the two groups was 4.35 years, which represented a medium effect size of 0.30.
The number of patients treated in each group is also shown in Table 1. Neither sex (r=0.08) nor age (r=0.08) was significantly correlated with mean grand total costs for the 145 patients for whom sex, age, and total cost data were available. Consequently, we elected not to control for sex or age in any of the subsequent mean cost comparisons. Table 2 lists the mean total costs for each of the seven treatment modalities (OMT, office visits, referrals, physical therapy, interventional therapies, radiology services, and medications) along with the total costs for all services incurred during the 4 months. Figure 1 shows the mean costs by group across different treatment modalities.
Transformed scores were employed in all statistical analyses, but raw means and standard deviations are listed in the text and tables to facilitate interpretation. The pretreatment and 4 month posttreatment means, SDs, and correlations of the RMDQ and PI-NRS scales are displayed in Table 3 for both groups. The RMDQ total scores decreased significantly from 11.44 ± 6.10 to 8.52 ± 6.14 for the SCT group (p<0.001) and from 9.91 ± 5.88 to 6.40 ± 5.24 for the SCT + OMT group over 4 months (p<0.001). The PI-NRS total scores only decreased for SCT patients, from 5.17 ± 2.35 to 4.61 ± 2.23 (p<0.05). The d statistic (0.64) given in Table 3 indicated that the RMDQ mean decrease for SCT + OMT patients represented a large effect size. Likewise, the d statistic of 0.53 for the mean decrease in the RMDQ total scores for SCT patients also reflected a large effect size, whereas the mean decreases in PI-NRS scores over time for both groups indicated small effect sizes of 0.23 for SCT + OMT patients and 0.25 for the SCT patients. The adjusted mean decreases in pain were comparable between groups for both scales (RMDQ: F[1, 131] = 2.50, partial η² = 0.02; PI-NRS: F[1, 129] = 0, partial η² = 0). Furthermore, the morphine milligram equivalent dose for each group was not significantly correlated with decreases in RMDQ (r=0.05) or PI-NRS (r=0.15) scores. Likewise, the number of OMT sessions performed on SCT + OMT patients was not significantly correlated with decreases in RMDQ (r=0.03) or PI-NRS (r=0.04) scores.
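For readers unfamiliar with ANCOVA, the baseline-adjusted comparison described above can be expressed as a linear model in which the posttreatment score is regressed on the baseline score plus a group indicator. The sketch below uses statsmodels with toy data; the column names are hypothetical, not the study's variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy data frame; in the study, baseline (pre) scores serve as the covariate
df = pd.DataFrame({
    "rmdq_pre":  [11, 9, 12, 8, 10, 9, 13, 7],
    "rmdq_post": [8, 6, 9, 5, 8, 7, 10, 4],
    "group":     ["SCT", "SCT+OMT"] * 4,
})

# ANCOVA as a linear model: post ~ pre + group; the group coefficient
# estimates the baseline-adjusted mean difference between groups
model = smf.ols("rmdq_post ~ rmdq_pre + C(group)", data=df).fit()
print(model.params)
```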
Discussion
The initial objective of this study was to compare the costs associated with standard of care treatment with or without OMT for cLBP. We hypothesized that the SCT + OMT group would have lower costs than the SCT group, but our results showed no significant differences in mean total cost. However, the utilization of interventional therapies and radiology services differed significantly between the groups. Only 2.8% of patients in the SCT + OMT group received interventional therapies as part of the treatment plan compared with 41.3% of patients in the SCT group. Additionally, only 5.6% of the SCT + OMT group had radiology services included in their treatment plan, as opposed to 22.7% of the SCT group. This difference may not be attributable to OMT itself, but might be related to the structural examination that preceded treatment. The osteopathic structural examination can inherently identify biomechanical and fascial dysfunctions contributing to low back pain that are not identifiable on standard neurologic and musculoskeletal examinations. During an osteopathic physical and structural examination, identifying somatic dysfunction in the absence of historical red flags or neurologic deficits affords the provider an opportunity to correct these dysfunctions and offer an immediate treatment modality to the patient. If the patient has an improvement or resolution of symptoms, both the provider and the patient may opt to assess the effects of OMT before pursuing radiology services or interventional therapies. This theory is also supported by previous research from Licciardone et al. [1] in a randomized, controlled trial. Radiological studies are not completely risk-free for the patient, as they result in radiation exposure. Lumbar spine lateral X-rays, for example, can have an effective dose of 0.38-0.5 mSv of radiation exposure [14]. Limiting a patient's exposure to unnecessary radiation is a critical consideration in quality patient care. Additionally, interventional therapies may be indicated for appropriate management of a patient's pain, but they are not entirely risk-free. Between 2000 and 2014, the utilization of lumbar transforaminal epidural steroid injections (L-TFESI) increased more than 600% per 100,000 Medicare patients [15]. L-TFESIs and lumbar facet injections can expose a patient to 0.24 and 0.1 mSv of radiation, respectively [16]. While OMT does have some risk of adverse responses such as muscle pain and soreness, it does not carry the radiation exposure or the adverse effects of interventional therapies. Our results also showed that osteopathic physicians are obtaining appropriate compensation for OMT, which explains the cost equivalence between the two groups. We hope this evidence will encourage more osteopathic physicians to continue incorporating OMT into the care of their patients regardless of their specialty.
Results of our study demonstrated that 49.3% of SCT patients were prescribed opioid medications for pain management compared with 21.1% of SCT + OMT patients. This indicates another important difference between study groups. Currently, in the United States, the opioid crisis has become the worst drug epidemic in history [17]. A large portion of the opioid abuse epidemic is related to exposure through prescription medications, which has risen dramatically since the 1980s [17]. Thus, appropriately prescribing opioids can help to reduce this issue [18]. In addition to the high potential for addiction, opioid medications also carry side effects like sedation, nausea, vomiting, delirium, myoclonus, and pruritus [18]. All patients taking opioids must be educated on proper bowel regimens to treat opioid-induced constipation [18]. The most severe and potentially fatal side effect is respiratory depression [18]. Crow et al. [19] indicated that providing OMT in addition to SCT for patients with cLBP could aid in decreasing patient exposure to opioids and associated adverse effects. Our study supports this finding. A previous randomized, controlled trial of 155 subjects by Andersson et al. [20] also demonstrated a statistically significant decrease (p<0.001) in nonsteroidal anti-inflammatory drug (NSAID) and muscle relaxant use in an OMT group. In the standard care group in that study, 54.3% of subjects received prescriptions for NSAIDs, whereas only 24.3% of subjects received prescriptions for NSAIDs in the OMT group [20]. Although that study did not show significant differences in pain improvement when spinal manipulation was compared with SCT for lower back pain management, Andersson et al. [20] indicated that spinal manipulation for lower back pain should be investigated more closely due to the reduction in medication usage.
Our SCT + OMT and SCT treatment groups showed similar decreases in pain. According to the RMDQ and PI-NRS, both groups reported comparable decreases in pain after 4 months, despite 49.3% of SCT patients and 21.1% of SCT + OMT patients having been prescribed opioids. The mean morphine milligram equivalent doses prescribed for both groups were comparable, but neither group's morphine milligram equivalent dose was significantly correlated with decreases in pain according to the RMDQ and PI-NRS. Decreases in pain could be associated with the number of OMT visits, but the number of OMT visits was not significantly related to decreases in pain according to the RMDQ and PI-NRS. Thus, our study demonstrated that OMT is an effective nonpharmaceutical treatment for reducing pain levels in patients with cLBP that does not carry the risks of opioids. Guidelines from the American Osteopathic Association (AOA) on OMT for patients with low back pain also supported the idea that OMT can effectively reduce chronic nonspecific low back pain [21].
In the most basic sense, OMT provides a biomechanical treatment for biomechanical problems (somatic dysfunctions), which in the case of cLBP can include innominate, sacral, and lumbar dysfunctions as well as functional leg length discrepancies. Osteopathic structural exams can diagnose myofascial and ligamentous asymmetries that cannot be detected in radiologic or laboratory studies. These asymmetries can contribute to symptoms of low back pain; moreover, opioids cannot correct the aforementioned biomechanical dysfunctions and myofascial asymmetries.
Providers who utilize OMT regularly can educate patients on the potential causes of nonmalignant low back pain, which offers patients another treatment modality aside from medications. The AOA Guidelines recommended that osteopathic physicians utilize OMT when there is a diagnosis of somatic dysfunction and other causes of lower back pain have been excluded or deemed doubtful [21]. Previous research from Von Korff et al. [22] supported the notion that physicians who educate their patients about conditions such as low back pain and encourage activity prescribe fewer medications compared with physicians who do not [22]. Gamber et al. [23] also demonstrated that OMT plus SCT can benefit patients with fibromyalgia, which is a diagnosis of exclusion. Patients in that study who received OMT and education along with their current medications showed more functional and affective improvements as well as increases in pain thresholds at various fibromyalgia tender points when compared with subjects who did not receive OMT [23]. Fibromyalgia treatment plans heavily rely on patient education and prudent use of appropriate analgesic medications, a notion that also concurs with the features of effective treatment plans for cLBP [20]. OMT does not cure low back pain and can require multiple treatments, sometimes on a monthly basis, but opioids also offer no cure and many require frequent dosing.
Limitations
One of the limitations of our study was the homogeneous population; another was the short follow-up. The majority of patients were women, middle-aged (between 40 and 60 years of age), and White. Further, we only followed them during the 4 months of treatment for cLBP. Additionally, both clinical sites in Michigan and New Jersey were affiliated with osteopathic medical schools, which may have influenced the number of patients treated with OMT and limited the number of patients available for the study who were treated for low back pain but did not have OMT. Another limitation is that the proposed target sample size of 35 patients per clinic did not provide a sufficient number of patients representing three of the treatment modalities (physical therapy, interventional services, and radiology/diagnostic testing). As a result, the costs of those services were adjusted for skew, which limited a truly generalizable cost comparison between groups. Another limitation was the duration of this study. In looking at the costs of care between the two groups over our 4 month study period, those in the SCT group had an exponential increase in costs over the 4 months, while the SCT + OMT group had a steady cost with little increase. Based on our 4 month results, a longer study period might have revealed a statistically significant difference in cost between the two groups. A final limitation is the manner in which OMT and office visit costs were billed; office visits were only billed twice per month for the SCT + OMT group according to insurance regulations.
Conclusions
Although the mean total costs for SCT and SCT + OMT patients were statistically comparable across 4 months of treatment, the types of treatments differed. Patients in the SCT group (41.3%) were approximately 13 times more likely to have been treated with interventional therapies than patients in the SCT + OMT group (2.8%). Likewise, the patients in the SCT group were approximately four times more likely to have received radiological/diagnostic services. Despite the limitations of our study, the data provide important evidence that OMT can be as effective as SCT alone for patients with cLBP, with the distinct advantage of less use of opiates and interventional therapies. Thus, utilization of OMT should be encouraged among osteopathic physicians who treat patients with cLBP.
Research funding: This current study was funded by a grant from the American Osteopathic Association (Grant No. 3011806720). The funds covered honoraria, subcontracts, and travel expenses for conference presentations. Participants were not reimbursed for participation in the study. Author contributions: All authors provided substantial contributions to conception and design, acquisition of data, or analysis and interpretation of data; all authors drafted the article or revised it critically for important intellectual content; all authors gave final approval of the version of the article to be published; and all authors agree to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. Competing interests: Authors state no conflict of interest. Informed consent: Written informed consent was obtained from each participant prior to the start of any study-related activities. Ethical approval: This study was approved by the Institutional Review Boards at Rowan University School of Osteopathic Medicine and Michigan State University. This study was registered at ClinicalTrials.gov (No. NCT03532230).
|
v3-fos-license
|
2020-08-20T10:11:49.072Z
|
2020-08-14T00:00:00.000
|
225380428
|
{
"extfieldsofstudy": [
"Psychology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.iupui.edu/index.php/muj/article/download/23920/23163",
"pdf_hash": "f2522ec232cc89f43035ab4dc91f65c998455d71",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45995",
"s2fieldsofstudy": [
"Psychology"
],
"sha1": "03918cc4e676b0f9fef8eb75272236b6cf9e3345",
"year": 2020
}
|
pes2o/s2orc
|
Confirmatory Factor Analysis for the Service-Learning Outcomes Measurement Scale (S-LOMS)
Introduction
In comparison with similar measurement instruments that have been adopted in the past, S-LOMS carries several merits. First, it has been designed for the context of Hong Kong, reflecting the local culture and recent developments within the higher education sector there (Snell & Lau, 2020). Second, the set of domains included in S-LOMS comprehensively covers the desired developmental outcomes of Hong Kong based service-learning programs. Third, the administration of S-LOMS is both standardized and flexible, such that practitioners can elect to measure developmental categories or domains according to their needs. Fourth, S-LOMS is expected to undergo rigorous validation before its practical implementation. In a previous validation study (Snell & Lau, 2020), S-LOMS was tested with 400 Hong Kong university students, and the current study involves a further 600-plus respondents. It is intended that there will be subsequent studies of test-retest reliability and criterion validity, which will engage additional respondents. We anticipate that the conceptual relevance and scale validity of S-LOMS will encourage its use by service-learning practitioners as a tool for assessing progress on the enhancement of developmental outcomes for students.
The starting point for the development of S-LOMS as a measurement instrument was a review of the common student developmental domains arising from service-learning, as documented in past literature. This was followed by considering the special educational and social context for service-learning in Hong Kong. For example, within the overarching category of civic orientation and engagement, the instrument was oriented more toward moral development than participatory democracy. To further match the emerging instrument to the local context, the authors also invited local service-learning practitioners to examine the developmental domains and proposed items in the development process. As a result, 15 developmental domains under the four aforementioned overarching categories were identified.
An initial study (Snell & Lau, 2020) was then conducted to validate S-LOMS based on its administration with a sample of 400 university students. S-LOMS was found to have satisfactory internal consistency, with the underlying dimensionality uncovered through exploratory factor analysis (EFA) using the method of Principal Components with oblimin rotation. In that study, regarding reliability, S-LOMS achieved Cronbach's alpha values above .70 for its four categories, while the 15 original domains collapsed into 11, as follows. Creativity and problem solving skills combined into the higher-order domain of creative problem solving skills. Another higher-order domain comprised relationship and team skills. A third higher-order domain, community commitment and understanding, combined commitment to social betterment with understanding community. A fourth higher-order domain, caring and respect, combined empathy and caring for others with respecting diversity. The other domains remained discrete.
The current study continues the measurement instrument validation journey. This paper reports the validation results of testing S-LOMS with a new sample through confirmatory factor analysis (CFA) against the factor structure that emerged in the previous EFA study (Snell & Lau, 2020). It is intended that subsequent research not reported here will test other types of validity for S-LOMS, such as test-retest reliability, and will then use S-LOMS to measure developmental outcomes for students through before-and-after administration around the service-learning experience. The above practice is a typical step in the scale development process (e.g., Brown, 2015; Hurley, Scandura, Schriesheim, Brannick, Seers, Vandenberg, & Williams, 1997; Tay & Jebb, 2017; Worthington & Whittaker, 2006). While EFA is used to identify the dimensionality for a set of variables, it does not force variables to be loaded on certain factors in advance. By contrast, CFA tests whether data fit a pre-specified factor structure (Stevens, 2009). The current study tested a series of alternative models with various factor structures. Since the 11-domain factor structure discussed above had received empirical support from only one prior EFA study, the current study adopted a prudent approach in testing that structure together with the original theoretical 15-domain factor structure proposed by Snell and Lau (2020), as well as other possible structures, so as to compare which one would provide a better fit with a new set of data.
Participants
The current study recruited 629 university students from four Hong Kong government universities, namely Lingnan University, The Hong Kong Polytechnic University, Hong Kong Baptist University, and The Education University of Hong Kong. Female respondents constituted a larger part of the sample (59.5%) and the average age was 20.5 (s.d. = 2.21). Broken down by major disciplines, the sample comprised engineering and science (40.9%), business (19.9%), social sciences (14.3%), arts (12.7%), and healthcare (12.2%). Among the respondents, 65.8% had previous service-learning experience or were in the process of taking service-learning programs or courses.
Instrument
The original structure with four overarching categories and 15 domains described above was employed in the construction of S-LOMS. In the 56-item instrument administered to the students (see Appendix 1), there were three to four items for each of the 15 domains, in the form of self-descriptive statements. Respondents were asked to indicate the extent of their agreement with the items on a 10-point Likert scale (from 1, "strongly disagree" to 10, "strongly agree").
Procedures
The respondents were invited to answer S-LOMS on a voluntary basis in a classroom setting, with consent from the instructors of the respective courses, which did not necessarily involve service-learning. Besides S-LOMS, the students completed some demographic items about gender, age, academic background, and prior service-learning experience. Upon completing the questionnaire, the students received a HK$50 supermarket voucher.
Statistical Analysis
CFA was employed in the analysis, using EQS version 6.4 for Windows, and the extent of missingness of the sample data and the assumption of multivariate normality were checked in order to decide on the estimation methods. Regarding data missingness, 520 of the 629 participants (82.7%) provided no missing responses. The mean percentage of missing responses per item was 0.4%, and 63 missing-data patterns were identified among the 109 respondents with missing responses. The sample was also tested for multivariate normality, and the related indices provided by EQS indicated violation of the assumption. Specifically, both Yuan, Lambert, and Fouladi's coefficient (1,332.76) and its normalized estimate (208.24) showed values over 5.00, indicating that the sample data were not normally distributed (Bentler, 2006).
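The following sketch illustrates analogous checks in Python: quantifying per-respondent and per-item missingness and testing multivariate normality. Note that EQS reports Yuan, Lambert, and Fouladi's coefficient, whereas this sketch substitutes the Henze-Zirkler test from the pingouin package, so it is a stand-in for, not a reproduction of, the study's procedure, and the simulated items are hypothetical.

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(1)
data = rng.normal(loc=5.5, scale=2.0, size=(629, 10))  # 10 illustrative items
data[rng.random(data.shape) < 0.004] = np.nan          # sprinkle ~0.4% missingness
items = pd.DataFrame(data, columns=[f"item{i}" for i in range(1, 11)])

complete = items.notna().all(axis=1).mean() * 100
print(f"{complete:.1f}% of respondents gave no missing responses")
print(f"mean per-item missingness: {items.isna().mean().mean() * 100:.2f}%")

# Henze-Zirkler multivariate normality test on complete cases
hz, pval, normal = pg.multivariate_normality(items.dropna(), alpha=.05)
print(f"HZ = {hz:.3f}, p = {pval:.4f}, multivariate normal: {normal}")
```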
As the data showed incomplete and nonnormal patterns, the full information maximum likelihood (FIML) method with robust correction was employed in EQS for the CFA execution, as recommended by Bentler (2006). The scaled chi-square (Yuan-Bentler, i.e., Y-B χ²) and other indices under the Yuan-Bentler correction were adopted for deciding goodness of fit for the models. This approach is regarded as an effective adjustment procedure when the model violates multivariate normality and is applied to incomplete data (Blunch, 2016; Byrne, 2008; Savalei & Bentler, 2005).
In executing the analysis on the models specified below, a typical CFA parameterization was adopted, as described in steps (a)-(d). In step (a), the first path between each designated factor (whether a learning domain or an overarching category) and its first variable (whether an assigned item or a learning domain) was set as 1.0, for the sake of model identification and latent variable scaling. In step (b), all other parameters and factor variances were freely estimated. In step (c), a constant variable V999 with no variance and a mean value of 1.0 was created for each variable equation. In step (d), covariances were freely estimated between each designated factor. For the sake of comparison, no modification such as error covariances were made to the models.
As the model chi-square test, although commonly used, is subject to a number of limitations (Hooper, Coughlan, & Mullen, 2008) and tends to reject models as not fitting (Thompson, 2004), other goodness-of-fit indices, including the CFI, NNFI, and RMSEA, were also used for assessing the models (Tabachnick & Fidell, 2013). Since the robust correction was implemented, the values of the above indices under the Yuan-Bentler correction were adopted. Acceptable model fit was defined as follows: CFI ≥ .90; NNFI ≥ .90; RMSEA ≤ .08 (Bentler, 1990; Brown, 2015; Browne & Cudeck, 1992). Since a series of models (see the next section) with different factor structures were tested, model AIC indices were employed in comparing the competing models. These are among the most commonly adopted indices for the comparison of non-nested models using chi-square values (Brown, 2015). The smallest AIC value indicates the best fitting model, under the condition that the models are non-nested.
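As a rough illustration of fitting a CFA and screening it against these thresholds, the sketch below uses the Python semopy package on simulated data. This is an assumption for illustration only: the study itself used EQS 6.4 with FIML and robust correction, which semopy does not replicate, and the factor and item names are hypothetical stand-ins for two S-LOMS domains.

```python
import numpy as np
import pandas as pd
from semopy import Model, calc_stats

# Simulate two correlated latent factors, three items each
rng = np.random.default_rng(2)
g = rng.multivariate_normal([0, 0], [[1, .6], [.6, 1]], size=629)
load = np.zeros((2, 6))
load[0, :3] = 0.8   # CriticalThinking loads items 1-3
load[1, 3:] = 0.8   # CreativeProblemSolving loads items 4-6
data = pd.DataFrame(g @ load + rng.normal(scale=0.6, size=(629, 6)),
                    columns=["ct1", "ct2", "ct3", "cps1", "cps2", "cps3"])

desc = """
CriticalThinking =~ ct1 + ct2 + ct3
CreativeProblemSolving =~ cps1 + cps2 + cps3
"""
model = Model(desc)
model.fit(data)

stats = calc_stats(model)        # one-row table of fit measures
cfi = stats["CFI"].iloc[0]
tli = stats["TLI"].iloc[0]       # TLI is the NNFI
rmsea = stats["RMSEA"].iloc[0]
aic = stats["AIC"].iloc[0]
print(f"CFI={cfi:.3f}  NNFI/TLI={tli:.3f}  RMSEA={rmsea:.3f}  AIC={aic:.1f}")
print("acceptable fit:", cfi >= .90 and tli >= .90 and rmsea <= .08)
```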
Models Specification
Since S-LOMS is a newly established measurement instrument with only one prior EFA validation to support its internal factor structure, the current study tested, through a series of CFAs, whether the data fitted other possible factor structures for the instrument, besides the one already reported by Snell and Lau (2020). The seven models that were tested are represented in Table 1 and are explained next.
Model 1 serves as a baseline model, within which all items are loaded onto a single factor. Model 2 is theoretically grounded to the extent that the items are assumed to load directly onto their respective overarching categories identified in prior literature, of which there are four: knowledge application, personal and professional skills, civic orientation and engagement, and self-awareness. The developmental outcome domains such as relationship skills were omitted from this model. Models 3 and 4 tested whether items loaded onto their corresponding developmental outcome domains irrespective of the overarching categories (i.e., items loaded directly onto their corresponding domains). Model 3 was theoretically based, to the extent that it comprised the original 15 domains that S-LOMS had originally been designed to measure (Snell & Lau, 2020). For example, creativity and problem-solving skills were retained as two separate domains instead of being merged into the single domain of creative problem solving skills. However, the four overarching categories were not included in this model. By contrast, Model 4 was empirically based, to the extent that it combined some pairs among the original 15 outcomes to match the 11 domains that had been discovered in the previous EFA study (Snell & Lau, 2020).
Model 5 and Model 6 involved two layers of factors, and constituted hybrids of Model 2 with either Model 3 or Model 4. Both Model 5 and Model 6 were theoretically based, to the extent that they included the four overarching categories. In addition, Model 5 included the 15 theoretically based original outcome domains, whereas Model 6 included the 11 domains from the previous EFA study.
An additional model, Model 7, was a modification of Model 6 that was created by combining the domain of sense of social responsibility with the domain of community commitment and understanding under the overarching category of civic orientation and engagement, as is explained in the next section. Table 2 reports the CFA results in terms of the chi-square test, goodness-of-fit indices, and AIC indices. The chi-square values for all these models were statistically significant, reflecting that the large sample size increased the power of the test and thus the likelihood of rejection as not an exact fit. Accordingly, the goodness-of-fit indices were taken into consideration (Bentler, 1990), and Models 3, 4, 5, and 6 demonstrated acceptable model fit, with both NNFI and CFI at or marginally above .90, and RMSEA and its 90% confidence interval at or lower than .05. Moreover, all absolute values of standardized residuals were small, indicating that those models fit the data well enough. Note: * Y-B χ² denotes the Yuan-Bentler scaled chi-square values with robust correction applied. The fit indices in the table also adopt the robust-corrected versions.
Model Comparison
Comparing the AIC indices for the above four models indicates that Model 3 is the best fit, followed by Models 4, 6, and 5, in order of preference. Despite being the best fit, the results for Model 3 nonetheless indicate two issues. Specifically, the factor correlations between two pairs of learning domains, namely (1) creativity and problem solving skills, and (2) commitment to social betterment and understanding community, are 1.0, and correspond to Snell and Lau's (2020) results in the earlier EFA, which led to the creation of higher-order domains such as "creative problem solving skills". Factor correlations approaching 1.0 constitute strong grounds for combining multiple factors into a single factor, given the poor discriminant validity that is implied (Brown, 2015).
A similar issue was found with Model 5, which put the 15 domains under four overarching categories, in that the factor coefficient between understanding community and its overarching category of civic orientation and engagement was found to be 1.0. Because of these issues, Model 3 and Model 5 were dropped, and only those models with a structure involving 11 domains were considered. Among the remaining models, Model 4 was preferred, given its low AIC value, acceptable goodness of fit (Y-B χ² = 3,450.80; df = 1,429; p = .00; NNFI = .902; CFI = .909; RMSEA = .047, CI = .045, .049), and good factor loadings and factor correlations.
Model 6, with 11 domains under four overarching categories, was also found to have an issue, in that the domain of sense of social responsibility obtained a factor loading of 1.0 on its parent category, indicating the need for further structural simplification under civic orientation and engagement. Accordingly, Model 7 was created as a modification of Model 6 by combining the two conceptually related domains of sense of social responsibility and community commitment and understanding. Model 7 obtained acceptable overall goodness of fit (Y-B χ² = 3,631.76; df = 1,470; p = .00; NNFI = .898; CFI = .902; RMSEA = .048, CI = .046, .050) and a relatively low AIC value (691.76). The 56 items and the 10 domains loaded with statistical significance on their respective domains and categories, nearly all with loadings over .60 (except two items with loadings close to .60), while the four categories were significantly yet not perfectly correlated. Although more parsimonious models (i.e., Model 4) would usually be preferred, a more complex model may also be considered if it is based on a theory that can "substantially improve understanding of the phenomenon or can substantially broaden the types of phenomena understood using that theoretical approach" (Stevens, 2009, p. 572). In our case, the results of the CFA for Model 7 imply that S-LOMS can also be understood as a 10-domain model with four overarching categories.
In summary, while the fit indices implied that Model 4 was the best model for S-LOMS, inspection of factor loadings led to the creation of Model 7, which was retained for further consideration in the next step, where Model 4 and Model 7 were examined for their stability on gender by using multi-sample analysis.
Multi-sample Analysis
Multi-sample analysis, or the factorial invariance test, is especially suitable for testing whether a particular model structure or the relationships between factors in a model are applicable across samples defined by different types of categorization (Schumacker & Lomax, 1996). The dataset was divided into two samples by gender (248 male and 364 female). The demographic profile, including mean age and academic backgrounds, for the two sub-samples is listed in Table 3 below. The missingness of both the male and female samples revealed an acceptable pattern, with around or over 80% of respondents providing no missing responses. As with the previous analysis, the FIML method with the Yuan-Bentler correction was employed in model estimation, given the incomplete data with a multivariate nonnormal pattern in both samples (see Table 3). We followed the approach recommended by Tabachnick and Fidell (2013) to perform the multi-sample analysis. We began with the baseline model for the two samples and constrained a different set of parameters in each round to test whether the chi-square difference between the less restrictive and more restrictive model was statistically significant. In EQS, this result is presented as the overall chi-square values of the two models against their summative degrees of freedom, in accordance with Bentler's (2006) recommendation. In this procedure, if the result is insignificant, the next step is to add another set of constraints followed by another test, with further steps taken until the result is significant. For our analysis, the parameters comprised, in order, factor loadings, factor coefficients, and factor covariances; disturbance variances and error variances were not tested due to concern about the sub-group sample sizes. Model 4 and Model 7 were tested by means of the above method.
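The chi-square difference logic behind each round of constraint testing can be sketched as follows. The numbers are placeholders rather than the study's results, and the plain difference shown is the unscaled special case; for robust Yuan-Bentler chi-squares a scaling correction would be applied first.

```python
from scipy.stats import chi2

# Placeholder values for a less restrictive vs. a more restrictive model
chi2_free, df_free = 3450.80, 1429              # e.g., configural baseline
chi2_constrained, df_constrained = 3480.10, 1470  # loadings constrained equal

delta_chi2 = chi2_constrained - chi2_free
delta_df = df_constrained - df_free
p = chi2.sf(delta_chi2, delta_df)  # upper-tail probability
print(f"delta chi2 = {delta_chi2:.2f}, delta df = {delta_df}, p = {p:.3f}")
# A non-significant p suggests the constraints hold, i.e., invariance
```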
The results of the multi-sample analyses for Model 4 and Model 7 are given in Table 4. The multi-sample analyses indicated that both Model 4 and Model 7 were stable across the sample by gender, with acceptable goodness of fit (NNFI and CFI at .90 or above; and RMSEA <.06). Tables 5 and 6 display the reliability results for the developmental outcome domains and overarching categories of the two models. These results indicate satisfactory reliability (see Lance, Butts, & Michels, 2006), with most Cronbach's alpha scores above .80 and a small number just below .80. This was the case for the entire scale (.981), for the four overarching categories (.866 to .957 for Model 7), for the 10 developmental outcome domains in Model 7 (.794 to .925), and for the 11 domains in Model 4 (.790 to .915).
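Cronbach's alpha, as reported here for each domain and category, follows the textbook formula α = k/(k−1) · (1 − Σσ²_item / σ²_total). A small self-contained helper with simulated items is sketched below; it is the standard formula, not the study's own code.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix for a single domain."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(3)
base = rng.normal(size=(200, 1))                      # shared "true score"
domain_items = base + rng.normal(scale=0.8, size=(200, 4))  # 4 correlated items
print(f"alpha = {cronbach_alpha(domain_items):.3f}")
```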
Selected Models and Summary
Based on the above analyses, Model 4 and Model 7 were selected as potential final models, but with an inclination toward Model 4 because of its lower AIC value. The final findings for Model 4 and Model 7, in terms of factor loadings, factor coefficients, factor correlations, and reliability indices, are illustrated in Figures 1 and 2 and Tables 5 and 6. All items in Model 4 and Model 7 were loaded on their designated domains and categories, except that for Model 7 the domain of sense of social responsibility was combined with that of community commitment and understanding. As a result, the constituent domains within the category of civic orientation and engagement distinguish interpersonal-level issues, i.e., caring and respect, from community-level issues. Although the structure of both models received confirmation, the high factor correlations and coefficients illustrated that a more parsimonious solution could be obtained (Brown, 2015). We will discuss this further in the next section. Note: * Y-B χ² denotes the Yuan-Bentler scaled chi-square values with robust correction applied; the fit indices also adopt the robust-corrected versions. Table legend: Self-efficacy; SU: Self-understanding; CSI: Commitment to Self-improvement. *A cognominal category was created above the domain "Knowledge Application" for the sake of providing a clear model structure.
Conclusion
By using CFA with a relatively large sample, the current study sought to confirm the dimensionality and factor structure of S-LOMS that had been obtained through EFA in a previous study (Snell & Lau, 2020). Seven alternative models were specified and tested. The results indicated that an 11-domain model without overarching categories (Model 4) was the best fit, outperforming the single factor model (Model 1) and four-category level model (Model 2) in terms of the AIC values and goodness of fit indices. By contrast, the analysis indicated that both models that contained 15 developmental outcome domains (Model 3 and Model 5) could not fit the data well, because of ill-fitting patterns in factor correlations and coefficients between particular pairs of domains. Thus, in Model 3, there was a factor correlation of 1.0 between the domains of creativity and problem solving skills, and between the domains of commitment to social betterment and understanding community; while in Model 5 a factor coefficient of 1.0 was found between the domain of understanding community and its overarching category of civic orientation and engagement. The discovery of factor correlations or coefficients approaching 1.0 indicates that there may be more parsimonious model structures (Brown, 2015), and is consistent with the EFA results in the prior study (Snell & Lau, 2020).
Model 6, with a structure of 15 developmental outcome domains under four overarching categories was also rejected due to its factor coefficient of 1.0 between the domain of sense of social responsibility and its overarching category of civic orientation and engagement. Model 7 was therefore created based on a modification of Model 6, with the two domains subsumed under the overarching category of civic orientation and engagement. The first of these domains, a composite of community commitment and understanding and sense of social responsibility, reflects concern for societal level issues. The second domain, caring and respect, reflects interpersonal-level sensitivity. Acceptable goodness of fit was found between the data and Model 7, albeit with an AIC that was larger than for Model 4. Both the 11-domain model without overarching categories (Model 4) and the 10-domain model with four overarching categories (Model 7) were found to be invariant in terms of factor structure, factor loadings, factor coefficients, and factor correlations between male and female groups in the sample, indicating the stability of both models across gender (Schumacker & Lomax, 1996).
The results, indicating preference for 11 over 15 developmental domains, confirmed the previous EFA findings of Snell and Lau (2020). Specifically, in Model 4, creativity and problem solving skills were combined into creative problem solving skills; relationship skills and team skills were combined under relationship and team skills; community understanding and commitment to community were integrated under community commitment and understanding; and empathy and caring for others, along with respecting diversity, were subsumed under caring and respect.
The four overarching categories confirmed in Model 7 are consistent with typologies of the major developmental outcomes of service-learning in the past literature, which include academic enhancement, personal growth, and civic learning (e.g., Driscoll et al., 1996; Eyler & Giles, 1999; Eyler et al., 2001; Felten & Clayton, 2011). Model 7 also includes self-awareness as an overarching category, which was created by Snell & Lau (2020) to capture the developmental outcomes associated with Confucian self-cultivation, which has influenced tertiary education policy in Hong Kong. At the developmental outcome domain level, Model 7 further reduces the number of domains from 11 to 10, by combining sense of social responsibility with community commitment and understanding. In summary, by comparing the 11- and 15-domain structures through CFA with a new sample, the current study confirmed that S-LOMS can be structured as an 11-domain model (Model 4) without an overarching category level, which was a better fit with the data than the alternative models. Nonetheless, the study also offered some support for a model with 10 developmental outcome domains under four overarching categories (Model 7), resembling the findings of past literature. Multi-sample analysis indicated that both Model 4 and Model 7 were stable across male and female groups in the current sample.
Practical Implications
Because of the satisfactory factor validity and internal consistency reported above, S-LOMS offers flexibility in how the developmental impacts on students engaging in service-learning can be measured, with a number of options besides using the entire 56-item scale. For example, an instructor with a specific interest only in the two developmental outcome domains of critical thinking skills, which has three items, and creative problem-solving skills, which has eight items, need only use those 11 items for measurement, thereby streamlining the data collection process. Another example is that an investigator who wishes to focus on measuring impact within the overarching category of civic orientation and engagement need only use the 18 associated items instead of the entire S-LOMS. Thus, S-LOMS can be administered flexibly in accordance with instructors' or researchers' needs. Overall scores for any particular developmental outcome domain can be derived by averaging the scores of the associated items. It is assumed that investigators would adopt a pretest-posttest research design for measuring developmental impacts.
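The flexible scoring just described amounts to averaging item responses within the selected domains. A minimal pandas sketch follows; the item names and item-to-domain mapping shown here are hypothetical, since the real assignment follows Appendix 1 of the instrument.

```python
import pandas as pd

# 10-point Likert responses for two illustrative respondents
responses = pd.DataFrame({
    "ct1": [7, 8], "ct2": [6, 9], "ct3": [8, 7],   # critical thinking items
    "cps1": [5, 9], "cps2": [6, 8],                # creative problem solving
})
domains = {
    "critical_thinking": ["ct1", "ct2", "ct3"],
    "creative_problem_solving": ["cps1", "cps2"],
}

# Domain score = mean of its associated items, per respondent
scores = pd.DataFrame({name: responses[cols].mean(axis=1)
                       for name, cols in domains.items()})
print(scores)
```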
Limitations and Further Studies
The first limitation lies in the level of fit of the models in the current study. Although both Model 4 and Model 7 achieved acceptable goodness-of-fit indices (i.e., .90 or above for NNFI and CFI), they did not meet the satisfactory level of .95 for the NNFI and CFI indices (Hu & Bentler, 1999). Further studies should apply S-LOMS to new samples to test whether consistently satisfactory goodness-of-fit indices can be obtained and, if not, discover what modifications are necessary in order to achieve this. The second limitation arises from the multivariate nonnormality of the data from the current sample, which biases ML methods in model estimation. Despite our attempt to apply corrections through the Yuan-Bentler correction with the FIML method, other researchers have stated that better results can be achieved by adopting a two-stage robust method for non-normal missing data (e.g., Tong, Zhang, & Yuan, 2014), and further studies can consider adopting the latter approach.
Third, the numerous factor correlations and factor coefficients exceeding .85 that were found for both Model 4 and Model 7 warrant attention. They imply poor discriminant validity (Brown, 2015) and may raise questions about the unique predictive validity of the individual factors. Further research is thus required into the predictive validity of S-LOMS's domains and categories. Further limitations, in the case of Model 7, concern the high factor coefficients between the 10 domains and their corresponding overarching categories, as well as the high factor correlations between the overarching categories. This phenomenon matches the observation by Snell and Lau (2020) that although the four overarching categories are conceptually distinct, they are empirically inter-related. The limitation of high factor correlations and coefficients suggests that S-LOMS may need further refinement, and that there is scope for testing a set of simplified models against data from new samples.
Despite the above limitations, the current study has provided empirical evidence about the construct validity of S-LOMS, from which further validation work can be done. The next steps being undertaken include validating test-retest reliability over an interval of time, and testing
|
v3-fos-license
|
2022-03-07T14:36:43.294Z
|
2022-03-07T00:00:00.000
|
247246046
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bmchealthservres.biomedcentral.com/track/pdf/10.1186/s12913-022-07678-z",
"pdf_hash": "38dee5144d966229b36ff0662049074fb8b120e1",
"pdf_src": "Springer",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45998",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "5452515a938c6e69c826e440c81cac173d089a21",
"year": 2022
}
|
pes2o/s2orc
|
Mental healthcare-seeking behavior during the perinatal period among women in rural Bangladesh
Introduction Mental health conditions are of rising concern due to their increased contribution to the global burden of disease. Mental health issues are inextricably linked with other socio-cultural and health dimensions, especially in rural areas of developing countries. The complex relationship between mental health issues and socio-cultural settings may take a heavy toll on healthcare-seeking behavior. It is therefore urgent to document the current status of mental healthcare-seeking behavior during the perinatal period among rural women in Bangladesh in order to develop a context-specific intervention in the future. Methods This study was carried out in one sub-district in Bangladesh from April 2017 to June 2018. We conducted 21 In-depth Interviews (IDIs) and seven Focus Group Discussions (FGDs) with different groups of purposively selected participants. After recording the interviews and transcribing them verbatim, the data were coded in ATLAS.ti 5.7a. Data were analyzed thematically to interpret the findings. Results Two-thirds of the total respondents did not seek mental healthcare during the perinatal period at the community level. They also did not know about the mental health service providers or the facilities from which to access these services. Only one respondent out of twenty-one sought maternal mental healthcare, from a gynecologist at a private hospital. Socio-cultural factors such as social stigma, traditional beliefs and practices, social and religious taboos, and social capital negatively influence healthcare-seeking behaviors. Besides, the community-level service providers were not found to be adequately trained and did not have proper guidelines for managing these conditions. Conclusion The findings provide evidence that there is an urgent need to increase awareness among service users and to formulate a guideline for community-level service providers to manage maternal mental health problems during the perinatal period in rural Bangladesh.
Introduction
Globally, mental health conditions are of rising concern due to their increased contribution to the disease burden [1]. Mental well-being is inextricably linked with the social and physical environment; thus, it is determined not only by the absence of mental disorders but also by related socio-economic, biological, and environmental factors [2]. Mental health disorders refer to a set of medical conditions that can affect a person's thinking, feelings, mood, ability to relate to others, and daily functioning [3]. Poor maternal mental health affects more than 1 in 10 women during pregnancy and the first year after childbirth and can have a devastating impact on them and their children [4-6].
Due to physiological changes during the perinatal period, defined as the time span from conception until the infant reaches the age of one, many women are affected by mental disorders [5]. However, there is still a big concern over how pregnant mothers seek care for mental health problems, especially in low- and middle-income countries [7]. Evidence suggests that only 13.6% of women have sought help for their depressive symptoms [8]. Healthcare-seeking behavior and the socio-cultural factors influencing it have a considerable impact on the lives of women as well as on the early childhood development (ECD) of the baby [4,9]. In low- and middle-income countries, where women's familial, social, and physical environments are crucial determining factors for perinatal health services, the decision to seek care is highly complicated [2,9-13]. Besides, at the population level, there is also a lack of clear understanding among service users regarding where to seek services for particular types of disease, which often hampers appropriate and timely care-seeking [14-18].
Mental health issues have been incorporated as an essential prerequisite for good health and well-being in the United Nations (UN)-declared Sustainable Development Goals (SDGs) [19]. Therefore, mental health support services are needed at the community level to ensure perinatal mothers' good health and well-being. But there is a considerable knowledge gap regarding the availability and accessibility of these services in Bangladesh, especially given the traditional beliefs and social taboos [20] that ultimately hinder women from seeking care for mental health problems for fear of discrimination [14].
Understanding healthcare-seeking behavior in a community is necessary to develop appropriate health policies, health systems, and educational strategies to facilitate access [10]. Besides, understanding the determinants of optimal healthcare-seeking behavior among perinatal women can significantly reduce the impact of severe illness on children's growth and development [21]. Given the variability of socio-demographic and cultural contexts, communities differ in their perception of vulnerability or risk for newborns and in prevailing customs, traditions, and beliefs. Therefore, it is critically important to understand community-specific patterns and determinants of population-level antenatal, delivery, and postnatal care-seeking practices, especially during the perinatal period. The public health system in Bangladesh is well organized from the community level up to the national level, but there is a lack of research-based data or evidence for estimating need, services, or support for particular groups of people and service providers, and the budget allocation is minimal (0.05% is designated for mental health services) for promoting the perinatal mental healthcare of women in community settings. There is also insufficient evidence around community people's knowledge and health service providers' expertise in psychosocial support for perinatal women in rural Bangladesh. Against this backdrop, this paper aims to explore healthcare-seeking behavior and its influencing factors and to document the service gaps at the community level for perinatal mental disorders of women in Bangladesh.
Research design and methods
This was formative cross-sectional research in which we applied a qualitative approach to data collection [22]. We employed thematic analysis to describe the meaning and significance of respondents' experiences regarding healthcare-seeking behavior for women's perinatal mental health. We conducted both IDIs and FGDs to collect data from different populations.
Study settings
The study was carried out at the community level in one sub-district of the Rajbari district of Bangladesh. We purposively selected the study area based on the low proportion of care-seeking during the perinatal period and the high mortality rate of under-five children [23]. The Upazila (the local administrative unit at the sub-district level) is divided into one municipality and seven unions (the smallest administrative units in Bangladesh). We covered all unions of this Upazila for the study.
Study population, sample size, and sampling criteria
We carried out 21 IDIs in this study with different groups of people. Among them, 14 IDIs were conducted with mothers who had children at least one year old, to learn their perceptions of mental health issues and their experiences of seeking care for these illnesses. In addition, seven IDIs were conducted with community service providers such as the Sub-Assistant Community Medical Officer, Family Welfare Visitor, Family Welfare Assistant, Health Assistant, Community Health Care Provider (CHCP), and Non-Formal Practitioner. We used data saturation to fix the number of IDIs in this study. Data saturation refers to reaching a point of informational redundancy where additional data collection contributes little or nothing new to the study [24]. In this regard, Cohen et al. (2000, p. 56) suggested that between 10 and 30 interviews are best for exploring an objective in phenomenological research [25]. All IDI participants were selected purposively, considering the type of respondent and their willingness to participate in this study as interviewees [26]. To triangulate the data, we also conducted seven FGDs with community stakeholders such as household heads, Union Council members (a Union Council consists of an elected chairman and twelve members, including three seats exclusively reserved for women), teachers, religious leaders, and service users who took services other than maternal health care. For the selection of FGD participants, we considered heterogeneity regarding age, sex, education, occupation, etc., to gather information reflecting different perspectives of the participants. However, we did not use any diagnostic tool to identify mental illness before selection; therefore, whether any interviewed participant had a mental illness remained undiagnosed.
Data collection and quality control
The data collection guidelines were developed separately for the different groups. The data collection tools were finalized after incorporating findings from pre-testing done with a similar group of the population living in another area far from the study site. The tools were then translated from English into Bengali and the local dialect so that interviews could be conducted in a language the respondents understood. Furthermore, to ensure reliability, we assessed the meaning of, and explored, health and mental healthcare-seeking behaviors across different socio-cultural influencing factors during the perinatal period of women at the community level. Content validity was assured by seeking confirmation of each health belief from other women on different days, so that each was understood from various sources. We ensured comprehensive collection and assessment of the socio-cultural health beliefs and practices bearing on healthcare-seeking behavior during the perinatal period of women in the study area. A data collection team of two Senior Research Assistants and one Research Officer, all experienced in qualitative data collection, was formed. The team was trained intensively on all aspects of data collection, including consent taking, the different data collection methods, antenatal, delivery, and postnatal maternal mental health problems of women, and community health services in the perinatal period. The team visited the households of the selected antenatal and postnatal women and of mothers with recent experience of childbirth and childcare to conduct the IDIs. The IDIs were conducted on a one-to-one basis at the respondents' homes and took 30-40 min each. In addition, seven FGDs were conducted with 7-8 respondents each at Community Clinics (CC) and took 40-50 min each. Data collection continued until information saturation was reached. Data were checked every day through feedback sessions at the end of the day. We listened to the recorded interviews to identify new issues and find any missed opportunities to explore further. A central monitoring team of investigators continuously monitored the data collection to ensure quality.
Data analysis
All the data were collected through audio recording along with note-taking. The audio recordings were transcribed verbatim (in their original form). The transcripts were then organized through cross-checking against the interview notes. Transcripts were randomly checked against the audio recordings to ensure the quality of the transcription. The data were analyzed using a thematic approach, and we identified themes as per the research objectives. The transcribed data were systematically coded, synthesized, and interpreted to explain the findings. Results on the same issues from different types of respondents and areas were compared to strengthen the validity of the findings. We used ATLAS.ti 5.7a software for coding and organizing the data.
Results
We categorized four broad themes to explore the maternal mental healthcare-seeking behavior of perinatal women in the study area. In the first theme, we present the essential characteristics of the participants that can influence their decisions during this period. We then explore the respondents' perceptions of maternal mental health problems. Next, under the broad theme of maternal mental healthcare-seeking during the perinatal period, we investigate how the respondents recognized perinatal mental health problems, where they sought perinatal mental healthcare, and who provided support in seeking care during this period. Finally, we present the socio-cultural factors and neighborhood support that influence care-seeking practice.
Socio-demographic characteristics of the respondents
A total of 74 participants (21 from the 21 IDIs and 53 from the seven FGDs) took part in the study. The participants' socio-demographic characteristics showed that the largest share (64%) were aged 30 years or above. Of the total participants, 35% were male and 66% were female. Most participants (87%) were Muslims and 14% were Hindus. About one-third of the participants (32%) had higher secondary or above-level education and 34% had secondary-level education, but one-fifth had no formal education. Among the participants, 39% were housewives, 39% were service holders, 11% were businessmen, 8% were farmers, and 4% were teachers. There were also members of local government bodies such as the Upazila Parishad (the local administration at the sub-district level), religious leaders, and retired persons. Forty percent of the participants had a monthly household income of more than 12,000 takas, while 24% had less than 3,000 takas.
Perceptions of maternal mental health problems
Two-thirds of the respondents said that maternal mental health problems during the perinatal period were not a problem to them because they are a natural phenomenon. The mood swings, dizziness, bad dreams during sleep, and fears of dying from pregnancy that mothers experience are attributed to the extra burden of being pregnant and rearing a child during this period and are viewed as usual symptoms of this time. Therefore, additional support or medication is not deemed necessary. These symptoms are regarded as usual for women and are not labeled as a disease condition to be treated.
On the other hand, mental health issues are strongly tied to social stigma for a woman in the community. If a woman has a mental disease, people call her "pagol" (mentally sick/mad), and they think that such a woman has to seek treatment from a mental hospital, locally known as "Pagla Garod" (loosely translated as a sanctuary for mad people). Five respondents said that they were reluctant to seek healthcare due to this perception of maternal mental health, and of mental health generally, in the study area. They did not know who provides such support, medication, or treatment in the community (at the primary level) for mental health, especially for maternal mental health problems during the perinatal period of women. One respondent (a mother) said: "I felt worried during pregnancy. I think this is typical for women in the perinatal period. But I did not seek any doctor. I did not even know who provides the treatment." (Age: 19 years, female; Education: class five; Occupation: housewife) Two-thirds of the community service providers said they had not heard about women's perinatal mental health problems, although they had heard the names of mental health conditions such as depression, anxiety, and stress from their colleagues. They reported that they provide counseling on taking nutritious food and on arranging money and vehicles for emergencies. Apart from these, they did not provide any psycho-social counseling during the perinatal period of women for their mental health and well-being. One community healthcare provider said that maternal mental health is closely related to women's circumstances and to social and physical factors during this period. When they cannot treat a woman's symptoms, they can refer her to the Upazila Health Complex (UHC), a higher-level facility of primary healthcare in Bangladesh. However, they cannot suggest any support or management for mental health-related problems such as depression, anxiety, stress, and postnatal psychotic disorders due to the absence of training and treatment guidelines.
Maternal mental healthcare-seeking during the perinatal period
Recognition of antenatal mental health and care-seeking
All of the respondents said that when a woman conceives, she is locally called 'poati' (pregnant), 'pet hoice' (being pregnant), or 'Maa hote cholechhe' (mother-to-be). After the pregnancy is confirmed, women inform their senior family members (the mother-in-law if present, and the husband). In most cases, the husband and/or older family members decide whether to seek care if needed. Three-fourths of the participants reported that being pregnant is not a serious issue requiring doctors. Regarding maternal mental health problems, two-thirds of the respondents reported that pregnancy might be associated with the would-be mother's mental concerns. Many mothers had experienced fear of delivery, a pounding heart, and sweating (anxiety) over their upcoming child's good health and well-being and the impending birth during the pregnancy period. They also added that poor mental health conditions might lead to increased risk in childbirth, followed by postnatal mental illness and improper child care. Some women had a mental illness when they became pregnant, and some developed mental health problems during their first pregnancy. One respondent reported that, since the husband is often not aware of the mental state of women during pregnancy, the wife does not get any support from him, which makes the mental problems worse. A few respondents reported that husbands had seen the adverse mental condition of their wives in the perinatal period in relation to pregnancy complications. The other dimension of mental health is closely related to the sex of the upcoming baby. For example, one respondent (a husband) reported that his wife was tense due to her expectation of a boy. He said:
"They were agitated during the fourth pregnancy because they already have three daughters. If it is repeated (daughter), what will happen then? This made them anxious. "
Two-thirds of the respondents (service users) said that there are no mental health service providers in the primary- and secondary-level hospitals of the health system in Bangladesh. Only one respondent out of twenty-one had sought maternal mental healthcare, from a gynecologist in a private hospital.
From the supply-side perspective, at the community level, a service provider (CHCP) mentioned that they do not have any guidelines or knowledge for managing maternal mental disorders in the perinatal period of women. One respondent shared her experience: "When I was pregnant, I felt apprehension or dread, was tense about my delivery, and panicked regularly. Then I shared it with my husband. He told me to go to the CC for counseling, but Apa (the CHCP) could not provide any suggestions on these issues." One-third of the respondents (service providers and stakeholders) opined that unintended pregnancies are common at the community level in the country, making pregnant women mentally depressed.
Delivery care-seeking
In terms of physical health, two-thirds of the respondents reported that they sought treatment during the delivery period from a private hospital, a clinic, or a Mother & Child Welfare Centre (MCWC), popularly known as "maternity", at the district level. A few respondents also revealed that they went to the UHC for delivery. Five out of fourteen respondents reported that they sought delivery care from the district hospital. Among all the respondents, only one went to Faridpur Medical College and Hospital, a tertiary-level hospital, for delivery care due to prolonged labor pain. In terms of mental healthcare-seeking, two-thirds of the respondents said that they felt tense and became frustrated over danger signs, fatigue, hopelessness, body pain, and labor pain during the delivery period. They did not seek doctors' treatment for these maternal mental crises, believing them to be a natural process for human beings that would be cured naturally. Three respondents said that they are followers of the Atrashi Pak Darbar Sharif (a religious and spiritual place) and obtained a talisman (a spiritual remedy) from this Darbar Sharif for any mental health problems during the delivery period. Another two respondents said that they took pani pora (blessed water) from the mosque's Imam (Muslim religious leader) to cure worries during the delivery period.
Postnatal (6 weeks or 42 days after delivery) care-seeking
Two-thirds of community-level service providers reported that they do not have formal knowledge or treatment guidelines to deal with postnatal mental disorders such as depression, anxiety, stress, postpartum psychosis, postpartum blues, and related symptoms in the postnatal period of women. They were also unaware that poor mental health conditions may lead to increased risk in childbirth followed by postpartum depression. One respondent said,
"I think maternal mental disorders have seen in the perinatal period, especially after delivery due to her physical poor health conditions and new kid's crying and disturbance. " (Age: 18 years female, Education: Secondary School Certificate, Occupation: Housewife)
On the other hand, five respondents said that they had experienced mental health problems during the postnatal period, although they did not seek treatment. One mother reported that she had thought of seeking mental health treatment during the postnatal period but did not know where to go for it. Only one respondent, who was in the teaching profession, noted that depression, anxiety, postpartum blues, and psychosis are the most common mental health problems in the postnatal period, but that treatment and management are not available in community-level facilities.
Socio-cultural influencing factors
In the study area, two-thirds of the respondents revealed that healthcare-seeking behavior is influenced by many factors. The factors with the most significant influence on treatment-seeking for maternal mental disorders were socio-cultural and religious beliefs, practices, taboos, and restrictions during the perinatal period of women. These factors are elaborated through the respondents' experiences below.
Beliefs and practices
Two-thirds of community people hold various beliefs about social and religious entities in relation to maternal health and mental health issues during the perinatal period. Traditional healers, religious leaders, and folk and spiritual healers are often referred to as sources of treatment. Three out of 14 respondents revealed that they did not go to a doctor to seek mental healthcare during pregnancy because the doctor might order tests that could be harmful to the fetus; going to doctors was also costly. In those cases, they had to abide by the decisions of their mother-in-law and husband, which led them to go to religious leaders for spiritual blessings. Three respondents out of 14 said they sought treatment from the Sasto Kormi (health worker) of the Bangladesh Rural Advancement Committee because they knew her and she cared for community people through household visits.
Support from neighborhood
Two-thirds of the respondents reported that social capital was a leading social determinant motivating maternal mental healthcare-seeking behavior in the perinatal period of women. Through such relationships, a woman receives support that improves her mental well-being during the perinatal period. Two-thirds of the pregnant women who participated in this study explained that they usually got help from their neighbors during pregnancy and in any critical situation. One pregnant woman said: "I felt severe pain in the lower abdomen during the eighth month of my pregnancy. I did not find any way out. I shared it with my husband, but he could not solve the problem from his knowledge; then I went to my neighbor. She told me that it is very usual in pregnancy. I got relief after hearing this." (Age: 28 years, female; Education: class seven; Occupation: housewife) Another female respondent, who has a one-year-old child, stated: "When I conceived, I did not know who would be better for medication at that time. My neighbor told me to go to either the private clinic or the Maternity in Rajbari Sadar. After that, I went to the Maternity and was checked by a gynecologist." (Age: 18 years, female; Education: class seven; Occupation: housewife) Two-thirds of the respondents opined that a husband's support is more trusted and closer than that of female relatives and friends, so his support is considered the most important during pregnancy. Moreover, intimacy in the husband-wife relationship plays a central role in the social support received and the couple's sense of togetherness.
Discussion
Our study suggests that healthcare-seeking behavior regarding maternal health and mental health disorders during the perinatal period has a considerable impact on women's lives. The findings also emphasize that healthcare-seeking is a process that begins at the community level and ends with a specialized doctor at the sub-district or district level.
However, the results show that rural women do not usually seek treatment for physical and mental health problems during the perinatal period. This finding is consistent with other studies showing that parents are reluctant to seek care for mental health issues [27]; 78% of parents even sought 'no care' for their preterm newborn [28]. Our results show that pregnant women faced many problems during the perinatal period, such as sadness, eating disorders, fear of delivery, inability to sleep properly, and tension (excessive thinking about the outcome), but they rarely sought care to distinguish normal sadness from pathological conditions such as depression, anxiety, and trauma that can lead to mental disorders. A systematic review reported a similar finding: women do not seek treatment for maternal mental health disorders during the perinatal period [29].
At the individual level, married pregnant adolescent girls usually avoid health facilities for pregnancy and delivery care because of their perception that pregnancy is a natural phenomenon and that, therefore, there is no need to receive pregnancy care or other medical support. Some women feel shy in front of male service providers, and some are afraid of instrumental delivery and surgical intervention if needed. Regarding interpersonal and family-level factors, respondents pointed out that decision-makers (husband, mother-in-law, senior family members, and relatives) play an essential role in the use of skilled maternal health services. Across low- and middle-income country settings, there is still considerable concern over how pregnant mothers seek care for mental health problems, especially since evidence suggests that only 13.6% of women have sought help for their depressive symptoms [9].
Our research even found that some women went to discuss their problems with community service providers but did not get any fruitful treatment or effective support for mental health problems during the perinatal period, as the service providers are not oriented to perinatal mental health problems. Healthcare providers at the community level therefore need proper guidelines for the initial management of mental health problems and appropriate referrals. A similar finding was reported in another study, which found a high prevalence of antenatal depression among rural women who rarely seek treatment for their depression, in line with our findings [7]. Gausia et al. (2009) revealed that 14% of women with depression admitted that they felt like doing self-harm during their current pregnancy [30].
The findings show that most mothers stated that they felt panic, phobia, stress, and physical aches and pains during the delivery period, yet there was no one beside them to give mental support. Similarly, other studies found that most respondents seek healthcare from traditional birth attendants (TBAs) and non-formal practitioners during the delivery period, even though these providers have no knowledge or training in maternal mental health. Much of the literature reports that delivery usually takes place at home and is attended by a TBA [31,32]. A national survey in Bangladesh likewise found that the facility delivery rate is very low [33], similar to this study's findings. Women thus regard delivery as a natural process that is part of life, but they nevertheless felt a need for mental support during the delivery period [34].
Our study found that the religious beliefs and practices of the family influence care-seeking. Almost similar findings came from a study showing that women sought faith-based medication from local religious leaders along with formal healthcare services [10]. Likewise, another study found that women receive pregnancy-related care from multiple sources based on the nature and type of threats they associate with their pregnancy [35]. Moreover, social capital influences healthcare-seeking behaviors for maternal health and mental health. In the same vein, Ahmad et al. found a strong influence of social capital on healthcare-seeking behaviors during the perinatal period of women, showing that women's social networks help them seek treatment as needed [36]. In another study, Yakong et al. found that mothers' healthcare-seeking behavior during the perinatal period was influenced by interpersonal communication between providers and patients; that study further showed that women received treatment surreptitiously for mental health-related disease, and some patients experienced a lack of privacy while seeking treatment [37].
Moreover, women's help-seeking was influenced by their expectations and experience of healthcare professionals and by structural factors of the healthcare system [9]. We found in this study that community healthcare service providers had no necessary training or guidelines for managing patients with mental health problems during the perinatal period, which is why perinatal women cannot get adequate services for their mental health problems. Another study also found that inadequate numbers of service providers and the absence of management guidelines are vital barriers to providing services for mentally disordered women during this period and hamper the quality of life of people in developing countries [34]. The findings therefore suggest that community health service providers should be capacitated through training on identifying mental disorders and their proper management, including referral. Alongside this, government and non-government development partners should arrange awareness campaigns on perinatal disorders at the community level to educate community women about mental disorders and inform them about the appropriate service points.
Limitations
This study has several limitations. Firstly, it is a relatively small-scale study with a small sample size that does not represent the whole country. The study also faced time and funding constraints. Maternal mental health is difficult to conceptualize and explore across the varied experiences of women in the community; these difficulties were overcome through repeated meetings and consultations with experts in the area from the government health system of Bangladesh.
Conclusion
The study documented some supply-side barriers to providing maternal mental health services in community-level health facilities such as the CC, the Union Health and Family Welfare Centre, and the Union Sub-Centre. Community-level health providers do not have the knowledge and skill set to provide mental health support during this time, nor do they have proper guidelines or training. Despite the recognized importance of mental health during the perinatal period, such services are not yet available in community-level public health facilities in Bangladesh. From the demand-side perspective, the study also found that the participants were not aware of service points or treatment, including medication, for maternal mental disorders. They did not know who could provide treatment or understand its impact on the mother and child in the future. This study documented the healthcare-seeking behavior and the knowledge (guideline and training) gap of community healthcare service providers regarding psycho-social support for maternal mental health during the perinatal period of women in rural Bangladesh. We therefore emphasize integrating maternal mental healthcare support within the existing health system and promoting community mobilization to improve maternal mental health in rural settings of Bangladesh. The findings may also help policymakers and program implementers formulate appropriate policies addressing the knowledge and service gaps identified in this study. Furthermore, well-designed epidemiological and clinical research is needed to generate evidence to improve mental health services for perinatal women in communities in Bangladesh. In addition, a feasibility study could examine how a psychosocial support intervention can be effective for perinatal women and their babies' cognitive development.
An Ensemble Wasserstein Generative Adversarial Network Method for Road Extraction From High Resolution Remote Sensing Images in Rural Areas
Road extraction from high resolution remote sensing (HR-RS) images is an important yet challenging computer vision task. In this study, we propose an ensemble Wasserstein Generative Adversarial Network with Gradient Penalty (WGAN-GP) method called E-WGAN-GP for road extraction from HR-RS images in rural areas. The WGAN-GP model modifies the standard GAN with the Wasserstein distance and a gradient penalty. We add a spatial penalty term to the loss function of the WGAN-GP model to solve the class imbalance problem typical of road extraction. Parameter experiments are undertaken to determine the best spatial penalty and weight terms in the new loss function based on the GaoFen-2 dataset. In addition, we execute an ensemble strategy in which we first train two WGAN-GP models using the U-Net and BiSeNet as the generator, respectively, and then intersect their inferred outputs to yield better road vectors. We train our new model on GaoFen-2 HR-RS images of rural areas in China and also on the DeepGlobe Road Extraction dataset. Compared with the U-Net, BiSeNet, D-LinkNet and WGAN-GP methods without ensemble, our new method makes a good trade-off between precision and recall, with F1-score = 0.85 and IoU = 0.73.
I. INTRODUCTION
Up-to-date road networks are crucial for vehicle navigation, urban-rural planning, disaster relief and traffic management [1], [2]. However, manual road extraction is costly and slow, especially for rural areas, which, unlike urban areas and highways, are poorly accessible and have limited GPS data coverage [3], [4]. Automatic road extraction from high resolution remote sensing (HR-RS) images has therefore attracted much attention, since HR-RS images such as GaoFen-2 (GF-2) offer wide imaging coverage, frequent revisits, and sufficient spectral and spatial information on land covers [5], [6]. Most researchers regard road extraction from HR-RS images as a pixel-wise segmentation task that classifies each pixel as road or non-road according to the geometric, photometric and texture features of roads (e.g. [2], [7], [8]). Thus, the performance of road extraction highly depends on the chosen semantic segmentation method.
In the last decade, the deep convolutional neural network (CNN) has been introduced for road extraction due to its impressive performance in computer vision [9]-[11]. A CNN is a powerful feature extractor for road extraction, automatically learning low- to high-level semantic features through consecutive convolutional operations. Based on CNNs, the fully convolutional network (FCN) and its variants use encoder-decoder structures with skip connections to fuse multi-scale and multi-level features and have achieved state-of-the-art accuracy in road extraction tasks [8], [12]. Nevertheless, although numerous methods have been proposed over the decades, road extraction from HR-RS images is still far from large-scale practical application. The spectral heterogeneity in HR-RS images [6], [13] and the occlusion of roads by trees, shadows and other non-road objects are the key factors that make the segmented road networks messy and unclean [14], [15]. A more general and robust road segmentation method is needed to improve the performance of road extraction from HR-RS images.
Recently, the generative adversarial network (GAN) has taken the deep learning community by storm due to its unparalleled flexibility and accuracy in image segmentation tasks [16]. A GAN consists of a generator G and a discriminator D, where G generates images that are indistinguishable from the ground truth and D distinguishes between the ground truth and the generated images. A few works have used standard GANs for road extraction from HR-RS images and achieved good performance [4], [7], [15], [17], [18]. However, these standard GANs may suffer from vanishing gradients, mode collapse, and unstable training [19], [20]. To solve these problems, [4] recently introduced the Wasserstein GAN with gradient penalty (WGAN-GP) model into road extraction. The WGAN-GP modifies the standard GAN using the Wasserstein distance and a gradient penalty, which makes it more stable and easier to train [21]. However, that work neglects the class imbalance problem arising because roads are spatially sparse in HR-RS images. In addition, it only uses the U-Net [12] as G, which can be improved upon by other state-of-the-art segmentation models such as BiSeNet [22]. Furthermore, creating ensembles of multiple models may be an effective way to improve the overall performance of CNN methods [40]-[42].
In summary, the main contributions of this study are as follows:
1. A new WGAN-GP-based method named E-WGAN-GP is introduced for road extraction from HR-RS images in rural areas. The E-WGAN-GP method trains two WGAN-GP models with the U-Net and the BiSeNet as G, respectively, and then uses an ensemble strategy based on intersection calculation to improve the overall performance.
2. A spatial penalty term is added to the loss function of the WGAN-GP model to address the class imbalance issue typical of road extraction.
3. Parameter experiments are undertaken to determine the best spatial penalty and weight terms of the E-WGAN-GP method on the GF-2 dataset.
4. The E-WGAN-GP method outperforms the state-of-the-art methods U-Net, D-LinkNet, BiSeNet and WGAN-GP without ensemble in terms of F1-score and IoU on both the GF-2 and DeepGlobe datasets.
II. RELATED WORKS
There are intensive studies of road extraction from HR-RS images. Most of them use classification-based methods, which can be further divided into supervised and unsupervised [23]. For supervised classification, [2] propose an artificial neural network (ANN) method with a Restricted Boltzmann Machine (RBM) initialization procedure to segment roads. [24] use an object-oriented method based on the support vector machine (SVM) to extract road centerlines. [25] compare Markov random fields (MRF) with a hybrid model of the SVM and Fuzzy C-Means (FCM) in road extraction. For unsupervised classification, [26] apply K-means clustering to extract road areas, followed by mathematical morphology to remove non-road areas. [27] present a semi-automatic approach to delineate road networks incorporating the geodesic and mean shift methods. [28] introduce probabilistic and graph theory to infer road networks. Other classic methods, represented by the snake and conditional random field (CRF) models, have also been adopted in road extraction. For example, [29] evolve a snake to capture road networks by minimizing an energy function which specifies geometric and appearance constraints of roads. [30] employ a high-order CRF to obtain the structures of road networks.
Inspired by the outstanding performance of deep learning in computer vision since 2012, CNN methods have been introduced into road extraction from HR-RS images. Compared with traditional road extraction methods, which mostly utilize low-level handcrafted features, a CNN can automatically extract low- to high-level semantic features with consecutive convolutional operations [31], [32]. On the basis of CNNs, the FCN and its variants use encoder-decoder structures with skip connections to fuse multi-scale and multi-level features. In addition, they can output pixel-wise segmentation images with the same size as the input images, which is suitable for end-to-end road extraction tasks [8], [12]. For example, [33] refine the U-Net with residual units to improve its performance. [34] propose the D-LinkNet, which combines the LinkNet architecture with dilated convolutional layers. Other CNN methods have also been adopted in road extraction. For example, [35] build the cascaded CNN (CasNet) to address road segmentation and centerline extraction tasks. [14] create the RoadTracer to get road network graphs following an iterative search procedure guided by a CNN-based decision function. [36] introduce DeepLab to detect roads with MobileNet-v2 as the backbone. However, despite numerous studies, road extraction from HR-RS images is still far from large-scale practical application. The spectral heterogeneity in HR-RS images [6], [13] (e.g. large intra-class or small inter-class spectral variability) and the presence of occlusions (e.g. shadows and trees) often make segmented road networks unclean and noisy [13]-[15]. A more general and robust road segmentation method is still needed to improve the performance of road extraction from HR-RS images.
The GAN, a revolutionary technique in computer vision, is promising for fixing these problems [16]. In a GAN we train two different neural networks, called the generator G and the discriminator D, each adversarial to the other. GAN-based methods regard road extraction as an image-to-image translation task and work in the same way as a conditional GAN (cGAN) [17], [37]. A few works have adopted the cGAN in road extraction from HR-RS images and made some progress. For example, [15] present a two-stage framework to extract roads, in which two GANs are used to detect roads and intersections, followed by a smoothing-based optimization algorithm. [7] propose an improved GAN using the U-Net as G and suggest a simple loss function with an L2 loss and a cGAN loss. [18] create a multi-supervised GAN with two D to infer road networks with improved topology. However, these methods are too complex either in architecture (e.g. two GANs in [15]) or loss function (e.g. four loss functions in [18]). Moreover, standard GANs struggle with vanishing gradients, mode collapse, and unstable training, making them difficult to train [19], [20]. Recently, [4] introduced a WGAN-GP method to translate HR-RS images into a cleaner road network. The WGAN-GP model was proposed by [21] in 2017. It replaces the JS divergence in standard GANs with the Wasserstein distance and employs a gradient penalty to enforce a Lipschitz constraint on the loss function. The WGAN-GP model trains more stably and outperforms other methods in road extraction [4]. Our study further refines the WGAN-GP model to improve its performance in road extraction from HR-RS images in rural areas, as presented in the following sections.
A. MODEL ARCHITECTURE
The architecture of the WGAN-GP model follows pix2pix [37], in which two adversarial models (a generator G and a discriminator D) are trained simultaneously. G is given the image x in addition to a random noise z and aims to produce a realistic image G(x,z) similar to the label y, while D has to decide whether an image belongs to y or was obtained from G(x,z). The detailed architecture of the WGAN-GP model is shown in Figure 1. Since G is flexible with any segmentation model, we choose the widely used U-Net and the state-of-the-art BiSeNet as G, respectively, in this study. The U-Net is characterized by a symmetric encoder-decoder pathway with 4 down-sampling modules and 4 up-sampling modules (Figure 2a) [12]. The down-sampling module consists of repeated operations of two 3 × 3 convolutions, each followed by a batch normalization (BN), a rectified linear unit (ReLU), and a 2 × 2 max pooling operation. The up-sampling module contains a 2 × 2 up-convolution, a concatenation with the corresponding feature map from the encoder pathway at the same level, and two 3 × 3 convolutions, each followed by a BN and a ReLU. The BiSeNet is composed of a Spatial Path with a small stride to preserve spatial information, and a Context Path with a fast down-sampling strategy (e.g. ResNet 101) to obtain a sufficient receptive field (Figure 2b) [22]. To refine the output feature of each stage in the Context Path, the BiSeNet proposes an Attention Refinement Module (ARM), which employs global average pooling to capture global context and computes an attention vector to guide the feature learning. To fuse the output features of the two paths, the BiSeNet proposes a Feature Fusion Module (FFM), which employs a concatenation operation and a weight vector to re-weight the features. In order to restrict our attention to the structure in local image patches, following pix2pix we use a PatchGAN [37] discriminator with 5 down-sampling modules as D in the WGAN-GP model, which classifies whether each N × N patch in an image is real or fake and averages the classification outputs of all patches to obtain the final result (Figure 2c).
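For concreteness, a minimal PyTorch sketch of such a PatchGAN-style discriminator with 5 down-sampling modules is given below. This is our illustration, not the authors' released code: the channel widths, the use of InstanceNorm (BatchNorm is usually avoided in WGAN-GP critics because the gradient penalty is computed per sample), and the conditioning by concatenating the input image with the real or generated road map are all assumptions.

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """PatchGAN-style discriminator with 5 down-sampling modules.
    Maps an (image, road map) pair to a grid of per-patch scores,
    which the caller averages to obtain the final output."""

    def __init__(self, in_ch=4, base=64):  # 3 RGB bands + 1 road channel (assumed)
        super().__init__()
        layers, ch = [], in_ch
        for i in range(5):  # 5 stride-2 down-sampling convolutions
            out = min(base * 2 ** i, 512)
            layers += [nn.Conv2d(ch, out, 4, stride=2, padding=1),
                       nn.InstanceNorm2d(out),  # no BatchNorm in a WGAN-GP critic
                       nn.LeakyReLU(0.2, inplace=True)]
            ch = out
        layers.append(nn.Conv2d(ch, 1, 4, padding=1))  # per-patch score map
        self.net = nn.Sequential(*layers)

    def forward(self, x, road):
        # Condition on the input image via channel-wise concatenation.
        return self.net(torch.cat([x, road], dim=1))
```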
B. LOSS FUNCTION
The total loss function $L_{total}(G,D)$ of our WGAN-GP model mixes the cGAN loss with a pixel-wise loss [37], which can be expressed as follows:

$$L_{total}(G,D) = L_{cGAN}(G,D) + \beta L_{L1}(G), \quad (1)$$

where $L_{cGAN}(G,D)$ is the adversarial loss, $L_{L1}(G)$ is the pixel-wise loss, and $\beta$ is the weight balancing the two losses. $L_{L1}(G)$ is used to evaluate the pixel-wise quality of the image generated by G and encourages less blurring than the L2 loss [36]. Since roads are spatially sparse and account for only a small proportion of an image, as shown in Figure 4, the class imbalance problem cannot be ignored [39], [40]. We therefore add a spatial penalty term $\omega$ to counter the impact of the class imbalance: $\omega$ is set to 1 if the pixel is background and to $\alpha$ ($\alpha > 1$) if the pixel is road, so that there is a larger punishment if a road pixel is misclassified as background:

$$L_{L1}(G) = \mathbb{E}_{x,y,z}\left[\, \omega \left\| y - G(x,z) \right\|_1 \right], \quad (2)$$

where the expectation $\mathbb{E}_{x,y,z}$ is taken over the distributions of the image $x$, the label image $y$, and the random noise $z$.

The standard cGAN loss can be expressed as:

$$L_{cGAN}(G,D) = \mathbb{E}_{x,y}[\log D(x,y)] + \mathbb{E}_{x,z}[\log(1 - D(x, G(x,z)))], \quad (3)$$

where G tries to minimize this loss while D tries to maximize it, $\mathbb{E}_{x,y}$ denotes the expected value over all real data instances, and $\mathbb{E}_{x,z}$ denotes the expected value over all generated fake instances $G(x,z)$. However, [19] argued that the JS divergence used in standard GANs fails to provide a usable gradient when the two distributions are disjoint, leading to difficulty in training. They suggested replacing the JS divergence with the Wasserstein distance, informally defined as the minimum cost of transporting mass from one distribution to another [20]. In contrast to the JS divergence, the Wasserstein distance is continuous and differentiable almost everywhere. We modify $L_{cGAN}(G,D)$ to the WGAN loss with the Wasserstein distance as follows:

$$L_{cGAN}(G,D) = \mathbb{E}_{x,y}[D(x,y)] - \mathbb{E}_{x,z}[D(x, G(x,z))]. \quad (4)$$

In addition, realizing the WGAN loss requires enforcing a Lipschitz constraint on D, for which a gradient penalty is considered better than clipping weights [21]. We add a gradient penalty term to our final $L_{cGAN}(G,D)$ as follows:

$$L_{cGAN}(G,D) = \mathbb{E}_{x,y}[D(x,y)] - \mathbb{E}_{x,z}[D(x, G(x,z))] + \lambda\, \mathbb{E}_{\hat{x}}\left[\left(\left\| \nabla_{\hat{x}} D(x,\hat{x}) \right\|_2 - 1\right)^2\right], \quad (5)$$

where $\hat{x} = \varepsilon y + (1-\varepsilon)G(x,z)$ is sampled uniformly along straight lines between pairs of points from the distribution of the label image $y$ and the distribution of the generated image $G(x,z)$, $\lambda$ is the penalty coefficient, $\nabla_{\hat{x}} D(x,\hat{x})$ is the gradient of D with respect to $\hat{x}$, and $\mathbb{E}_{\hat{x}}$ is the expected value of the gradient penalty term.
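The weighted L1 term and the gradient penalty translate almost directly into code. The following PyTorch helpers are a minimal sketch of our own; the function names, tensor shapes, and per-sample interpolation scheme are assumptions, not the authors' implementation:

```python
import torch

def weighted_l1(fake, label, alpha=100.0):
    """Pixel-wise L1 loss with the spatial penalty omega (Equation 2):
    omega = alpha on road pixels (label == 1), 1 on background."""
    omega = torch.where(label > 0.5,
                        torch.full_like(label, alpha),
                        torch.ones_like(label))
    return (omega * (label - fake).abs()).mean()

def gradient_penalty(D, x, real, fake, lam=10.0):
    """Gradient penalty of Equation (5): penalize deviations of
    ||grad_xhat D(x, xhat)||_2 from 1 on samples xhat interpolated
    uniformly between real labels and generated road maps."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    xhat = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    score = D(x, xhat)
    # Differentiate the summed scores w.r.t. the interpolates.
    grad, = torch.autograd.grad(score.sum(), xhat, create_graph=True)
    norm = grad.flatten(1).norm(2, dim=1)
    return lam * ((norm - 1.0) ** 2).mean()
```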
C. OPTIMIZATION PROCEDURE
The training dataset, including images x and labels y, is fed to the model. The overall training objective is to optimize the total loss function $L_{total}(G,D)$ to yield the final model $G^*$:

$$G^* = \arg \min_G \max_D L_{total}(G,D). \quad (6)$$

Specifically, an alternating optimization procedure is adopted. Following the WGAN-GP formulation [21], we first optimize D with G fixed (Equation 7) and then optimize G with D fixed (Equation 8):

$$\min_D \; \mathbb{E}_{x,z}[D(x,G(x,z))] - \mathbb{E}_{x,y}[D(x,y)] + \lambda\, \mathbb{E}_{\hat{x}}\left[\left(\left\| \nabla_{\hat{x}} D(x,\hat{x}) \right\|_2 - 1\right)^2\right], \quad (7)$$

$$\min_G \; -\mathbb{E}_{x,z}[D(x,G(x,z))] + \beta L_{L1}(G). \quad (8)$$

To accelerate optimization, mini-batch Adam is used, and the training process is repeated until the training loss converges. A pseudocode for the optimization procedure is presented in Table 1.
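A condensed sketch of one alternating training step, reusing the weighted_l1 and gradient_penalty helpers above, might look as follows. This is our illustration, not the authors' code: as in pix2pix, the noise z is assumed to enter G implicitly via dropout, and performing a single D update per G update is also an assumption.

```python
def train_step(G, D, opt_G, opt_D, x, y, beta=10.0):
    """One alternating update (Equations 7 and 8)."""
    # (7) Critic update with G fixed:
    # minimize E[D(x, fake)] - E[D(x, y)] + gradient penalty.
    fake = G(x).detach()
    loss_D = D(x, fake).mean() - D(x, y).mean() + gradient_penalty(D, x, y, fake)
    opt_D.zero_grad()
    loss_D.backward()
    opt_D.step()

    # (8) Generator update with D fixed:
    # minimize -E[D(x, fake)] + beta * spatially weighted L1.
    fake = G(x)
    loss_G = -D(x, fake).mean() + beta * weighted_l1(fake, y)
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()
    return loss_D.item(), loss_G.item()
```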
D. ENSEMBLE STRATEGY
Due to the spectral heterogeneity in HR-RS images and the occlusions on roads, road extraction mistakes are inevitable even for state-of-the-art models. Creating ensembles of multiple models has proved to be an effective way of improving the accuracy of CNN methods, because models trained with different architectures or hyper-parameters may carry complementary information [40]-[42].
In this study, the ensemble strategy is carried out as follows. In the training stage, two WGAN-GP models, one with the U-Net and one with the BiSeNet as G, are trained with the same training dataset and the same hyper-parameters. We do not train more models by further tuning the hyper-parameters, since too many models would affect the inference time. In the inference stage, we use the trained models to infer the testing images and obtain two sets of segmentation outputs. Since intersecting vector objects is faster than fusing pixel values in segmentation images, we then vectorize the segmentation outputs and calculate the intersected vectors. Finally, we rasterize the intersected vectors back into segmentation images and apply the skeleton thinning method, followed by another vectorization, to yield the final road vectors. Our ensemble strategy takes advantage of the best of the different segmentation models and yields road vectors with high precision, as shown in Figure 5. The overall ensemble strategy is shown in Figure 3.
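The core of the ensemble can be illustrated in a few lines. The paper intersects vector objects for speed; the sketch below shows the equivalent raster operation instead, with scikit-image as our choice of library (an assumption, not the authors' tooling):

```python
import numpy as np
from skimage.morphology import skeletonize

def ensemble_roads(mask_unet, mask_bisenet):
    """Intersect two binary road masks (one per WGAN-GP model) and
    thin the agreement region to a one-pixel-wide skeleton ready for
    the final vectorization step."""
    both = np.logical_and(mask_unet > 0, mask_bisenet > 0)
    return skeletonize(both)
```

Keeping only pixels that both models agree on trades a little recall for precision, which matches the precision gains the ensemble reports.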
IV. EXPERIMENTAL RESULTS AND ANALYSIS
A. DATASETS
We use GF-2 images in this study. Launched in 2014, GF-2 satellite is the first civil optical remote sensing satellite in China with a spatial resolution of 0.8 m. GF-2 is an ideal HR-RS data source for road extraction because of its high resolution, wide imaging coverage, frequent revisit and high image quality [6], [38].
We selected over 30 scenes of GF-2 images in Fujian and Hainan Provinces, China, covering various types of rural areas. The scenes are all in the visible spectrum with RGB bands. Pre-processing comprised radiometric calibration, atmospheric correction, and orthorectification. The original scenes were cropped to 512 × 512 pixels, yielding 40,000 images. Data augmentation methods, including rotation and flipping, were also applied to expand the dataset. We randomly split the dataset into a training set of 36,000 images and a validation set of 4,000 images. For testing, we chose three additional typical rural scenes in Fujian and Hainan Provinces covering mountainous, plain, and sub-urban areas, each with a size of 60,000 × 60,000 pixels. Each image x has an accompanying binary label y indicating whether a pixel belongs to road (denoted as 1) or non-road (denoted as 0), as shown in Figure 4. To generate the label images, we followed the centerline-based approach, labeling road vectors manually in ArcGIS software and then rasterizing them with a line width of 5 pixels [7], [39].
To verify the effectiveness of our method, we also conduct experiments on the well-known DeepGlobe Road Extraction Challenge 2018 dataset [44]. We use only the training split of the DeepGlobe dataset, since label images for the validation and testing splits have not been published. The training split consists of a total of 6,226 HR-RS images with a size of 1024 × 1024 pixels, which we randomly divided into a training set of 5,500 images, a validation set of 500 images, and a testing set of 300 images. The spatial resolution of the dataset is 0.5 m, which is higher than that of the GF-2 images.
B. EXPERIMENTAL SETTINGS
We implement our E-WGAN-GP method with the PyTorch framework and train it using NVIDIA Titan RTX GPUs on HR-RS images and paired label images from the training dataset. The spatial penalty term α in L_L1(G) and the weight term β in L_total(G,D) are set to 100 and 10, respectively, according to our parameter-sensitivity experiment. The penalty coefficient λ is set to 10 following [21]. For optimization, we use mini-batch Adam with β1 = 0.3, β2 = 0.99, and weight decay = 5e-5. The initial learning rate is set to 0.0005 and halves every 50 epochs. The maximum number of epochs is 300 and the batch size is 20.
C. COMPARISON METHODS AND EVALUATION METRICS
Our new method is compared with four other methods: the U-Net [33], the BiSeNet, the D-LinkNet [34], and the WGAN-GP method using the BiSeNet as G.
Traditional segmentation-based metrics such as precision, recall, and IoU are unsuitable as evaluation metrics in this study, since a slight error in road width heavily penalizes them while a small gap in an inferred road does not [43]. Since our final outputs are road vectors, we instead use vector-based precision, recall, F1-score, and IoU to evaluate the performance of our method. Precision and recall show the correctness and completeness of the inferred road vectors, respectively. The F1-score considers the balance between precision and recall and reaches its best value at 1. IoU represents the intersection of the prediction and the ground truth over their union. Precision (P), recall (R), F1-score (F1), and IoU can be expressed as follows:

$$P = \frac{TP}{TP + FP}, \quad R = \frac{TP}{TP + FN}, \quad F_1 = \frac{2PR}{P + R}, \quad IoU = \frac{TP}{TP + FP + FN},$$

where TP, FN, and FP denote the true positives, false negatives, and false positives, respectively.
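Given matched TP, FP, and FN counts (e.g., obtained from buffer-based matching of inferred and labelled road vectors), the four metrics reduce to a few lines. The helper below is our illustration, with hypothetical example counts:

```python
def road_metrics(tp, fp, fn):
    """Vector-based precision, recall, F1-score, and IoU computed
    from true positive, false positive, and false negative counts."""
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    f1 = 2 * p * r / (p + r)
    iou = tp / (tp + fp + fn)
    return p, r, f1, iou

# Hypothetical example: tp=85, fp=14, fn=15
# -> P ~ 0.86, R ~ 0.85, F1 ~ 0.85, IoU ~ 0.75.
```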
D. EVALUATION ON GF-2
We use hit/miss images to show our experimental results; these superpose the extracted and labelled road vectors upon the original images to reveal the hit and miss areas (Figure 5). We choose three typical rural scenes to demonstrate the experimental results. Scene 1 represents a mountainous area where roads are distributed along valleys. Scene 2 displays a plain area with a river crossing it. Scene 3 is a sub-urban area with a huge body of water and many residences.
In the hit/miss images, yellow lines denote roads which are extracted correctly (TP), red lines denote roads which are missed (FN), and blue lines denote roads which do not exist but are extracted incorrectly (FP). The E-WGAN-GP, WGAN-GP (BiSeNet), and U-Net show many more yellow lines and fewer red lines than the other methods, suggesting they are superior in terms of R. The E-WGAN-GP and the D-LinkNet show far fewer blue lines than the other methods, indicating they outperform the others in terms of P. Accordingly, considering both R and P, the E-WGAN-GP method yields the best results in the hit/miss images. Table 2 shows a quantitative comparison of the five methods, including P, R, F1, and IoU. The E-WGAN-GP and U-Net give the highest R (0.85-0.87), and the E-WGAN-GP and D-LinkNet yield the highest P (0.86-0.90). As a result, our E-WGAN-GP makes the best trade-off between P and R, with the highest F1 (0.85) and IoU (0.73), and is more reliable for large-scale practical applications.
To better understand the errors our E-WGAN-GP makes on the GF-2 dataset, Figure 6 shows one result from Scene 2 with the paired original image and hit/miss image. Typical FNs occur in areas where roads are thin and narrow or hard to distinguish from the background due to shadows and trees (purple box). Possible ways to tackle these problems include using different road widths in dataset generation [7] and improving the post-processing of the segmentation images. Typical FPs are caused by either narrow roads (blue box) or interior roads in residential areas (orange box). One possible way to tackle this problem is to add more corresponding negative samples to the training dataset.
E. PARAMETER ANALYSIS
Since the performance of the E-WGAN-GP method is highly sensitive to the parameters of its loss function, we undertake experiments to determine the best parameters on the GF-2 dataset. The parameters involved are the spatial penalty term α in L_L1(G) and the weight term β in L_total(G,D), which are the most critical parameters during training. From Figure 7 and Table 3, we observe that adding the spatial penalty term α in L_L1(G) significantly improves performance. In addition, when we increase α and β (α from 5 to 100 and β from 1 to 10), IoU increases from 0.57 to 0.74 as well. These results suggest that the spatial penalty term α may help eliminate the class imbalance issue typical of road extraction tasks [39], [40].
However, when α and β become too large (α from 100 to 200 and β from 10 to 100), IoU decreases from 0.74 to 0.69 instead. This may be because many non-road objects are "punished" into being classified as roads by the overly large α and β. We suggest proper ranges for α and β of 50-150 and 5-50, respectively, and set α and β to 100 and 10 in this study.
F. EVALUATION ON DEEPGLOBE
To verify the effectiveness of our method, we also compare its performance on the public DeepGlobe Road Extraction Challenge 2018 dataset. The comparison method is the D-LinkNet, the champion method of the 2018 challenge. The hit/miss images of three typical scenes are shown in Figure 8. Scene 1 and Scene 2 are rural areas, and Scene 3 is an urban area. Our method yields more yellow lines and fewer red lines than the D-LinkNet, suggesting it is superior in terms of R. It is noteworthy that our method may be more suitable for rural areas, since it gives many more blue lines on the interior roads of urban areas than the D-LinkNet. The quantitative comparison, including P, R, F1, and IoU, is displayed in Table 4. Our method has higher averaged R, F1, and IoU than the D-LinkNet, demonstrating the effectiveness of our method for road extraction from HR-RS data.
V. CONCLUSION
In this study, we present an E-WGAN-GP method for road extraction from HR-RS images in rural areas. The WGAN-GP model used in our method modifies standard GANs with the Wasserstein distance and a gradient penalty. To solve the class imbalance problem specific to road extraction, we add a spatial penalty term to the loss function. Parameter experiments are undertaken to determine the best parameters of the new loss function on the GaoFen-2 dataset. We also follow an ensemble strategy, first training different WGAN-GP models with different G (U-Net and BiSeNet) and then ensembling their results to yield better road vectors. The E-WGAN-GP method outperforms the state-of-the-art methods U-Net, D-LinkNet, BiSeNet, and WGAN-GP without ensemble in terms of F1-score and IoU on both the GF-2 and DeepGlobe datasets.
Molecular Phylogeny of Endophytic Fungi from Rattan (Calamus castaneus Griff.) Spines and Their Antagonistic Activities against Plant Pathogenic Fungi
Calamus castaneus is a common rattan palm species in the tropical forests of Peninsular Malaysia and is noticeable by the yellow-based spines that cover the stems. This study aimed to determine the prevalence of fungal endophytes within C. castaneus spines and whether they inhibit the growth of fungal pathogens. Twenty-one genera with 40 species of fungal endophytes were isolated and identified from rattan palm spines. Based on molecular identification, the most common isolates recovered from the spines were Colletotrichum (n = 19) and Diaporthe spp. (n = 18), followed by Phyllosticta spp., Xylaria sp., Trichoderma spp., Helminthosporium spp., Penicillium spp., Fusarium spp., Neopestalotiopsis spp., Arthrinium sp., Cyphellophora sp., Cladosporium spp., Curvularia sp., Bionectria sp., and Acremonium spp. Non-sporulating fungi were also identified, namely Nemania primolutea, Pidoplitchkoviella terricola, Muyocopron laterale, Acrocalymma fici, Acrocalymma medicaginis, and Endomelanconiopsis endophytica. The isolation of these endophytes showed that the spines harbor endophytic fungi. Most of the fungal endophytes inhibited the growth of several plant pathogenic fungi, with 68% of the interactions resulting in mutual inhibition, producing a clear inhibition zone of <2 mm. Our findings demonstrate the potential of the fungal endophytes from C. castaneus spines as biocontrol agents.
Introduction
Endophytic fungi are ubiquitous and found in almost all plant parts, including stems, leaves, and roots, and colonize the host plants without causing any disease symptoms throughout their life cycle [1]. These microorganisms have shown the potential to enhance host resistance to pathogens and pests as well as tolerance to abiotic stress [2]. Bilal et al. (2008) [3] reported that endophytic Aspergillus fumigatus and Fusarium proliferatum produce growth regulators and promote plant growth under abiotic conditions. Some endophytic fungi have been reported to improve plant growth and reduce the severity of plant diseases; therefore, these fungi have the potential to be used in plant disease management strategies [4]. For example, fungal endophytes from cocoa (Theobroma cacao) inhibit the growth of several major pathogens of the crop [5]. Endophytic fungi may be antagonistic and inhibit the growth of other fungi, and many have been reported as potential biocontrol agents [5,6]. Biological control using endophytic fungi is an alternative method for sustainable plant disease management and contributes to environmental conservation.
Plants use several sharp structures, such as spines, thorns, and prickles, for defense. Spines are modified leaves, whereas thorns are a modification of branches, and prickles result from the outgrowth of cortical tissues in the bark [7]. Calamus castaneus Griff. is a common rattan species that grows in the Malaysian tropical rainforest and is classified in the palm family, Palmae or Arecaceae. Calamus castaneus is recognized by its yellow-based spines, which cover the stems and the middle part of the upper leaves. The spines are arranged as a single line on the stem, while at the bottom of the leaves, the spines are arranged in two parallel lines [8]. These sharp structures may harbor various types of fungi, as the presence of endophytic fungi, particularly dermatophytes, in spines, thorns, and prickles has been reported by Halpern et al. (2011) [9]. As C. castaneus is common and relatively easy to find in the forests, studying the presence of endophytic fungi in the spines of this rattan species is of interest. Novel endophytic fungal isolates that have the potential to be developed as biocontrol agents against several plant pathogenic fungi might also be recovered from spines of C. castaneus. As there is a lack of information on the fungal endophytes from spines, the objectives of this study were to determine the occurrence of endophytic fungi in the spines of C. castaneus and identify the endophytic fungi through molecular methods. The antagonistic activity of the fungal endophytes from the spines to inhibit growth of several plant pathogenic fungi was also tested using a dual culture method. Knowledge on the endophytic fungal community in spines of C. castaneus contributes to in-depth information on the occurrence of fungal endophytes in various plant parts as well as identifying potential biocontrol agents against plant pathogens.
Sample Collection and Isolation of Endophytic Fungi
The spines of C. castaneus were randomly collected from rattan trees found in three rainforests in two states of Peninsular Malaysia, including Bukit Panchor State Park, Penang. The spines were kept in an envelope and transported to the laboratory. The spines were placed in a beaker, covered with a net cloth, and placed under running tap water overnight to remove any debris, dirt, and epiphytes adhering to the surface. Thereafter, the spines were surface sterilized by soaking in 70% ethanol for 5 min, followed by 5% sodium hypochlorite (NaOCl) for 5 min. Then, the samples were washed with sterile distilled water three times for 2 min and blotted dry using sterile filter papers to remove excess water. The sterilized spines were plated onto potato dextrose agar (PDA, HiMedia Laboratory, Maharashtra, India) plates and incubated at room temperature (27 ± 1 °C) until there was visible mycelial growth from the spine tissues (Figure 1). Sixty spine samples were used for isolation. The efficiency of the surface sterilization technique was determined using an imprint method [1]. The surface sterilized spines were imprinted or dabbed on the surface of a PDA plate and the plate was incubated at room temperature. Surface sterilization is considered effective if no fungal colony grows on the imprint plate. Mycelia growing from the spine tissue were subcultured onto new PDA plates. A pure culture of each isolate was obtained using the spore suspension method and the plates were incubated at room temperature for seven days.
The fungal isolates were sorted into their respective groups or genera based on the appearance of the colonies and microscopic characteristics.
DNA Extraction and PCR Amplification
The fungal isolates were grown in potato dextrose broth and incubated at room temperature for six days. Mycelia were harvested and ground with liquid nitrogen in a sterile mortar and pestle to a fine powder. The DNeasy ® Plant Mini kit (Qiagen, Hilden, Germany) was used to extract genomic DNA, according to the manufacturer's instructions.
The internal transcribed spacer (ITS) region was used to identify all endophytic fungal isolates recovered from the spines except Xylaria. The primers used were ITS1 and ITS4 [10]. After amplification of the ITS, species identity was obtained using the Basic Local Alignment Search Tool (BLAST), and a combination of at least two genes/regions was used for further confirmation of the species (Table 1). However, for several fungal genera, analysis of the ITS region alone was not sufficient to differentiate closely related species. A 1% agarose gel (Promega, Middleton, WI, USA) in 1 × Tris-Borate-EDTA (TBE) buffer, stained with FloroSafe DNA stain (Axil Scientific, Singapore), was used to detect the PCR products. PCR products were sent to a service provider for Sanger DNA sequencing.
Molecular Identification and Phylogenetic Analysis
The DNA sequences were aligned and edited manually using Molecular Evolutionary Genetics Analysis version 7 (MEGA7) [18]. Forward and reverse sequences were aligned with ClustalW using pairwise alignments. The aligned forward and reverse sequences were edited when necessary to form a consensus sequence. For species identity, a BLAST search against the GenBank database was used to analyze the number of bases and determine the maximum identity of the consensus sequences.
A phylogenetic analysis was also conducted, particularly for species that are known to belong to a species complex or for isolates whose ITS sequences could not be used to confidently identify them to the species level. Multiple sequence alignments were generated and used to construct phylogenetic trees based on combined sequences. A maximum likelihood (ML) tree was constructed with 1000 bootstrap replicates. The heuristic method used in ML was the nearest neighbor interchange (NNI), and the initial tree for ML was generated automatically. The best model for the ML tree was determined from a model search with five discrete gamma categories; the results showed that the Kimura 2-parameter model was the best model. Missing data or gaps were treated by complete deletion.
Antagonistic Activity
The ability of the fungal endophytes to inhibit the mycelial growth of several plant pathogenic fungi was determined with a dual culture method using PDA. Several endophytic fungi from C. castaneus spines were selected to assess their antagonistic activity against several plant pathogenic fungi. The endophytic fungi were chosen based on fungal genera or species that have been reported as antagonists against plant pathogens, such as Xylaria cubensis, Penicillium indicum, Penicillium oxalicum, Trichoderma harzianum, and Trichoderma koningiopsis. Endophytic fungal species that have not been reported as antagonists were also tested, namely Endomelanconiopsis endophytica, Neopestalotiopsis saprophytica, Colletotrichum endophytica, Colletotrichum siamense, Colletotrichum boninense, Diaporthe arengae, Diaporthe tectonae, Diaporthe cf. nobilis, and Diaporthe cf. heveae.
Selected plant pathogenic fungi were obtained from the culture collection at the Plant Pathology Laboratory, School of Biological Sciences, Universiti Sains Malaysia, Penang, Malaysia. The pathogenic fungi included two anthracnose chili pathogens, C. truncatum and C. scovillei; two pathogens that cause dragon fruit stem rot, Fusarium proliferatum and F. fujikuroi; and F. solani and F. oxysporum, which are associated with crown disease in oil palm. Four pathogens associated with mango diseases were also included: Lasiodiplodia theobromae and Pestalotiopsis mangiferae, which are the causal pathogens of mango leaf spot, and L. pseudotheobromae and D. pascoei, which cause mango stem-end rot.
The combinations of endophytic fungi and plant pathogenic fungi tested in the dual culture test are shown in Table 2. A control plate harbored only the plant pathogenic fungus without the endophyte. Mycelial plugs (5 mm) of the pathogen and endophyte were cultured 6 cm apart. The plates, with three replications, were incubated at room temperature for seven days. The experiment was repeated twice.
After seven days, the percentage of pathogen growth inhibition (PGI) was calculated according to the method described by Skidmore and Dickinson (1976) [19]: PGI (%) = ((R1 − R2)/R1) × 100, where R1 is the radial growth of the plant pathogenic fungus on the control plate and R2 is its radial growth on the dual culture plate. R1 was measured from the point of inoculation to the pathogen colony margin on the control plate, and R2 was measured from the point of inoculation to the colony margin on the dual culture plate in the direction of the endophyte.
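For clarity, the reconstructed formula above can be implemented in a few lines of Python; the function and variable names are ours, chosen for illustration.

def percent_growth_inhibition(r1_control_mm, r2_dual_mm):
    # PGI (%) = ((R1 - R2) / R1) * 100 (Skidmore and Dickinson, 1976).
    if r1_control_mm <= 0:
        raise ValueError("control radius must be positive")
    return (r1_control_mm - r2_dual_mm) / r1_control_mm * 100.0

# Example: control radius 40 mm, dual-culture radius 12 mm -> 70.0 % inhibition.
print(percent_growth_inhibition(40.0, 12.0))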
Statistical analysis of the PGI values was performed using ANOVA in SPSS statistical software version 24. Interactions between plant pathogens and endophytic fungi were classified into types A to E, according to the interactions described by Skidmore and Dickinson (1976) [19]. Type A interactions occurred when the pathogens and endophytic fungi displayed intermingling growth; type B interactions represented the overgrowth of pathogens by endophytic fungi; type C interactions represented the overgrowth of endophytic fungi by pathogens; type D interactions represented mutual inhibition with a clear inhibition zone at a small distance (<2 mm); and type E interactions represented mutual inhibition with a clear inhibition zone at a greater distance (>2 mm).
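One way to make this five-way classification concrete is the small decision function below; the boolean observation flags are hypothetical inputs we introduce for illustration, not variables from the study.

def interaction_type(intermingling, endophyte_overgrows, pathogen_overgrows, zone_mm):
    # Encode the Skidmore and Dickinson (1976) interaction types A-E.
    if intermingling:
        return "A"  # intermingling growth
    if endophyte_overgrows:
        return "B"  # endophyte overgrows the pathogen
    if pathogen_overgrows:
        return "C"  # pathogen overgrows the endophyte
    if zone_mm < 2.0:
        return "D"  # mutual inhibition, clear zone < 2 mm
    return "E"      # mutual inhibition, clear zone > 2 mm

# Example: mutual inhibition with a 1.5 mm clear zone -> type "D".
print(interaction_type(False, False, False, 1.5))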
Molecular Identification
A total of 108 isolates of endophytic fungi comprising 21 genera with 40 species were recovered from the C. castaneus spines (Table 3). Fungi isolated from the spines were confirmed as endophytes as no fungal growth was observed on the imprinted plates. The imprint method was used as an indication that the epiphytes from the surface of the spines had been removed. A successful and correct surface sterilization procedure removes epiphytes from the surface of the spines, resulting in no fungal growth, and must be used in all studies concerning endophytes [20,21]. Table 3 lists the molecular identification and GenBank accession numbers of the endophytic fungi isolated from C. castaneus spines (note: Colletotrichum endophytica is synonymous with Colletotrichum endophyticum).
Endophytic fungal species recovered from C. castaneus spines and identified using ITS and additional markers are shown in Table 3. Most of the isolates were successfully identified to the species level except for three isolates of Diaporthe. The most common isolates recovered from the spines were Colletotrichum spp. (n = 19) and Diaporthe spp. (n = 18). Six species of Colletotrichum were identified using ITS and GAPDH sequences, namely C. horii (n = 4), C. siamense (n = 3), C. fructicola (n = 2), C. cliviae (n = 2), C. endophytica (n = 7), and C. boninense (n = 1) (Table 3). All the species identified are members of the C. gloeosporioides species complex. In addition to ITS, the GAPDH gene was included as an additional marker as it is among the most effective secondary markers for distinguishing species in the genus Colletotrichum. Moreover, GAPDH is the easiest gene to amplify and sequence [22,23]. The phylogenetic analysis showed that isolates from the same species were grouped in the same clade as their epitype strains (Figure 2), which confirmed the identity of the endophytic Colletotrichum species obtained from C. castaneus spines.
Antagonistic Activity
In general, most of the endophytic fungi from C. castaneus spines inhibited mycelial growth of the plant pathogenic fungi tested (Table 4). Only three species of Diaporthe, D. cf. nobilis, D. cf. heveae, and D. tectonae, as well as two isolates of X. cubensis did not show antagonistic activity against L. theobromae and L. pseudotheobromae (Table 4). Both pathogens overgrew the endophytic fungi as L. theobromae and L. pseudotheobromae are fast growing fungi able to compete for space and nutrients. Based on the observation of the dual culture plates, the most common interactions between the fungal endophytes and plant pathogenic fungi were type D interaction, which is mutual inhibition with a clear inhibition zone (<2 mm).
Both endophytic T. harzianum and T. koningiopsis overgrew the pathogens by the 7th day of incubation. Endomelanconiopsis endophytica and D. tectonae moderately inhibited all tested plant pathogens (Figure 11). The results showed that the pathogens were lysed and subsequently killed, as no growth was observed when hyphae from the contact point of the two fungi in the dual culture test were transferred onto PDA. A high percentage of growth inhibition was shown by the endophytic T. harzianum and T. koningiopsis, which inhibited the mycelial growth of all tested plant pathogens (Table 4).
Discussion
A total of 108 isolates of endophytic fungi comprising 21 genera with 40 species were recovered from C. castaneus spines. The results showed that the endophytic fungi residing in the spines are mostly Ascomycetes, class Sordariomycetes, order Glomerellales (Colletotrichum), Diaporthales (Diaporthe), Xylariales (Xylaria), and Hypocreales (Trichoderma, Fusarium), as well as several other classes and orders. The present study demonstrated that the endophytic fungi residing in C. castaneus spines may be considered cosmopolitan fungal isolates.
The endophytic fungi from C. castaneus spines were identified using ITS and other suitable markers. Despite the advantages of the ITS region for fungal identification, the region may not be useful to distinguish species in a species complex or closely related species, such as Colletotrichum and Diaporthe. This may be due to lower sequence variation in many closely related species, the presence of sequence heterogeneity among the ITS copies, and the inability to amplify the ITS region in some groups of fungi, resulting in poor sequencing success [24,25]. Hence, several genes were also used to accurately identify the fungal isolates and for phylogenetic analysis. The gene chosen depends on the fungal genus; the TEF-1α, β-tubulin, GAPDH, and ACT genes were used in this study. Introns in protein-coding genes are highly variable, which makes them useful for species identification and phylogenetic analyses. Several of these genes are considered secondary barcode markers with adequate intra- and interspecies variation, often used as part of identification using multiple gene phylogeny [25].
The endophytic fungal species from genera Colletotrichum, Trichoderma, Penicillium, Phomopsis, Phyllosticta, and Xylaria are among common fast-growing culturable fungi, which might be one of the reasons these genera were mostly recovered as endophytic fungi from the spines. Moreover, the methods used in this study were culture-dependent methods of which only culturable isolates were recovered from the spines. In culture-dependent methods, several growth parameters including temperature, light, nutrient, and aeration contribute to the growth of the endophytic fungi [31]. By using culture-dependent methods, fast-growing fungal isolates commonly inhibit the growth of slow-growing isolates and thus many fast-growing fungi were recovered [32]. Unculturable endophytic fungi could not grow or were difficult to grow on culture media. Thus, unculturable endophytic fungi are commonly analyzed using culture-independent methods such as denaturing gradient gel electrophoresis and high-throughput sequencing methods [33,34]. These methods can directly amplify endophytic fungi residing in the plant tissues.
Colletotrichum spp. (n = 19) and Diaporthe spp. (n = 18) were the most common endophytes isolated from C. castaneus spines. Species from both genera have been reported as endophytes in the roots, leaves, and stem of several plants, including mangrove tree leaves (Acanthus ebracteatus and Phoenix paludosa) [35], leaves of Sapindus saponaria [36], and twigs of a woody tree (Acer truncatum) [37]. Therefore, the endophytic fungal species from both genera isolated from C. castaneus spines are similar to those previously reported from other types of plants that harbor fungal endophytes [35][36][37].
Although numerous endophytic species from C. castaneus spines are common endophytes, several species have not previously been reported as endophytes from any plant. These endophytes are P. carochlae, P. indicum, Arthrinium urticae, C. guyanensis, A. hennebertii, and P. terricola. Among these endophytic fungi, P. terricola is a rare species and has only been reported in the rhizosphere of Quercus rubra in Ukraine [38] and from earthworm casts in Domica Cave, Slovakia [39].
Dermatophytes of animals and humans have been reported from spines, thorns, and prickles [40]. Dermatophytes causing subcutaneous mycosis and infection may occur by inoculation of the dermatophytes into subcutaneous tissues by penetration of spines and thorns [41,42]. Among the dermatophytes from plants, Fonsecaea pedrosoi was reported in thorns of Mimosa pudica isolated from the site of infection [43]. Cladophialophora carrionii has also been isolated from plants. Another dermatophyte, Sporothrix schenckii, is commonly transmitted through a prick from roses [44,45]. However, in the present study, dermatophytes were not recovered from C. castaneus spines, which might be due to different host plants, environmental conditions, and geographical location. These factors may contribute to the endophytic fungi occurrence and diversity in the host plant [46,47].
An antagonistic activity assay was conducted to assess the ability of the fungal endophytes from C. castaneus spines to act as antagonists inhibiting the growth of plant pathogens. Among the endophytic fungi recovered from C. castaneus spines, T. harzianum and T. koningiopsis strongly inhibited the growth of all tested plant pathogens. The other endophytic fungi tested produced low to moderate inhibition. The results of the present study indicated that endophytic T. harzianum and T. koningiopsis showed strong antagonistic effects against all the pathogens tested and successfully inhibited their growth. Trichoderma harzianum has been reported to inhibit growth of C. truncatum, the causal pathogen of strawberry anthracnose [48], and mango anthracnose [49]. So far, there are no reports on the antagonistic activity of T. koningiopsis against anthracnose pathogens, but this species has strong antagonistic activity against F. oxysporum, Rhizoctonia solani, and Botrytis cinerea infecting tomato and cucumber seedlings [50]. Trichoderma koningiopsis has also been reported as a strong antagonistic fungus, showing 85% growth inhibition of Calonectria pseudonaviculata, which causes blight of boxwood [51].
Several reports are available on the antagonistic activity of T. harzianum against plant pathogenic Fusarium spp. Trichoderma harzianum inhibited growth of F. proliferatum, causing basal rot of onion bulb [52] and stalk rot of maize [53] as well as inhibiting growth of F. solani, causal pathogen of root rot of olive tree [54]. As for T. koningiopsis, this fungus exhibited strong antagonistic activity against F. proliferatum, causal pathogen of soybean damping-off [55].
As effective antagonistic fungi, Trichoderma spp. have several mechanisms of inhibition, which include competition for space and nutrients, antibiosis by secretion of antifungal compounds, mycoparasitism, and induced resistance [56]. These mechanisms may operate in T. harzianum and T. koningiopsis, as both grew faster than the pathogens.
Endomelanconiopsis endophytica and D. tectonae may also be considered as effective antagonistic fungi. Both endophytic fungi moderately inhibited the mycelial growth of all tested plant pathogens except for L. theobromae and L. pseudotheobromae, whereby both pathogens grew faster than the endophytes. The inhibition mechanisms might be similar to that of Trichoderma spp., in which the mycelial growth of the tested pathogens was inhibited by competition, antibiosis, or mycoparasitism.
Antagonistic activity of E. endophytica against other plant pathogenic fungi has not been reported, but in a study by Ferreira et al. (2015) [26], the extract of this endophytic fungus displayed trypanocidal activity against amastigote forms of Trypanosoma cruzi. Endophytic D. tectonae moderately inhibited growth of Phytophthora palmivora, the pathogen of cocoa black pod [57].
Endophytic fungi residing in the spines exhibited antagonistic activity, indicating their ability to produce bioactive compounds. These bioactive compounds may be involved in defense mechanisms against pathogen infections, chemical defense [6,58], and adaption and survival in the host plant [26].
In conclusion, a total of 108 isolates of endophytic fungi were isolated from C. castaneus spines and 40 species were identified. The results demonstrate that C. castaneus spines harbor diverse groups of endophytic fungi with antagonistic activity against several plant pathogenic fungi. Among the endophytic fungi, T. harzianum and T. koningiopsis inhibited all plant pathogens tested with a high percentage of inhibition. The antagonistic activity against plant pathogenic fungi indicates that these endophytic fungi have the potential to be developed as biocontrol agents. Therefore, further studies should be performed to detect and identify the bioactive compounds produced by the endophytic fungi and to understand the mechanisms the endophytes use to inhibit pathogen growth. To the best of our knowledge, the present study is the first to determine the occurrence and diversity of filamentous fungi in spines of rattan palm.
Improvement of dill freeze-drying technology
In this work, we studied the use of IR pretreatment before freeze drying of dill. Drying times and the rehydration behavior of samples freeze-dried with and without pretreatment are compared. Experimental data show that IR pretreatment shortens the drying process by about 4-5 hours. The main reason for the shorter freeze-drying period is the reduction in moisture content of 25-30% achieved during pretreatment. The data confirm a reduction in the drying period for freeze-dried dill, and rehydration data for the dried samples are also provided.
Introduction
Drying is the oldest food preservation method and one of the most common processes used to improve food stability, as it reduces the water activity of the food, reduces microbiological activity and minimizes physical and chemical changes during storage [1,2]. In addition to preserving nutrients, vitamins and minerals, this method significantly minimizes transportation and storage costs [3,4,5].
The development of new technologies for obtaining food products that satisfy the needs of the body is inextricably linked with the development of the microbiological industry, which currently produces a wide range of products that have found application in food technology, as well as in agriculture, medicine, and consumer services.
Many modern food technologies cannot do without the use of microbiological synthesis products. Therefore, the development of the processes and devices involved in production requires their deep study, theoretical justification, and practical analysis. This is especially true of the final stage of production, drying, where the product is exposed to the most dangerous influences. Drying is a very energy-consuming, complex, physicochemical and technological process in which the phenomena of heat and mass transfer are interconnected. When drying products of microbiological synthesis, the main aim is to preserve and improve the properties that determine their further use: for fodder yeast, preservation of biological value; for enzyme preparations, their activity; for microbiological plant protection agents, the preservation and survival of the microorganisms; etc.
The use of sublimation dehydration for many thermolabile biological materials is the only acceptable method of obtaining them in dry form, since in this case, irreversible changes in the product are minimal, its regeneration is easy when moistened, the original properties of the material being dried are preserved, such as smell, taste, color, food and biological value. Freeze drying as a technological process involves several successive stages, including material preparation, freezing, loading into a freeze drying chamber, freeze drying as such, unloading and packaging of the dried product.
Currently, an urgent problem in Uzbekistan is the processing of agricultural products with modern technologies. In Uzbekistan, the cultivation of dill and certain types of vegetables is increasing due to potential demand. In 2019, Uzbekistan for the first time overtook all competitors and became the largest supplier of fresh herbs to the Russian market.
At the end of 2019, 12.9 thousand tons of fresh herbs were already sent to Russia, which is 19% more than in the same period last year.
For many years in a row, Iran has been the leader in the supply of fresh herbs to Russia, but this year Iranian exports have dropped by 14% at once. The share of Uzbekistan reached 40% from January to August 2019 inclusive.
At the same time, problems arise during the storage and processing of large volumes of dill due to the lack of logistics centers; even exported processed dill can be of poor quality because the chemical composition of the product is degraded during heat treatment.
Based on this, our main goal is to dry dill by freeze drying, whose technological parameters determine product quality. However, freeze drying is the most expensive drying method [6,7]. We therefore used IR pretreatment prior to drying to promote moisture release and to shorten the freeze-drying period.
Objects and research methods
The temperature of IR pretreatment of dill seeds, which depends on the field strength in the material and the duration of treatment, is the main factor that has a stimulating effect on the seeds. Laboratory studies of the effect of IR pretreatment modes were carried out on dill seeds of the Kibray variety.
For the experiment, dill seeds were moistened with water for 10 min at a water temperature of 23 °C. After moistening, 1000 g of dill seeds were weighed on an HB-600 electronic balance for placement in the IR field (preliminary treatment). Pretreatment in the IR range was carried out for 60 s, 120 s, and 180 s under four IR lamps of 500 W each. The pretreatment pallet area was 0.25 m², giving a heat flux during IR pretreatment of 8 kW/m². After the pretreatment, the dill weights were measured again. The resulting samples were placed in the freezer for 6 hours, and the frozen dill was then placed in a freeze dryer.
The duration of freeze drying, the rehydration of the dried dill, and the change in its ascorbic acid content were analyzed.
In the course of the experiments, the input parameters were measured, with the pretreatment time recorded using a stopwatch. The results of drying and rehydration of the dill seed samples were obtained [8].
Results and Discussion
The study took place in two stages. In the first stage, the prepared samples (after washing) were treated under IR rays for 1 minute or 2 minutes. Each sample was weighed after pretreatment and checked for organoleptic parameters (color, smell, consistency). Dill samples lose about 25-28% of their original weight after 1 min of IR pretreatment. After IR pretreatment for 2 minutes, the proportion of yellowish and brown parts of the leaf began to grow in the dill samples, and foreign tastes and odors appeared. Therefore, only the samples treated with IR for 1 minute were selected for the further experiment. The IR-treated samples were placed in a freezer for 6-8 hours. After the freezing process, the samples were placed in a freeze dryer and the change in weight was checked every three hours. The weight of the dried samples was monitored until a final moisture content of 14% of the dry sample was reached; the total process duration was 20-22 hours [9,10,11].
Moisture contents
Interestingly, dill samples lose about 20 to 28% of their original weight (mostly moisture) after 1 minute of pretreatment. This directly affects the freeze-drying time.
As can be seen from the results obtained (Fig. 1), the use of IR pretreatment before freeze drying reduces the drying time by about 5-6 hours compared to the traditional process (without pretreatment).
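The weight-loss and moisture bookkeeping behind these figures reduces to two standard formulas, sketched below in Python; the sample masses in the example are invented for illustration, and only the formulas themselves are standard.

def weight_loss_percent(m_initial_g, m_after_g):
    # Percentage of original mass lost during IR pretreatment.
    return (m_initial_g - m_after_g) / m_initial_g * 100.0

def moisture_wet_basis_percent(m_wet_g, m_dry_g):
    # Wet-basis moisture content, the usual endpoint metric in drying.
    return (m_wet_g - m_dry_g) / m_wet_g * 100.0

# Example: a 1000 g batch weighing 740 g after 1 min of IR pretreatment
# has lost 26% of its mass, consistent with the 20-28% range reported above.
print(weight_loss_percent(1000.0, 740.0))  # 26.0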
In order to determine the quality of the dried dill, we checked its ascorbic acid content, since loss of vitamin C is possible during the pretreatment, and we also examined the rehydration of the dried samples.
Ascorbic acid content
The dynamics of changes in the quantitative indicators of the content of ascorbic acid in dry matter, the curves of which are shown in Figure 2, make it possible to determine the loss of vitamin C in the process of short-term IR pretreatment of dill samples.
These graphs show that there is a slight loss of vitamin C in dill samples after using the treatment in the infrared electromagnetic field.
Rehydration
The rehydration factor was considered one of the important quality parameters for dried samples. The ratio of the rehydration coefficient of samples dried with IR pretreatment and without pretreatment is shown in Fig. 3.
Recovery during rehydration depends on the drying conditions and final moisture content, as shown in Fig. 3. The maximum water absorption capacity was observed for samples dried without pretreatment, followed closely by samples dried with IR pretreatment.
As shown, samples of dill dried without pretreatment had a higher rehydration capacity, but the difference between the samples dried with and without IR pretreatment was not large.
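The rehydration coefficient itself is not defined explicitly in the text; a minimal sketch, assuming the common mass-ratio definition (rehydrated mass over dried mass), is given below with invented example values.

def rehydration_coefficient(m_rehydrated_g, m_dried_g):
    # Assumed definition: mass after soaking divided by dried mass.
    return m_rehydrated_g / m_dried_g

# Example: a 1.0 g dried sample weighing 4.2 g after soaking -> coefficient 4.2.
print(rehydration_coefficient(4.2, 1.0))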
Conclusions
Drying with IR pretreatment proceeds more intensively than drying without pretreatment. The drying speed is increased by a factor of 1.3 compared to traditional drying without pretreatment, with corresponding savings in energy resources.
The results obtained show that short-term IR pretreatment of dill before drying reduces drying time and energy costs. The results are of practical importance, since freeze drying is currently the highest-quality drying technology, preserving almost 98% of the natural organoleptic characteristics and chemical composition. The experimental data indicate that IR pretreatment is suitable for use in freeze drying.
Surgery for localized pulmonary mycotic infections in patients with hematopoietic disorder
Background Surgical resection is considered to be the most effective treatment for localized pulmonary mycotic infections. However, it is also a particularly challenging procedure because it is associated with considerable mortality and morbidity. Furthermore, hematopoietic disorders usually cause immunosuppression, anemia, and coagulopathy, which are definite risk factors for surgery. The purpose of this study is to evaluate the surgical outcomes of pulmonary mycotic infections in patients with hematopoietic disorders. Methods Between 2011 and 2013, 23 patients underwent surgical treatment for pulmonary mycotic infections at a single institution. The patients were divided into two groups: Group A (hematopoietic disorder patients, n = 9) and Group B (n = 14). We retrospectively reviewed medical and radiologic data. Results The complex type was more frequent in group A (66.6 %) than in group B (35.7 %). There was no postoperative mortality. However, morbidity was 22.2 % in group A (2 cases of incomplete lung expansion) and 35.6 % in group B (1 prolonged air leak, 3 bleeding events, 1 bronchopleural fistula). The difference in morbidity between the groups was not statistically significant (p = 0.657), nor were the differences in duration of chest tube drainage and postoperative hospital stay. Hematopoietic disorder was not a risk factor for morbidity or mortality. Conclusions Although patients with hematopoietic disorders have many surgical risk factors, surgical treatment of pulmonary mycotic infections produces very acceptable outcomes in selected cases.
Background
Hematopoietic disorder is a common designation for hematopoiesis-related diseases such as aplastic anemia, myelodysplastic syndrome, and leukemia [1]. Because they often suffer from immunosuppression, anemia, and/or coagulopathy, patients with these disorders are considered to be at high risk for operation [2]. The immunosuppression of these patients can be caused by the underlying disease itself, but in many cases it is intensified after chemotherapy or hematopoietic stem cell transplantation (HSCT). Fungal infection is more likely to occur if neutropenia continues after chemotherapy or HSCT [3]. Fungal disease occurring in this type of patient is often invasive. Even though it can be initially diagnosed as localized disease, it can also be detected as extended disease from the beginning, and when untreated, mortality as high as 100 % due to dissemination has been reported [3]. Despite the use of antifungal agents in the treatment of all patients with invasive fungal infection, a high rate of mortality has nevertheless been reported, ranging from 20 to 40 % [4,5], and it is recognized that it is difficult in practice to achieve complete clearance of fungal infection by antifungal agent therapy alone [4,6]. Therefore, the most effective treatment is considered to be surgical clearance of the fungus before HSCT, or when persistent immunosuppression is expected. However, a decision as to whether to perform surgery can be difficult due to the high surgical risk.
Various fungi can cause pulmonary mycotic infections, of which Aspergillus infection is the most frequent [7].
Of the many different forms of Aspergillus infection, allergic bronchopulmonary aspergillosis (ABPA), aspergilloma, and invasive pulmonary aspergillosis (IPA) are the most commonly found [7]. Among these, aspergilloma mainly manifests as a localized disease occurring in underlying lung disease. IPA occurs mostly in immunocompromised patients in the form of a localized or extended disease. In the case of aspergilloma, many reports have confirmed the effectiveness of operative treatment, though the operative risk is somewhat high since in many cases the underlying lung disease is severe [8][9][10]. However, in the case of IPA, surgical treatment may be difficult to perform in many cases due to the high rate of parenchymal invasion. Despite reports of surgical treatment in cases where it has occurred as a localized disease, the paucity of such cases has given rise to considerable uncertainty as to the effectiveness of surgical treatment [11][12][13][14].
The purpose of this study was to evaluate the possibility of surgical treatment for hematopoietic disorder patients by analyzing the differences in operative risk and the outcome of surgical treatments between the general localized pulmonary mycotic infection and the localized pulmonary mycotic infection occurring in hematopoietic disorder patients with high operative risk.
Methods
A retrospective chart review was conducted on 23 patients who underwent surgical treatment for pulmonary mycotic infections at Seoul St Mary's Hospital between 2011 and 2013. In order to evaluate the surgical outcomes of hematopoietic disorder patients, the patients were divided into two groups; Group A (hematopoietic disorder patients, n = 9) and Group B (other patients, n = 14), and the characteristics and surgical outcomes of the two groups were compared. The underlying diseases of Group A were: acute myeloid leukemia (AML) (n = 3), acute lymphoblastic leukemia (ALL) (n = 3), myelodysplastic syndrome (MDS) (n = 1), and aplastic anemia (n = 2). Those of Group B were: bronchiectasis (n = 4), old tuberculosis (n = 4), end stage renal disease (n = 1), liver cirrhosis (n = 1) and non-specific underlying disease (n = 4).
Open thoracotomy or video-assisted thoracoscopic surgery (VATS) was performed, with pulmonary resection (wedge resection, segmentectomy, or lobectomy) carried out as a curative procedure depending on the extent of the disease.
In the pre-surgery diagnosis, chest computed tomography (CT) scan played the most important role, and in cases where the 'halo sign' or 'air-crescent sign' was detected, early surgical resection treatment was performed when there was a strong suspicion of aspergilloma. However, in cases when atypical pneumonic consolidation was detected, surgical treatment was only performed on patients if complete surgical resection became feasible after antifungal agent therapy and in correlation with the clinical information.
Through histologic examination after surgery, mycotic infection was confirmed in all cases, there being 21 cases of aspergillosis and 2 cases of mucormycosis. In cases of Aspergillus infection, aspergilloma was diagnosed separately from IPA by confirming bronchopulmonary invasion, vessel invasion, or pulmonary infarction in the pathologic findings.
Aspergilloma occurred often in the form of a fungus ball and was categorized into simple and complex types by the shape of its lesion. Aspergilloma was classified as the simple type in cases where the surrounding lung was comparatively normal and the capsule was thin. It was classified as the complex type if the lung surrounding the lesion was diseased or the capsule was thick [15]. However, cases of mucormycosis and IPA were not classified into the types previously referred to in existing studies. In this study, the types of Group B patients (n = 14, all with aspergilloma) were classified in the same way by applying the pertinent type definition to the shape of the lesion as shown on the chest CT image. Out of the 9 patients in Group A, there were 1, 2, and 6 patients who were pathologically diagnosed as aspergilloma, mucormycosis, and IPA respectively. A patient with aspergilloma was classified as a simple type as per the CT image, and 2 patients with mucormycosis were also classified as simple types since both of them showed a single nodule with the lesion's surrounding lung parenchyma showing a normal shape on CT image. Six patients with IPA were classified as complex types since their parenchymal lesion was accompanied by infiltration and there were many similarities to the complex type as per the CT image. Thereafter the types were matched between the two groups ( Fig. 1).
Follow-up data were obtained by review of the outpatient charts. Chi-square or t tests were used for the comparison of preoperative, intraoperative, and postoperative factors of the two groups, and a value of P < 0.05 was considered statistically significant. Operative mortality was defined as in-hospital mortality or those cases in which the patient died within 30 days following surgery. The relapse rate after operation was counted and the all-cause mortality rate was evaluated. Logistic regression analysis was used to evaluate the risk factors for postoperative complications. The study was approved by the Institutional Review Board of Seoul St. Mary's Hospital (The Catholic University of Korea).
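The authors performed these tests in SPSS; the hedged Python sketch below shows equivalent scipy/statsmodels calls on made-up data, purely to illustrate the analysis pipeline (t test, chi-square on a 2x2 type-by-group table, and a univariable logistic regression yielding odds ratios).

import numpy as np
from scipy import stats
import statsmodels.api as sm

# t test on a continuous variable (e.g. preoperative hemoglobin, g/dl);
# the values below are invented, not the study's data.
group_a = np.array([9.5, 10.1, 8.7, 11.0, 9.9, 9.2, 10.4, 9.0, 10.3])
group_b = np.array([12.5, 11.8, 13.0, 12.1, 10.9, 12.7, 11.5,
                    12.9, 12.0, 13.3, 11.2, 12.4, 12.6, 11.9])
t_stat, p_t = stats.ttest_ind(group_a, group_b, equal_var=False)

# chi-square on the simple/complex type distribution by group (counts as
# reported in the Results).
table = np.array([[3, 6],   # Group A: simple, complex
                  [9, 5]])  # Group B: simple, complex
chi2, p_chi, dof, expected = stats.chi2_contingency(table)

# univariable logistic regression: complication (0/1) on one predictor;
# exponentiated coefficients give odds ratios of the kind reported in Table 4.
rng = np.random.default_rng(0)
complication = rng.integers(0, 2, size=23)
predictor = sm.add_constant(rng.normal(size=23))
fit = sm.Logit(complication, predictor).fit(disp=0)
odds_ratios = np.exp(fit.params)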
Results
There were 9 patients in Group A and 14 patients in Group B, making a total of 23 patients. The mean age of Group A was 34.7 years (range 14-63) and of Group B, 49.1 years (range 20-69); the mean age of Group A was lower than that of Group B (p = 0.043). Females outnumbered males in Group A, with 6 female patients (66.6 %) and 3 male patients (33.4 %), while in Group B the numbers of males and females were the same, with 7 patients (50 %) of each sex. Regarding preoperative symptoms, in Group A fever was the most common with 6 instances (66.6 %), followed by 1 instance each (11.1 %) of hemoptysis, cough, and subclinical presentation. In Group B, hemoptysis was the most frequent symptom with 7 instances (50 %), followed by 5 subclinical instances (35.7 %) and 2 instances (14.3 %) of cough. Even though the white blood cell (WBC) counts and absolute neutrophil counts (ANCs) of the two groups in the preoperative complete blood cell count (CBC) did not show significant differences, both the hemoglobin and platelet counts of Group A were significantly lower (9.8 (±1.4) g/dl vs 12.2 (±2.0) g/dl, p = 0.006; 112.2 (±72.2) × 10⁹/L vs 201.5 (±78.8) × 10⁹/L, p = 0.012). In pulmonary function testing, there were no differences between the two groups in forced expiratory volume in 1 s (FEV1) and forced vital capacity (FVC). In the type classification by imaging findings, there were 3 patients (33.3 %) of simple type and 6 patients (66.6 %) of complex type in Group A, and 9 patients (64.3 %) of simple type and 5 patients (35.7 %) of complex type in Group B. Even though the complex type ratio was higher in Group A, the difference was not statistically significant (p = 0.147) (Table 1).
Before operation, all patients in Group A were treated with antifungal agents, but Group B patients were not.
Antifungal agents used in Group A were amphotericin B for 4 patients, IV voriconazole for 2 patients, and oral antifungal agents (fluconazole, itraconazole, posaconazole) for 3 patients, with an average treatment period of 44.3 days (16-147 days). Six patients (66.7 %) showed a partial response, such as reduction of infiltration or size in the surrounding lung (Fig. 2). The surgical procedure differed depending on the extent of the disease, and the operation was performed for complete resection of the localized lesion. In Group A, 2 procedures were performed on 3 of the 9 patients, making a total of 12 procedures in all. In Group B, 2 procedures were performed on 4 of the 14 patients, making a total of 18 procedures in all. In both groups, lung wedge resection was the most frequently performed procedure, with segmentectomy and lobectomy next in frequency. In cases of simple type, the pulmonary lesions were easy to resect, except for 1 case of enucleation performed on a right upper lobe (RUL) nodule close to the RUL bronchus in Group B. However, in cases of complex type, including invasive aspergillosis, pulmonary resections were difficult due to severe adhesion, fragility of the lung parenchyma, and unresectable inflammatory lymph nodes. In some cases of complex type, other procedures were needed: 1 case of myocardial abscess drainage performed along with lung wedge resection in Group A, and 1 case of cavernostomy performed on a fungus ball with chronic pleural empyema in Group B. The spectrum of surgical procedures did not differ significantly between the two groups (p = 0.500) (Table 2).
There were no statistically significant differences between the two groups in the duration of operation or the volume of blood loss during the operation, nor in the average duration of chest tube drainage (3.2 (±1.8) days vs 5.0 (±3.2) days, p = 0.104) or length of hospital stay after operation (9.2 (±5.0) days vs 11.9 (±11.1) days, p = 0.512). There was no postoperative mortality. Postoperative complications occurred in 7 patients (30.4 %) in total. In Group A, complications occurred in 2 patients (22.2 %), in both cases incomplete re-expansion of the lung, with both cases recovering within a few months after the operation. In Group B, complications occurred in 5 patients (35.6 %): 1 patient with 8 days of prolonged air leak; 2 patients with postoperative bleeding, with pleural blood drainage over 1000 ml within 24 h after operation, who recovered after transfusion without reoperation; 1 patient with delayed hemothorax, which occurred on the 12th day after operation and resolved after intercostal artery embolization and pleural drainage; and 1 patient with bronchopleural fistula (BPF) after cavernostomy. The BPF patient was discharged from hospital with open chest tube drainage under careful monitoring; on the 48th day after operation, the BPF closed spontaneously and the drainage tube was removed. After that, he was monitored during follow-up (Table 3). Logistic regression analysis was used to evaluate the risk factors for complications. In univariable analysis, group assignment was not a risk factor for complication incidence (p = 0.318), while the probability of complications was higher for male patients (OR = 8.25, p = 0.036) and for wider extents of resection (wedge resection < segmentectomy < lobectomy, OR = 3.012, p = 0.029). In multivariate analysis, none of the factors, including group assignment, showed statistical significance (Table 4).
Mean follow-up period was 692.5 days (±351.2). During follow-up periods, 2 patients in Group A died 114 days and 187 days after operation respectively, and both of them died of complications (sepsis, graft-versus-host disease) after HSCT. During the follow-up period, no recurrence of fungal disease was observed in any patients.
Discussion
Although pulmonary mycotic infections can be caused by various types of fungi, aspergillosis and mucormycosis are the typical causes of localized pulmonary mycotic infections [16]. It is known that while aspergillosis can cause disease in both immunocompetent and immunocompromised patients, mucormycosis causes disease only in the immunocompromised patient [17]. In this study, 21 of 23 patients were diagnosed with aspergillosis and 2 patients were diagnosed with mucormycosis.
When aspergillosis occurs as a localized lung disease, aspergilloma is the typical picture; IPA also sometimes occurs as a localized lung disease.
In cases of aspergilloma, hemoptysis is the most frequent symptom. If the patient is immunocompetent, antifungal agents are of no use and only surgical treatment is known to be effective [9,18]. Although Jewkes et al. reported that it would be better to apply surgical treatment only if hemoptysis was detected, due to the high morbidity and mortality rate of surgery [19], many studies have reported that surgical treatment could increase the survival rate even in the absence of symptoms, since in recent years the severity of the underlying lung disease has been low in many cases and surgical outcomes have improved along with the development of operative techniques [20,21]. Therefore, it is generally considered best to perform surgical treatment if there are symptoms, and if there are no symptoms, to perform surgical treatment only if the patient's general condition can withstand pulmonary resection [7,22].
In the case of IPA, since it is often not only invasive into surrounding tissue but also disseminated systemically through the whole body, initial treatment with antifungal agents is recommended. However, mortality is known to be high even with antifungal treatment [5,23,24]. Even though a few studies have reported that surgical treatment of IPA increased the survival rate, those studies are now considered inadequate for evaluating current surgical outcomes, since most of them evaluated patients at single centers over long periods in the distant past [11][12][13][14]21], and surgical treatment methods have developed since then. In this study, by contrast, we evaluated patients treated over a comparatively short recent period of 3 years, with current surgical techniques at a single center, so we believe the comparison of surgical outcomes in this study is more realistic than those of the existing studies.
While mucormycosis also has a clinical course similar to IPA, it can proceed to disseminated disease in immunocompromised patients, and it has been reported that mortality is very high when treated with medical treatment alone [17]. There have been reports that surgical treatment increased survival rates in cases of localized pulmonary mucormycosis [16,25]. However, since the disease incidence is very low and only a few studies have been reported, no definite treatment has been established. In this study, we evaluated 2 patients with simple-type lesions who were treated with wedge resection. They were discharged from hospital without complications and without relapse over a mean 661.5 days of follow-up.
Aspergilloma can be divided into simple and complex types [15], and in the case of the complex type, the operation is known to be complicated, with a high operative risk, since the lung around the lesion is abnormal. In the case of IPA, even though some cases look similar to the simple type on radiologic imaging, many cases are similar to the complex type since the disease infiltrates into the surrounding lung parenchyma; however, IPA has not previously been divided into simple and complex types. In this study, infiltration into the surrounding lung was classified as the complex type, based on imaging findings. In this way, both Groups A and B could be divided into simple and complex types. With this division, we evaluated the distribution of types in both groups and found no statistical difference between them. In addition, there was no difference in pulmonary function between the two groups and no significant difference in the extent of the surgical procedures. Therefore, the two groups were considered to be under roughly the same conditions, and under these conditions the comparison of surgical outcomes was considered meaningful.
In Group A, there were 6 cases of disease occurrence after chemotherapy, 1 case after immunotherapy, and 2 cases after HSCT. All of them were considered to have occurred after a neutropenic phase. Most of the patients had underlying conditions such as neutropenia, anemia, and thrombocytopenia. Delaying the operation or giving blood transfusions were measures used to maintain the platelet count over 50 × 10⁹/L and the hemoglobin over 10.0 g/dl for the safety of the operation. Nevertheless, the preoperative hemoglobin and platelet counts of Group A were significantly lower than those of Group B. Even though the average intraoperative blood loss of Group A (452.2 ml) was greater than that of Group B (309.2 ml), the difference was not statistically significant. In addition, in neither group was there a case of reoperation due to postoperative bleeding.
Among Group A patients, there was 1 case in which surgical resection was initially considered difficult due to severe infiltration into the surrounding lung, but the lung infiltration was reduced significantly after use of an antifungal agent, making pulmonary resection possible; and 6 of the total 9 patients showed a partial response, making it possible to reduce the extent of the operation. In the case of IPA, studies have reported voriconazole to be the most effective antifungal agent [2,4,6]. However, in this study, voriconazole was used for only 2 patients, amphotericin B for 4 patients, and oral antifungal agents (fluconazole, itraconazole, posaconazole) for the remaining 3 patients.
Many studies have reported that the surgical treatment of aspergilloma results in morbidity of 20-30 % and mortality of 5-10 % [7], and although there have not been many studies of IPA, Nebiker et al. reported 15-23 % morbidity and 7 % mortality [13]. In this study, morbidity was 22.2 and 35.6 % in Groups A and B respectively, which does not differ markedly from the existing studies. However, in this study there was neither any major complication requiring intervention (such as reoperation) nor any 30-day mortality. It could not be said that surgical treatment of hematopoietic disorder patients carried more operative risk, since group assignment proved to be non-significant in the evaluation of risk factors for complications.
The limitations of this study were as follows. First, it was a retrospective study. Second, a relatively small number of patients were included. Third, only selected patients with hematopoietic disorders underwent surgery; in other words, surgery was not performed in cases where surgery more extensive than pneumonectomy would have been required due to severe disseminated disease or wide lung infiltration, nor in cases of graft-versus-host disease after chemotherapy or HSCT. Nor was surgery performed if high risk factors for general anesthesia existed, namely renal failure, hepatic failure, sepsis, etc. However, surgery was performed as extensively as possible whenever pulmonary resection was considered feasible.
Conclusions
In summary, the surgical treatment of localized pulmonary mycotic infection can achieve good surgical outcomes, if complete resection is possible, in hematopoietic disorder patients with immunosuppression, coagulopathy, anemia, and severe pulmonary infiltration.
The big five factors as differential predictors of self-regulation, achievement emotions, coping and health behavior in undergraduate students
Background The aim of this research was to analyze whether the personality factors included in the Big Five model differentially predict the self-regulation, affective states, and health of university students. Methods A total of 637 students completed validated self-report questionnaires. Using an ex post facto design, we conducted linear regression and structural prediction analyses. Results The findings showed that the model factors were differential predictors of both self-regulation and affective states. Self-regulation and affective states, in turn, jointly predict emotional performance while learning and even student health. These results allow us to understand, through a holistic predictive model, the differential predictive relationships of all the factors: conscientiousness and extraversion were regulating predictors of positive emotionality and health; the openness to experience factor was non-regulating; and agreeableness and neuroticism were dysregulating, hence precursors of negative emotionality and poorer student health. Conclusions These results are important because they allow us to infer implications for guidance and psychological health at university.
Introduction
The personality characteristics of students have proven to be essential explanatory and predictive factors of learning behavior and performance at universities [1][2][3][4]. However, our knowledge about such factors does not exhaust further questions, such as: which personality factors tend toward the regulation of learning behavior and which do not? Can personality factors be arranged on a continuum to understand student differences in their emotions when learning? Consequently, the aim of this study was to analyze whether students' personality traits differentially predict the regulation of behavior and emotionality. These variables align as different motivational-affective profiles of students, through the type of achievement emotions they experience during study, as well as their coping strategies, motivational state, and ultimately health.
Five-factor model
Previous research has shown the value and consistency of the five-factor model for analyzing students' personality traits. Pervin, Cervone, and John [5] defined the five factors as follows: (1) Conscientiousness includes a sense of duty, persistence, and behavior that is self-disciplined and goal-directed. The descriptors organized, responsible, and efficient are typically used to describe conscientious persons. (2) Extraversion is characterized by the quantity and intensity of interpersonal relationships, as well as sensation seeking. The descriptors sociable, assertive, and energetic are typically used to describe extraverted persons. (3) Openness to experience incorporates autonomous thinking and willingness to examine unfamiliar ideas and try new things. The descriptors inquisitive, philosophical, and innovative are typically used to describe persons open to experience. (4) Agreeableness is quantified along a continuum from social antagonism to compassion in the quality of one's interpersonal interactions. The descriptors kind, considerate, and generous are often used to describe persons characterized by agreeableness. (5) Finally, neuroticism tends to indicate negative emotions. Persons showing neuroticism are often described as moody, nervous, or touchy.
This construct has appeared to consistently predict individual differences between university students. Prior research has documented its essential role in explaining differences in achievement [6,7], motivational states [8], students' learning approaches [9], and self-regulated learning [10].
Five-factor model, self-regulation, achievement emotions and health
The relationship between the Big Five factors and self-regulation has been analyzed historically with much interest [11][12][13][14][15]. The dimensions of the five-factor model describe fundamental ways in which people differ from one another [16,17]. Of the five factors, conscientiousness may be the best reflection of self-regulation capacity. More recent research has shown consistent evidence of the relationship between these two constructs, especially conscientiousness, which has a positive relationship, and neuroticism, which has a negative relationship with self-regulation [18,19]. The Big Five factors are also related to coping strategies [20].
The evidence on the role of the five-factor model in self-regulation, achievement emotions, and health has been fairly consistent.On the one hand, self-regulation has a confirmed role as a meta-cognitive variable that is present in students' mental health problems [21].Similarly, personality factors and types of perfectionism have been associated with mental health in university students [22].In a complementary fashion, one longitudinal study has shown that personality factors have a persistent effect on self-regulation and health.Sirois and Hirsch [23] confirmed that the Big Five traits affect balance and health behaviors.
Self-regulation, achievement emotions and health
Self-regulation has recently been considered a significant behavioral meta-ability that regulates other skills in the university environment. It has consistently appeared to be a predictor of achievement emotions [24], coping strategies [25], and health behavior [26]. In the context of university learning, the level of self-regulation is a determining factor in learning approaches, motivation and achievement [27]. Similarly, the self- vs. externally regulated behavior theory [27,28] assumes that the continuum of self-regulation can be divided into three types: (1) self-regulation behavior, which is the meta-behavior or meta-skill of planning and executing control over one's behavior; (2) nonregulation behavior (deregulation), where consistent self-regulating behavior is absent; and (3) dysregulation behavior, where regulatory behavior is maladaptive or contrary to what is expected. Some example behaviors are presented below, and these have already been documented (see Table 1). Recently, Beaulieu and collaborators [29] proposed a self-dysregulation latent profile describing subjects with lower scores on subscales regarding extraversion, agreeableness and conscientiousness and higher scores concerning negative emotional facets.
Consequently, the question that we pose, as yet unresolved, is whether the different personality factors predict a determined type of regulation on the continuum of regulatory behavior, nonregulatory (deregulatory) behavior and dysregulatory behavior, based on evidence.
Aims and hypotheses
Based on the existing evidence, the aim of this study was to establish a structural predictive model that would order personality factors along a continuum as predictors of university students' regulatory behavior.The following hypotheses were proposed for this purpose: (1) personality factors differentially predict students' regulatory, nonregulatory and dysregulatory behavior during academic learning; they also differentially determine students' type of emotional states (positive vs. negative affect); (2) the preceding factors differentially predict achievement emotions (positive vs. negative) during learning, coping strategies (problem-focused vs. emotion-focused) and motivational state (engagement vs. burnout); and (3) all these factors ultimately predict student health, either positively or negatively, depending on their regulatory or dysregulatory nature.
Participants
Data were gathered from 2019 to 2022, encompassing a total of 626 undergraduate students enrolled in Psychology, Primary Education, and Educational Psychology programs across two Spanish universities. Within this cohort, 85.5% were female and 14.5% were male, with ages ranging from 19 to 24 years and a mean age of 21.33 years. The student distribution was nearly equal between the two universities, with 324 attending one and 318 attending the other. The study employed an incidental, nonrandomized design. The guidance departments at both universities extended invitations for teacher participation, and teachers, in turn, invited their students to partake voluntarily, ensuring anonymity. Questionnaires were completed online for each academic subject, corresponding to the specific teaching-learning process.
Student Health Behavior: The Physical and Psychosocial Health Inventory [41] measured this variable, summarizing the World Health Organization (WHO) definition of health: "Health is a state of complete physical, mental, and social well-being and not merely the absence of disease or infirmity." The inventory focused on the impact of studies, with questions such as "I feel anxious about my studies." Students responded on a Likert scale from 1 (strongly disagree) to 5 (strongly agree).In the Spanish sample, the model displayed good fit indices (CFI = 0.95, GFI = 0.96, NFI = 0.94; RMSEA = 0.064), with a Cronbach's alpha of 0.82.
Procedure
All participants provided informed consent before engaging in the study. The completion of scales was voluntary and conducted through an online platform. Over two academic years, students reported on five distinct teaching-learning processes, each corresponding to a different university subject they were enrolled in during this period. Students took their time to answer the questionnaires gradually throughout the academic year. The assessment for Presage variables took place in September-October of 2018 and 2019, Process variables were assessed in the subsequent February-March, and Product variables were evaluated in May-June. The procedural steps were ethically approved by the Ethics Committee under reference 2018.170, within the broader context of an R&D Project spanning 2018 to 2021.
Data analysis
The ex post facto design [42] of this cross-sectional study involved bivariate association analyses, multiple regression, and structural predictions (SEMs). Preliminary analyses were executed to ensure the appropriateness of the parameters used in the analyses, including tests for normality (Kolmogorov-Smirnov), skewness, and kurtosis (±0.05).
Multiple regression
Hypothesis 1 was evaluated using multiple regression analysis through SPSS (v. 26).
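For readers who want to reproduce this step outside SPSS, a minimal sketch of the Hypothesis 1 regression is shown below. The data file and column names are hypothetical placeholders, not the authors' actual variable labels.

# Hypothetical re-analysis sketch of the Hypothesis 1 regressions
# (the study itself used SPSS v. 26); column names are illustrative.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("big_five_study.csv")  # hypothetical data file

# Regress self-regulation on the five personality factors.
model = smf.ols(
    "self_regulation ~ conscientiousness + extraversion + openness"
    " + agreeableness + neuroticism",
    data=df,
).fit()

print(model.summary())   # coefficients, t-tests, and R-squared
print(model.rsquared)    # compare with the reported r2 = 0.499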
Confirmatory factor analysis
To test Hypotheses 2 and 3, a structural equation model (SEM) was employed in this sample. Model fit was assessed by examining the chi-square to degrees of freedom ratio, along with the RMSEA (root mean square error of approximation), NFI (normed fit index), CFI (comparative fit index), GFI (goodness-of-fit index), and AGFI (adjusted goodness-of-fit index) [43]. Ideally, the incremental fit indices (NFI, CFI, GFI, AGFI) should surpass 0.90, while the RMSEA should remain low. The adequacy of the sample size was confirmed using the Hoelter index [44]. These analyses were conducted using AMOS (v. 22).
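As a companion to the fit criteria above, the helper below illustrates how RMSEA, NFI, and CFI are conventionally derived from the model and baseline (independence) chi-square statistics. This is an illustrative sketch using the standard textbook formulas, not the authors' AMOS code, and the example values are invented.

# Standard formulas for SEM fit indices, computed from model (m) and
# baseline/independence (b) chi-square statistics; illustrative only.
import math

def fit_indices(chi2_m: float, df_m: int, chi2_b: float, df_b: int, n: int):
    rmsea = math.sqrt(max(chi2_m - df_m, 0.0) / (df_m * (n - 1)))
    nfi = (chi2_b - chi2_m) / chi2_b
    cfi = 1.0 - max(chi2_m - df_m, 0.0) / max(chi2_b - df_b, chi2_m - df_m, 1e-12)
    return {"RMSEA": rmsea, "NFI": nfi, "CFI": cfi}

# Example with made-up chi-square values for a sample of n = 637:
print(fit_indices(chi2_m=250.0, df_m=120, chi2_b=2400.0, df_b=150, n=637))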
Prediction results
The predictive relationships exhibited a continuum along two extremes. On the one hand, conscientiousness, extraversion and openness were significant, graded, and positive predictors of self-regulation. On the other hand, agreeableness and neuroticism were negative, graded predictors of self-regulation. A considerable percentage of explained variance was observed (r2 = 0.499). The most meaningful finding, however, is that this predictive differential grading is maintained for the rest of the variables analyzed: positive affect (r2 = 0.571) and negative affect (r2 = 0.524), achievement emotions during study, engagement-burnout, problem- and emotion-focused coping strategies, and student health. See Table 2.
Structural prediction model
Three models were tested.Model 1 proposes the exclusive prediction of personality factors on the rest of the factors, not including self-regulation.Model 2 evaluated the predictive potential of self-regulation on the factors of the Big Five model.Model 3 tested the ability of the Big Five personality traits to predict self-regulation and the other factors.The latter model presented adequate statistical values.These models are shown in Table 3.
Direct effects
The statistical effects showed a direct, significant, positive predictive effect of the personality factors C (Conscientiousness) and E (Extraversion) on self-regulation.The result for factor O (openness to experience) was not significant.Factors A (agreeableness) and N (neuroticism) were negatively related, especially the latter.In a complementary fashion, factors C and E showed significant, positive predictions of positive affect, while O and A had less strength.Factor N most strongly predicted negative affect.
Moreover, self-regulation positively predicted positive achievement emotions during study and negatively predicted negative achievement emotions. Positive affect predicted positive emotions during study, engagement, and problem-focused coping strategies; negative affect predicted negative emotions during study, burnout, and emotion-focused strategies. Positive emotions during study negatively predicted negative emotions and burnout. Engagement positively predicted problem-focused coping and negatively predicted burnout. Finally, problem-focused coping also predicted emotion-focused coping. Emotion-focused coping negatively predicted health and well-being.
Indirect effects
The Big Five factors exhibited consistent directionality. Factors C and E positively predicted positive emotions, engagement, problem-focused coping, and health and negatively predicted negative emotions and burnout. Factor O had low prediction values in both negative and positive cases. Factors A and N were positive predictors of negative emotions during study, burnout, and emotion-focused coping, and negative predictors of health, while the opposite was true for factors C and E: the latter had positive predictive effects on self-regulation, positive affect, positive emotions during study, engagement, problem-focused strategies and health, and negative effects on negative affect, negative emotions during study, burnout, and emotion-focused strategies. See Table 4; Fig. 1.
Discussion
Based on the Self-vs.External-Regulation theory [27,28], the aim of this study was to show, differentially, the regulatory, nonregulatory or dysregulatory power of the Big Five personality factors with respect to study behaviors, associated emotionality during study, motivational states, and ultimately, student health behavior.
Regarding Hypothesis 1, the results showed a differential, graded prediction of the Big Five personality factors affecting both self-regulation and affective states. The results from the linear and structural regression analyses showed a clear, graded pattern from the positive predictive relationship of C to the negative predictive relationship of N. On the one hand, they showed the regulatory effect (direct and indirect) of factors C and E, the nonregulatory effect of O, and the dysregulatory effect of factors A and especially N. This evidence offers a differential categorization of the five factors in an integrated manner. On the other hand, their effects on affective tone (direct and indirect) take the same positive direction in C and E, are intermediate in the case of O, and are negative in A and N. There is plentiful prior evidence that has shown this relationship, though only in part, not in the integrated manner of the model presented here [29,[45][46][47].
Regarding Hypothesis 2, the evidence shows that self-regulation directly and indirectly predicts affective states and achievement emotions during study. Directionality can be positive or negative according to the influence of C and E, associated with positive emotionality, or of A and N, associated with negative affect. This finding agrees with prior research [29,[48][49][50][51].
Regarding Hypothesis 3, the results have shown clear bidirectionality.Subsequent to the prior influence of personality factors and self-regulation, achievement emotions bring about the resulting motivational states of engagement-burnout and the use of different coping strategies (problem-focused vs. emotion-focused).Positive achievement emotions during study predicted a motivational state of engagement and problem-focused coping strategies and were positive predictors of health; however, negative emotions predicted burnout and emotion-focused coping strategies and were negative predictors of health.These results are in line with prior evidence [49,52,53].Finally, we unequivocally showed a double, sequenced path of emotional variables and affective motivations in a process that ultimately and differentially predicts student health [54,55].
In conclusion, these results allow us to understand the predictive relationships involving these multiple variables in a holistic predictive model, while previous research has addressed this topic only in part [56]. We believe that these results lend empirical support to the sequence proposed by the SR vs. ER model [27]: the factors of conscientiousness and extraversion appear to be regulators of positive emotionality, engagement and health; openness to experience is considered to be nonregulating; and agreeableness and neuroticism are dysregulators of the learning process and precursors of negative emotionality and poorer student health [57]. New levels of detail, in a graded heuristic, have been added to our understanding of the relationships among the five-factor model, self-regulation, achievement emotions and health [23].
Limitations and research prospects
A primary limitation of this study was that the analysis focused exclusively on the student.The role of the teaching context, therefore, was not considered.Previous research has reported the role of the teaching process, in interaction with student characteristics, in predicting positive or negative emotionality in students [49,58].However, such results do not undercut the value of the results presented here.Future research should further analyze potential personality types derived from the present categorization according to heuristic values.
Practical implications
The relationships presented may be considered a mental map that orders the constituent factors of the Five-Factor Model on a continuum, from the most adaptive (regulatory), through the nonregulatory (deregulatory), to the most maladaptive (dysregulatory). This information is very important for carrying out preventive intervention programs for students and for designing programs for those who could benefit from training in self-regulation and positivity. Such intervention could improve how students experience the difficulties inherent in university studies [47,59], another indicator of the need for active Psychology and Counseling Centers at universities.
Table 1
Conceptual Continuum and Typologies of Each Self-Regulatory Behavior
Table 2
Predictions between the Five Factor Model (FFM) and health variables (n = 637)
Table 3
Models of structural linear results of the variables
Table 4
Total, indirect, and direct effects of the variables in this study, and 95% bootstrap confidence intervals (CI)
|
v3-fos-license
|
2018-12-11T00:55:17.781Z
|
2016-03-08T00:00:00.000
|
56150724
|
{
"extfieldsofstudy": [
"Political Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.assaf.org.za/index.php/sacq/article/download/978/794",
"pdf_hash": "45868eb2259a6a75c399df81d5034b5abb6358a9",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46006",
"s2fieldsofstudy": [
"Political Science",
"Law"
],
"sha1": "45868eb2259a6a75c399df81d5034b5abb6358a9",
"year": 2016
}
|
pes2o/s2orc
|
OPERATION IRON FIST AFTER SIX MONTHS
Provincial police strategy under review
Firoz Cachalia
MEC for Community Safety, Gauteng

At the beginning of July last year, the Gauteng MEC for Community Safety publicly announced the launch of a six month, high intensity police operation called Operation Iron Fist. This article draws from a media statement delivered by the MEC on 2 February 2007 in which the achievements of Iron Fist between July and December 2006 were discussed. Given that little information on policing strategy is currently provided to the public, the SA Crime Quarterly hopes, by publishing extracts of the MEC's statement, to assist in documenting and publicising anti-crime efforts and to help hold accountable those in positions of leadership on crime in South Africa.

The purpose of Operation Iron Fist was to address a spike in certain crimes in Gauteng. At a meeting with the SAPS Provincial Commissioner and his management team, the MEC requested the police to develop a specific operational plan to address the increases in some of the crime categories. In particular, it was highlighted that certain objectives relating to police visibility and performance should be included in the plan and that certain crimes should be prioritised.
As a result, the MEC reported in a public statement on 11 July 2006 that Operation Iron Fist would have eight key performance objectives: • the public could expect to see more police on Gauteng's roads and streets; • the police would put up more roadblocks; • the police would increase their efforts to track down and bring the most wanted criminals to justice; • the police would focus their deployment to tackle serious crimes; • efforts to remove and destroy illegal firearms would be stepped up; • efforts would be made to improve the service delivery from 10111 call centres; • the police would improve safety on the province's trains; and • there would be a focus on increasing community mobilisation against crime.
The primary intention of the operation was to arrest the spike in certain crime categories and stabilise the situation in the short-term period of six months.
The intention of the Department of Community Safety was to establish what impact a high-intensity, high visibility police operation would have on the following specific crimes: • vehicle hijacking; • cash-in-transit heists; • house robbery; • business robbery; • taxi violence; and • residential burglary.
The evaluation of Iron Fist's achievements comes at a time of growing public concern about violent crime in our country.There can be little doubt that an analysis of crime patterns across the province will not offer consolation to those who have been robbed in their homes, families of those killed for a cell phone, women raped on the way to work, those who have been hijacked at gun-point or an elderly person assaulted on the way from a pension pay point.
Nevertheless, every effort has been made to guarantee the integrity and objectivity of this assessment.The Community Safety Department's evaluation of Operation Iron Fist was based on information from police performance management systems, crime statistics from the SAPS and other sources.The department also conducted its own research, including a public perception survey that was undertaken by an independent organisation.
Understanding the role of the MEC in relation to safety
It is important to note that the Member of the Executive Council (MEC) of the Gauteng Provincial Government responsible for Community Safety has no direct managerial authority over the police.
The SAPS is a national organisation that is resourced and managed centrally.However, as an elected member of a provincial government with the responsibility for community safety, the MEC is responsible for expressing the safety concerns of the public to the police and requesting that the police respond.The Community Safety Department monitors and evaluates the effectiveness of the police's response to public concerns in an attempt to ensure that the police are held accountable to the public.
Assessing Iron Fist against its objectives
The section that follows presents the main findings of the evaluation of police performance against the specific objectives outlined above.
Objective 1: Increased mobilisation of police resources
The aim was to improve police visibility through increasing the number of officers in the field.
During the operation, a nationally driven restructuring process of the SAPS resulted in 3,000 police officers being deployed to priority police stations (i.e.those stations that record the highest levels of serious violent crime).Furthermore, all officers, including administrative staff, were expected to work overtime to support operations, and 400 entry-level constables were deployed throughout the province.
Police reservists
The objective of doubling the number of reservists in Gauteng by July 2007 was partly achieved.The number of trained reservists available in July was 3,206.To double this number the police would have had to recruit 266 reservists per month on average.By the end of December an average of 237 reservists had been recruited per month.This meant that 1,421 reservists were recruited during the Operation Iron Fist period.
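As a quick check, the recruitment arithmetic above can be reproduced as follows (an illustrative sketch; the figures are taken from the text, and the small differences reflect rounding):

# Quick check of the reservist recruitment arithmetic (illustrative).
baseline = 3206          # trained reservists available in July
months = 12              # July 2006 to July 2007
print(round(baseline / months))   # ~267/month, close to the reported 266

recruited_per_month = 237         # average achieved by end of December
operation_months = 6              # July-December
print(recruited_per_month * operation_months)  # 1,422, close to the reported 1,421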
Recently the structure of reservists has changed to allow for four categories of reservists: • 'Category A' reservists receive six months of training and have the same powers as fully functional police officers.• 'Category B' reservists provide administrative support to the SAPS.• 'Category C' reservists provide specialist skills (e.g.lawyers, medical doctors, etc).• 'Category D' reservists are replacing the old commandos and similarly will not have full police powers or authority.They will be utilised primarily on patrols and high visibility operations.
Once the curriculum has been finalised for these reservists a large-scale recruitment drive can be undertaken.
Vehicle patrols
The number of police vehicle patrols was also increased.A total of 93,757 vehicle patrols were undertaken during the operation (a monthly average of 15,626 vehicle patrols).This meant 7,202 more patrols (an increase of 8%) than in the same period in 2005, and 2,037 more patrols (an increase of 2%) than in the first half of 2006.
Highway patrols
There were more highway patrols when compared with the same time period in 2005, but marginally fewer than in the first six months of 2006.A total of 8,528 highway patrols were conducted during Operation Iron Fist (a monthly average of 1,421 highway patrols).This was 3,415 more (an increase of 67%) than in 2005, but 160 fewer (a decrease of 2%) than in the first half of 2006.
During Operation Iron Fist there was better coordination between the SAPS and the Metropolitan Police Departments.This meant that the police could focus their resources within communities while the MPDs focused on the highways.There were also targeted interventions at areas identified as 'hot-spots', such as the Rivonia off-ramp on the N1 highway.Police increased their patrols to improve safety in that area.
Vehicles searched
There was a substantial increase in the number of vehicles searched during Operation Iron Fist.A total of 698,555 vehicles were searched during the operation (a monthly average of 116,426 vehicles).This represented 161,590 more (an increase of 30%) vehicles searched than during the same time period in 2005, and 133,308 more (an increase of 24%) vehicles searched than in the first half of 2006.
People searched
There was a notable increase in the number of people searched during the operation. A total of 1,705,235 people were searched (a monthly average of 284,206 people). This represents 504,865 more people (an increase of 42%) than during the same period in 2005, and 542,116 more people (an increase of 47%) than during the first half of 2006.
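The year-on-year percentage increases reported in these sections all follow the same pattern, sketched below with the people-searched figures (an illustrative check; figures taken from the text):

# How the percentage increases above can be reproduced (illustrative).
def pct_increase(current: int, extra: int) -> float:
    # extra = how many more than the comparison period; base = current - extra
    return 100.0 * extra / (current - extra)

people_searched = 1_705_235
print(round(pct_increase(people_searched, 504_865)))  # ~42% vs. same period 2005
print(round(pct_increase(people_searched, 542_116)))  # ~47% vs. first half of 2006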
Objective 2: Increased roadblocks
The objective of increasing the number of roadblocks was achieved, as reflected in the figures below.
Roadblocks
The police put up a total of 10,727 roadblocks (an average of 1,788 per month) during the operation.This meant 3,159 more roadblocks (a 42% increase) than during the same period in 2005, and a 40% increase compared with the first six months of 2006.
Vehicle control points
There was a substantial increase in vehicle control points compared with the same time period in 2005 but a decrease when compared with the first six months of 2006.In total 17,528 vehicle control points (a monthly average of 2,921) were set up during the operation.This represents 3,731 more vehicle control points (an increase of 27%) when compared with the same period in 2005.
However, there was a 21% decrease in the number of control points when compared with the first six months of 2006.This can be attributed to operational decisions that resulted in resources for vehicle control points (i.e.personnel and vehicles) being used for other purposes (e.g.roadblocks and targeted patrols).
Objective 3: Targeting key wanted suspects
In terms of tracing wanted suspects, a total of 4,653 initiatives were carried out to trace wanted criminals during the operation (a monthly average of 776 initiatives).

Objective 4: The targeted deployment of police against serious crime

The police arrested a total of 218,572 people during the operational period for all offences. This resulted in 10,471 more people being arrested than in the first six months of 2006 (a 5% increase).

During Operation Iron Fist, a total of 80,729 people were arrested for all serious crimes, ranging from murder to theft. This was an increase of 2,775 people (a 4% increase) from the same period in 2005 and an increase of 9,767 (a 14% increase) over the first six months of 2006. Moreover, at the beginning of the operation in July, 32% of the total arrests were for serious criminal offences. By December the proportion of arrests for serious offences had increased to 43% of the total.

In terms of arrests for the specific crimes that Iron Fist aimed to reduce, the following numbers of arrests were made: • Business robbery: 124 suspects were arrested, representing an increase of 103% when compared with the same six month period in 2005, and an increase of 114% when compared with the first six months of 2006. While it is encouraging to see a substantial improvement in the number of arrests for this crime category, it is important that these numbers increase further. Far more resources need to be devoted to identifying and arresting people involved in business robberies. • Residential robbery: During the operation, a total of 566 suspects were arrested for residential robberies. This represents a 21% increase over the number of suspects arrested in 2005 and an increase of 25% over the number of suspects arrested during the first half of 2006. As with business robberies, the increase is encouraging but more attention will need to be given to identifying and arresting people involved in these crimes. • Vehicle hijacking: 273 suspects were arrested, which is 84 more people (a 44% increase) arrested than in both 2005 and the first half of 2006 (for both periods 189 suspects were arrested for hijacking). • Cash-in-transit heists: 15 suspects were arrested for CITs during the operational period.

Objective 6: Improving service delivery from the 10111 call centres

Senior managers were also deployed to each of the six 10111 centres to improve the levels of supervision. These are, however, short-term measures and will not address the structural shortcomings facing the six 10111 call centres. These include challenges relating to recruitment, staffing and technology. There were also clear differences between the six existing call centres in Gauteng.

In order to address these structural challenges, the SAPS is in the process of changing the entire 10111 system. An amount of R600m has been spent on building and equipping a new world-class police emergency response centre in Midrand. This has already happened and it is anticipated that staff at the six existing centres will start being redeployed into a single emergency call centre from July 2007.

Technology for the satellite tracking of police vehicles is also in the process of being installed. This will enable the SAPS to better identify the location of individual police vehicles and then closely monitor the response times to calls for assistance. New staff recruitment, training and supervision systems will also be put into place to improve the service delivered by 10111 call operators to members of the public. The SAPS will publicly launch the new centre once it is fully operational.
Objective 7: Improving safety on the railways
In the month of July a special operation called 'Operation Railway Safety' was undertaken, during which 497 Gauteng provincial SAPS reservists were deployed onto the trains.They made a total of 4,435 arrests, successfully tackling the criminal elements that had started to operate on railways in the earlier part of the year.These police remained on the railways until the deployment of 300 permanent SAPS Railway Police Unit members.
Objective 8: Increasing community mobilisation against crime
The indicators used to measure this objective included the number of police visits to schools, number of pamphlets distributed by the SAPS, and the extension of CPFs.
School visits
The SAPS substantially increased the number of visits to schools during Operation Iron Fist. A total of 12,474 schools were visited (a monthly average of 2,079 visits). This demonstrates that 3,888 more visits took place (a 45% increase) than in the same time period in 2005, and 1,530 more visits (a 14% increase) than during the first six months of 2006. These visits are undertaken by police members to educate children and youth to be aware of the dangers of drugs and weapons, and to assist them in reporting crime.
Distribution of pamphlets
A total of 51,283 pamphlets on crime and police contact information were distributed during the operation.This is 23,779 more pamphlets (an 87% increase) than were distributed over the same time period in 2005 and 35,501 more pamphlets (a 225% increase) than were distributed during the first half of 2006.These pamphlets are distributed to assist communities with crime prevention tips and information on how to contact the police for assistance or to report crime.
Community Policing Forums (CPFs)
Many CPFs across the province stepped up to support Operation Iron Fist.The Gauteng Department of Community Safety was involved in a number of initiatives to assist CPFs and encourage community participation against crime.CPFs and sector forums were strengthened in 27 police precincts, including, among others, Booysens, Jeppe, Hillbrow, Kagiso, Tembisa, Naledi, Khutsong, Dobsonville and Kliptown.
In places such as Marabastad, Sebokeng and Rabie Ridge, community patrollers were deployed in community identified crime hotspots such as parks and around certain shopping malls.In places such as Mamelodi, Orlando, Naledi and Meadowlands, the CPFs were actively involved in road shows, Imbizos and door-to-door campaigns to ensure that people were aware of, and could support, Operation Iron Fist in their communities.
The Star newspaper also launched a campaign to support and publicise the work of various CPFs that started during the operational period.
The impact of Operation Iron Fist on crime
Information provided below on crime trends for specific targeted crimes in Gauteng over the Operation Iron Fist period is based on SAPS crime statistics.Increases or decreases in crime have been determined by comparing the monthly numbers in question with the same time the previous year.This provides a more accurate assessment, as there are distinct annual patterns for most crime categories.
Vehicle hijacking
Vehicle hijacking stabilised over the Operation Iron Fist period, compared with the increases that occurred during the first half of the year.Fewer instances of vehicle hijackings were reported during Operation Iron Fist than were reported during the first six months of 2006.Nevertheless, there are still too many hijackings occurring in Gauteng and greater effort will be needed to continue the downward trend in this crime that has been occurring over the past four years.
Cash-in-transit heists
There was a substantial reduction in the number of cash-in-transit heist (CIT) incidents during Operation Iron Fist.The Cash-in-Transit Crime Combating Forum, consisting of industry role players such as the South African Bank Risk Information Centre (SABRIC) and the police, announced that they had recorded a 27% reduction of CITs in the last four months of 2006 when compared with the same period in 2005.
Business robberies
Operation Iron Fist had some impact on business robberies.Although business robberies continued to increase during Operation Iron Fist, they did so at a slower rate than during the first six months of 2006.However, where businesses have come together to address robberies in specific sectors, positive results have been achieved.For example, the Consumer Goods Council Crime Prevention Programme reported that armed robberies among their members in the retail industry had decreased by 11% during 2006 when compared with 2005.Nevertheless, business robberies continue to be of concern and further attention will be paid to improving policing and business strategies for addressing this crime type.
Residential robberies
This crime occurs when criminals use violence against people while the victims are at home, to enter their premises and rob them.This crime continues to be of concern and although more people were arrested for residential robberies, decreases in the incidence of this crime were not recorded during Operation Iron Fist.This situation, where armed criminals attack people in their homes, cannot be tolerated.This crime will continue to be prioritised until decreases are achieved, and to this end, new strategies and tactics are being explored to tackle this crime in 2007.
Residential burglaries
House burglaries occur when criminals break into houses when there is no one at home.It is encouraging that there was a substantial reduction in house burglary during Operation Iron Fist.This reflects a significant change, as house burglaries had increased slightly during the first six months of 2006.
Taxi violence
There was a substantial reduction in taxi violence towards the end of the Operation Iron Fist period with far fewer incidents occurring in November and December than during previous months.Nevertheless, there were still a few incidents in certain areas and efforts to end taxi related violence will continue.
Key lessons
Some of the key conclusions about the evaluation of Operation Iron Fist are discussed below.
Improved targeting
Operation Iron Fist demonstrated that the law enforcement agencies were able to mobilise their resources to increase visibility and arrest rates.The police use of resources did result in a decrease in overall crime rates, especially for property related crimes.However, the levels of violent crime remain high.
The strategy did not succeed in reducing incidents of specific types of violent crime, in particular house robberies and business robberies.These crimes account in some measure for the growing levels of fear in our society.Clearly, the capability of the police to target these types of violent crime needs to be strengthened.The Premier of the province has asked that the SAPS Provincial Commissioner develop specific strategies to target these types of crime, and explain the approach that the SAPS will be adopting to the Gauteng Legislature.
Effective partnerships
The results clearly showed that when relationships between CPFs and police management are effective, better results are achieved.Typically, partnerships result in increased and better intelligence being provided to the police in relation to individuals or groups committing specific crimes in specific communities.These results demonstrate that it is possible to reduce serious and violent crimes if communities and the police work together and focus their efforts.
Over half (56%) of Gauteng policing precincts saw a reduction in priority crimes, including places such as De Deur, Alexandra, Atteridgeville, Mamelodi and Eldorado Park (which recorded the largest decrease in Gauteng). In many instances this was also because of dedicated police station managers who were able to work in constructive partnerships with mobilised communities. Nevertheless, community involvement is still too low and efforts will be made to improve this through the launch of a 'social movement against crime'. Some stations recorded significant increases in crime, including Dobsonville, Zonkwizizwe, Ennerdale, and Midrand. These stations will require increased attention in the months ahead.
Station management
The restructuring of the SAPS initiated by the national office resulted in improved management and leadership at station level in some areas.This has led to significant reductions in crime in some communities, such as Johannesburg Central, Hillbrow and Booysens.Nevertheless, demonstrable improvements in police responses to calls for assistance from the public must be achieved.In addition, more attention needs to be paid to ensuring that police responsiveness improves.The tracking devices installed in police vehicles later this year should result in better police responses.
Cooperation of law enforcement agencies
During the Iron Fist period, metropolitan police departments adjusted their operational plans in accordance with the operation, resulting in improved coordination of law enforcement agencies in the province. This is one of the areas that experienced a positive impact during Operation Iron Fist. The trend towards an increasing proactive response by local government in improving safety must be encouraged.

The adoption of a comprehensive Gauteng Safety Strategy

There can be no doubt that a six-month, police-driven operation alone will not solve our crime problem. What is required over time is a comprehensive integrated strategy that includes a focus on improving the quality of policing, community mobilisation and partnerships, better coordination of law enforcement operations by the SAPS and the metropolitan police departments, and increased involvement of local authorities. It is for this reason that the Gauteng Provincial Government adopted a comprehensive eight-year Gauteng Safety Strategy in August of 2006 (see the next article in this issue).

Conclusion

All those who demonstrated considerable commitment and dedication during Operation Iron Fist and the continuing fight against crime must take credit for the achievements noted above. In particular, the SAPS Gauteng Provincial Commissioner and his dedicated policemen and women ought to be commended for their efforts and sacrifices in tackling crime. Appreciation must also be expressed to the metropolitan police departments, and the members of CPFs and the many unsung heroes who participate in these efforts.

Public safety is a priority and the Community Safety Department will continue to work in a systematic, focused and determined way to ensure that all people can enjoy the freedom and security that they deserve.
|
v3-fos-license
|
2022-09-01T15:20:36.617Z
|
2022-08-29T00:00:00.000
|
251971350
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://downloads.hindawi.com/journals/omcl/2022/4932304.pdf",
"pdf_hash": "0222b2e7b51c061bf46e2529c5ae27ae6d4fe639",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46008",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "543e935dd6cfe3b0350be683b6bbc35bb471bd27",
"year": 2022
}
|
pes2o/s2orc
|
Tripeptide Leu-Pro-Phe from Corn Protein Hydrolysates Attenuates Hyperglycemia-Induced Neural Tube Defect in Chicken Embryos
Neural tube defect (NTD) is the most common and severe embryopathy causing embryonic malformation and even death associated with gestational diabetes mellitus (GDM). Leu-Pro-Phe (LPF) is an antioxidative tripeptide isolated from hydrolysates of corn protein. However, the biological activity of LPF in vivo and in vitro remains unclear. This study is aimed at investigating the protective effects of tripeptide LPF against NTD in the high glucose exposure condition and delineate the underlying biological mechanism. We found that LPF alleviated NTD in the high glucose-exposed chicken embryo model. In addition, DF-1 chicken embryo fibroblast was loaded with high glucose for induction of oxidative stress and abnormal O-GlcNAcylation in vitro. LPF significantly decreased accumulation of reactive oxygen species and content of malondialdehyde in DF-1 cells but increased the ratio of reduced glutathione and oxidized glutathione in chick embryo. Oxygen radical absorbance capacity results showed that LPF itself had good free radical scavenging capacity and could enhance antioxidant activity of the cell content. Mechanistic studies suggested that the resistance of LPF to oxidative damage may be related to promotion of NRF2 expression and nuclear translocation. LPF alleviated the overall O-GlcNAcylation level of cellular proteins under high glucose conditions and restored the level of Pax3 protein. Collectively, our findings indicate that LPF peptide could act as a nutritional supplement for the protection of development of embryonic neural tube affected by GDM.
Introduction
In recent years, pregnant women have shown an increased propensity to develop hyperglycemia or gestational diabetes mellitus (GDM). Global estimates revealed that the prevalence of GDM was 16.9% in women aged 20-49 years in 2013 [1]. Newborns and infants of mothers with GDM may have a higher risk of dysplasia or malformation [2]. Neural tube defect (NTD) is the most severe embryopathy with high morbidity, possibly causing anencephaly, microcephaly, exencephaly, and even death. Past studies have shown that the development of NTD in embryos exposed to a high glucose environment involves multiple cellular mechanisms, including oxidative stress [3], formation of glycation end products [4], expression of microRNAs [5], folic acid metabolism [6], and excessive apoptosis [7].
O-linked N-acetylglucosamine (O-GlcNAc) posttranslationally modifies serine and threonine residues of proteins, a modification known as O-GlcNAcylation. Emerging evidence has shown that the dysregulation of O-GlcNAcylation is associated with the pathogenesis of various diseases, such as diabetes and neurodegenerative diseases [8,9]. In fact, accumulation of O-GlcNAcylation is observed in the placenta of hyperglycemic mothers and mainly involves proteins in endothelial and trophoblast cells [10]. Nuclear factor-E2-related factor 2 (NRF2) is a transcription factor that rapidly responds to oxidative stress. Upon activation, NRF2 translocates from the cytoplasm to the nucleus, where it binds antioxidant response elements (AREs) and promotes the transcription of target genes associated with the cytoprotective defense system against oxidative damage [11]. The NRF2-mediated antioxidant defense system plays a protective role in some diabetic complications, such as diabetic nephropathy and retinopathy [12,13]. In addition, NRF2 activation can inhibit valproic acid- or high glucose-induced NTD in mice [14,15].
Some plant-derived natural compounds have been utilized to protect against high glucose-induced NTD, including epigallocatechin gallate, baicalin, quercetin, and curcumin [16][17][18][19][20]. Our previous study showed that carnosine mitigated high glucose-induced NTD in the chicken embryo model [21]. Recently, the presence of various bioactive peptides with health-promoting properties has been reported in many kinds of foods [22]. Corn, an important food crop worldwide, is rich in protein resources that are preserved in corn gluten meal (CGM) after starch is extracted. A variety of active peptides have been identified from the hydrolysate and fermentation of CGM [23]. Leu-Pro-Phe (LPF) is a tripeptide isolated from hydrolysate of CGM and shows antioxidative properties in cell-free experiments [24,25]. However, the biological activity of LPF in vivo and in vitro remains largely unknown. In addition, numerous studies have shown that redox balance is disrupted in GDM, and antioxidant supplementation is an important strategy to protect against embryonic dysplasia. Hence, in this study, we adopted chicken embryos and DF-1 chicken embryo fibroblasts to evaluate the protective effect of the LPF peptide against hyperglycemia-induced NTD. Mechanistically, we found that LPF activated the NRF2-mediated antioxidant response to high glucose-induced oxidative stress in addition to free radical scavenging. Moreover, LPF peptide effectively inhibited protein O-GlcNAc modification and then restored the expression of paired box 3 (Pax3), an important transcription factor for neural tube development.
Chicken Embryos and Treatment
Fertilized eggs were incubated in an incubator at 38°C, 65%~70% relative humidity. High glucose-induced NTD in chick embryos was performed according to our previous study [21]. A window was opened on the blunt side of the egg for the administration of exogenous D-glucose or LPF peptide. Chicken embryos were divided into a control group, a high-glucose model group, and LPF treatment groups at different concentrations. On embryo development day (EDD) 0, LPF was injected into chicken embryos (2, 10, 50, or 100 nmol/100 μL/egg) from the air cell. On EDD 1, the high-glucose model group and LPF treatment groups were stimulated with 0.4 mmol/100 μL/egg D-glucose, while the control group was given an equal volume of bird saline. After treatment, the fertilized eggs were returned to the incubator for further incubation until the required day.
2.4. Cell Viability Evaluation by MTT. MTT assay was applied to detect cell viability. DF-1 cells were seeded into a 96-well plate at a density of 4 × 10³ cells/well overnight. Cells were treated with the indicated concentration of D-glucose for 24 h. After treatment, 10 μL of MTT solution (5 mg/mL) was added to each well for further incubation for 3 h at 37°C. Formazan crystals were dissolved with 200 μL DMSO. The absorbance was determined at 570 nm using a microplate reader.
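A minimal sketch of how relative viability is typically derived from such MTT absorbance readings is shown below; the absorbance values are hypothetical, not the study's data.

# Illustrative calculation of relative cell viability from MTT absorbance
# readings at 570 nm (values below are hypothetical).
import numpy as np

blank = 0.06                                    # medium-only background well
control = np.array([1.21, 1.18, 1.25]) - blank  # untreated wells
treated = np.array([0.95, 0.90, 0.97]) - blank  # D-glucose-treated wells

viability = treated.mean() / control.mean() * 100.0
print(f"Relative viability: {viability:.1f}% of control")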
2.5. Determination of Reactive Oxygen Species (ROS) by Flow Cytometry. DF-1 cells were seeded into a 6-well plate at 2 × 10⁵ cells/well overnight. After treatment with LPF for 24 h or not, DF-1 cells were stimulated with D-glucose for 6 h. Subsequently, cells were labeled with 10 μM H₂DCFDA at 37°C for 20 min and then washed with PBS twice. Cells were harvested by cell scraping and collected by centrifugation at 800 rpm for 5 min. The cell pellet was resuspended in PBS for ROS detection by flow cytometer (Beckman Coulter Epics XL) with the FITC fluorescence channel.
2.6. Histological Observation. EDD 5 chicken embryos were fixed with 4% paraformaldehyde for 3 days and then embedded in paraffin and sliced into 5 μm sections according to the conventional protocol. Paraffin sections of embryos were dewaxed with xylenes and then rehydrated with an ethanol gradient. According to the instructions of the Hematoxylin-Eosin (H&E) Staining Kit, sections were stained for 5 min with hematoxylin and washed with flowing water for 10 min. Next, eosin staining was performed for 1 min. All the sections were dehydrated in a gradient ethanol series and vitrified by dimethylbenzene according to the conventional protocol. Histological morphology of embryo tissues was observed and photographed under an automatic scanning microscope.
2.7. Western Blotting. On EDD 3.5, chicken embryos were taken for Western blotting detection. Embryonic tissue was weighed and added into RIPA lysis buffer at a mass/volume ratio of 1 : 5 with homogenization for total protein extraction. Cell pellets were directly lysed for 30 minutes in RIPA lysis buffer. After centrifugation at 12,000 rpm at 4°C, the supernatants were collected, and protein concentrations were quantified by the BCA method. The protein samples were denatured by boiling in 1× loading buffer at 100°C for 10 min. The proteins were separated by SDS-PAGE and then transferred to a PVDF membrane. The membrane was blocked with 5% nonfat milk solution at room temperature for 1 h. Blots were incubated with primary antibodies, including anti-Pax3 (1 : 500), anti-NRF2 (1 : 1000), anti-Keap1 (1 : 1000), anti-O-GlcNAc (1 : 1000), and anti-β-Actin (1 : 3000), at 4°C overnight. Subsequently, HRP-labeled goat anti-rabbit or anti-mouse secondary antibody (1 : 3000) was incubated at room temperature for 2 h. Finally, blots were detected by the ECL substrate system and visualized by a Tanon-5200 Image Analyzer.
2.8. Measurement of GSH/GSSG. EDD 5 chicken embryos were homogenized in ice-cold PBS buffer and subsequently centrifuged at 4°C for 10 min at 12,000 rpm. The supernatants were subjected to measurement of total GSH and reduced GSH by a commercial assay kit according to the manufacturer's instructions: GSH/GSSG ratio = reduced GSH / (total GSH − reduced GSH).
2.9. Determination of Malondialdehyde (MDA) Content. DF-1 cell lysates were prepared with ultrasound in ice-cold PBS. The supernatants were collected by centrifugation at 4°C for 10 min at 12,000 rpm, and the protein level was detected by the BCA method. Determination of MDA content was conducted with a commercial assay kit according to the manufacturer's instructions. The MDA contents were normalized by the protein amount.
2.10. Immunofluorescence. DF-1 cells were seeded into 35 mm glass-bottom dishes overnight. After the indicated treatment, cells were fixed with 4% paraformaldehyde for 5 min and then washed in PBS 3 times. Cells were permeabilized with 0.1% Triton X-100 for 15 min and then blocked with 5% bovine serum albumin (BSA) for 1 h at room temperature. Next, cells were incubated with rabbit anti-NRF2 antibody (1 : 100) at 4°C overnight. Alexa Fluor 488 goat anti-rabbit (1 : 500) secondary antibody was added for 2 h at room temperature in the dark. DAPI was used to label the nucleus for 10 min at room temperature, protected from light. Fluorescence detection and imaging were conducted with a fluorescence microscope.
2.11. Oxygen Radical Absorbance Capacity (ORAC). The ORAC assay uses fluorescein sodium as a fluorescent probe. Trolox acts as a free radical scavenger to protect the probe against attack by AAPH-generated peroxyl radicals. The ORAC reaction was carried out in 75 mmol/L phosphate buffer solution (pH 7.2). Twenty microliters of LPF or cell lysate sample, 20 μL of phosphate buffer, and 20 μL of fluorescein sodium were added to a 96-well plate, and then 140 μL of AAPH was added into each well to start the reaction. The microplate was placed in a BioTek microplate reader, and fluorescence intensity was measured every 2 min at an excitation wavelength of 485 nm and an emission wavelength of 527 nm at 37°C. The ORAC value was calculated using the net area under the fluorescence decay curve with Trolox as the standard.

2.13. Statistical Analysis. All values are presented as mean ± SD. Statistical significance was analyzed by one-way analysis of variance (ANOVA) followed by Tukey's multiple comparisons test (GraphPad Prism software, San Diego, CA, USA). p < 0.05 was considered statistically significant.
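The ORAC calculation described above (net area under the fluorescence decay curve, expressed in Trolox equivalents) can be sketched as follows; all numbers, including the standard-curve parameters, are hypothetical and only illustrate the computation.

# Sketch of the ORAC computation: integrate each fluorescence decay curve
# (one reading every 2 min), subtract the blank AUC, and convert the net
# AUC to Trolox equivalents via a linear standard curve. Hypothetical data.
import numpy as np

t = np.arange(0, 120, 2)  # time points in minutes, one reading per 2 min

def net_auc(fluorescence: np.ndarray, blank: np.ndarray) -> float:
    # Normalize each curve to its initial intensity before integrating.
    auc_sample = np.trapz(fluorescence / fluorescence[0], t)
    auc_blank = np.trapz(blank / blank[0], t)
    return auc_sample - auc_blank

# Linear Trolox standard curve: net AUC = slope * [Trolox, uM] + intercept.
slope, intercept = 2.4, 1.1   # hypothetical fit parameters
sample_net_auc = 30.0         # hypothetical net AUC for a sample
trolox_equiv = (sample_net_auc - intercept) / slope
print(f"ORAC value: {trolox_equiv:.1f} uM Trolox equivalents")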
LPF Moderates Hyperglycemia-Induced NTD in EDD 2 Chick Embryos
To evaluate the effects of LPF on hyperglycemia-induced NTD in the chick embryo model, sufficient LPF was synthesized and then purified by HPLC, followed by identification by MS analysis (Figures 1(a) and 1(b)). Chicken embryos gradually begin to close their neural grooves about 25-29 hours after hatching to form the neural tube. Hence, fertilized chick eggs were challenged with a high concentration of D-glucose (0.4 mmol/egg) 24 h after hatching (EDD 1) via the air cell to produce the NTD model. LPF peptide was injected into chicken embryos at EDD 0 (Figures 2(a) and 2(b)). As shown in Figure 2(c), we observed the morphology of EDD 2 chicken embryos through a stereoscopic microscope and found that high glucose exposure caused severe embryonic malformation. In addition, quantitative analysis showed that high glucose dramatically reduced the somite pair numbers (p < 0.001) and length of chick embryos (p < 0.001) (Figures 2(d) and 2(e)). However, the addition of 10 nmol/egg or 50 nmol/egg LPF significantly reversed the inhibitory effect of high glucose on embryo development, with increased somite pair numbers (p < 0.001, p < 0.001) and embryonic length (p < 0.001, p < 0.001) (Figures 2(c)-2(e)). LPF at 100 nmol/egg had no protective effect on hyperglycemia-induced NTD, which suggests that high doses of LPF may have potential adverse effects on embryonic development (Figures 2(c)-2(e)). The above results indicate that LPF could protect against hyperglycemia-induced NTD in EDD 2 chick embryos.
LPF Mitigates Hyperglycemia-Induced NTD in EDD 5 Chick Embryos
Our previous study found that various patterns of manifestation of NTD caused by high glucose treatment were observed in EDD 5 chicken embryos [21]. Therefore, we next investigated the effect of LPF on hyperglycemia-induced embryonic NTD in EDD 5 chick embryos. As shown in Figure 3(a), high glucose caused obvious NTD and death, while LPF at 10 and 50 nmol/egg lowered the NTD rate and death rate. This indicated that LPF treatment had beneficial effects in improving the survival of chicken embryos and preventing the incidence of NTD. Similarly, both doses of LPF treatment significantly reduced the body weight loss of EDD 5 chicken embryos caused by high glucose exposure (p < 0.001, p < 0.001) (Figure 3(b)). To further assess the protective effect of LPF on hyperglycemia-induced NTD, the morphology of whole-mount embryos was observed under a stereomicroscope. As shown in Figure 3(c), the EDD 5 embryos of the control group were structurally complete with good development of all embryonic parts, while high glucose exposure was detrimental to embryonic development and caused malformation. We then observed the neural tubes of the chick embryos by tissue sections and HE staining. Compared with normal embryos, in which the neural tube was well closed, high glucose led to an incomplete closure at the dorsal part of the neural tube (Figure 3(d)). However, both 10 and 50 nmol/egg of LPF improved the morphology of chicken embryos and reduced the incidence of abnormal neural tube closure (Figures 3(c) and 3(d)). Folic acid (FA), a well-known periconceptional agent used as the positive control, also exhibited excellent embryo protection against high glucose-induced NTD.
LPF Inhibits High Glucose-Induced Oxidative Damage in DF-1 Cells and Chicken Embryos
Antioxidant LPF peptide was purified from corn gluten meal hydrolysate and scavenges a variety of ROS, including the DPPH radical, ABTS radical, hydroxyl radical, and superoxide radical anion [24,25]. Therefore, we sought to explore whether LPF exerts antioxidant activity in cells and chick embryo tissue. First, we evaluated the AAPH radical scavenging capacity of LPF by ORAC assay. As shown in Figure 5, LPF itself had good free radical scavenging capacity, and LPF treatment enhanced the antioxidative activities of DF-1 cell lysates (p < 0.05), suggesting that LPF could enter cells to increase intracellular antioxidant capacity (Figure 5(b)). Based on the above high glucose-induced oxidative stress model in DF-1 cells, we also examined the effect of LPF on intracellular ROS levels using the H₂DCFDA probe. As shown in Figure 5(c), high glucose promoted the production of intracellular ROS, while LPF inhibited the accumulation of ROS dose-dependently. Excessive ROS attacks lipids containing carbon-carbon double bonds, such as polyunsaturated fatty acids (PUFAs), a process commonly known as lipid peroxidation. MDA is one of the metabolic products of lipid peroxidation and is considered a marker of lipid peroxidation [28]. A significantly increased MDA level was found in the high glucose group (p < 0.01), while LPF (5 and 10 μM) decreased the content of MDA (p < 0.05, p < 0.01) (Figure 5(d)). GSH, a natural antioxidant tripeptide with a thiol group, fights various intracellular and extracellular oxidants. GSH is oxidized to GSSG and regenerated from the oxidized form with consumption of NADPH. Hence, the GSH/GSSG ratio is considered an indicator of cellular redox status. Compared with the control group, the GSH/GSSG ratio of chicken embryos was significantly decreased in the high glucose group (p < 0.001), indicating an oxidative stress response. Treatment with LPF (10 or 50 nmol/egg) significantly increased the GSH/GSSG ratio (p < 0.01, p < 0.001) (Figure 5(e)). The above results indicate the protective effects of LPF against oxidative stress in DF-1 cells and chicken embryos. NRF2 plays an important role in the antioxidant response to hyperglycemia-associated oxidative stress. Activation of NRF2 upregulates the expression of antioxidant genes to promote scavenging of ROS and reduce accumulation of oxidative products [29]. We further investigated whether the antioxidant effect of LPF was associated with activation of NRF2 signaling. First, the expression of NRF2 was detected in EDD 3.5 chicken embryo tissue by qPCR and Western blotting. As shown in Figure 5(f), we observed a downregulation of the NRF2 protein level in the high glucose group with unchanged mRNA expression, which was possibly related to the elevated level of Keap1, an endogenous inhibitor of NRF2 (Figure 5(g)). In contrast, LPF treatment inhibited the expression of Keap1 and increased the content of NRF2 protein (Figures 5(f) and 5(g)). Furthermore, we found that LPF promoted the expression of NRF2 in DF-1 cells under normal conditions in vitro (Figure 5(h)). Upon activation, NRF2 translocates into the nucleus and then initiates transcription of downstream genes. Subsequently, we examined the effect of LPF on the translocation of NRF2 in DF-1 cells by immunofluorescence assay.
The immunofluorescence result showed that LPF (5 and 10 μM) promoted NRF2 translocation from the cytoplasm to the nucleus (Figure 5(i)). The above results indicate that LPF promotes the expression and activation of NRF2.
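The group comparisons reported throughout this section rely on one-way ANOVA. As a minimal, hedged sketch of how such a comparison might be run, the snippet below applies scipy's one-way ANOVA to three hypothetical groups; the values are illustrative placeholders, not data from this study, and a post hoc test (e.g., Tukey's HSD) would still be needed for pairwise comparisons.

```python
# Minimal sketch: one-way ANOVA across treatment groups, as used for the
# ROS/MDA/GSH readouts above. All values are hypothetical placeholders.
from scipy import stats

control = [1.00, 1.10, 0.95, 1.05]            # e.g., relative MDA level
high_glucose = [1.85, 1.95, 2.10, 1.80]
high_glucose_lpf = [1.30, 1.40, 1.25, 1.35]   # high glucose + LPF

f_stat, p_value = stats.f_oneway(control, high_glucose, high_glucose_lpf)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")
```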
LPF Inhibits Hyperglycemia-Induced O-GlcNAcylation and Restores Pax3 Protein Level.
We next investigated the effect of LPF on O-GlcNAcylation. High glucose exposure raised the overall O-GlcNAcylation level of EDD 3.5 chicken embryo tissue compared to the control group. Nevertheless, this abnormal protein O-GlcNAcylation was mitigated by LPF treatment (10 and 50 nmol/egg) (Figure 6(a)). In addition, LPF (5 and 10 μM) also alleviated the overall O-GlcNAcylation level in high glucose-treated DF-1 cells (Figure 6(b)), suggesting that LPF may regulate the dynamic process of cellular O-GlcNAcylation. Pax3, a member of the Pax transcription factor family, plays a critical role in neural tube development, and loss or mutation of Pax3 can lead to embryonic NTD [30,31]. By contrast, restoration of Pax3 expression in the neural crest contributes to rescuing embryonic development [32]. Our previous study found that high glucose promoted the O-GlcNAcylation of Pax3 and led to a decrease in its protein content [21]. Therefore, we examined the effect of LPF on the mRNA and protein levels of Pax3 by qPCR and Western blotting. Consistent with the previous results, high glucose did not significantly affect the transcriptional level of Pax3 (p > 0.05), and LPF did not significantly change its mRNA content (p > 0.05). Notably, LPF restored the high glucose-induced decrease in Pax3 protein (Figure 6(c)). In order to further confirm the protective role of Pax3 in LPF treatment against high glucose-induced NTD, we detected the downstream gene expression of Pax3 in EDD 3.5 chicken embryo tissues. As shown in Figure 6(d), two Pax3 downstream genes, Met and Ncam1, significantly decreased under the high-glucose condition (p < 0.001, p < 0.01), corresponding to the decreased Pax3 protein level in Figure 6(c). LPF treatment significantly increased the mRNA levels of Met (10 nmol/egg, p < 0.05; 50 nmol/egg, p < 0.01) and Ncam1 (50 nmol/egg, p < 0.01) compared to the high-glucose group. The above results indicate that LPF alleviates hyperglycemia-induced O-GlcNAcylation and restores the Pax3 protein level.
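The qPCR results above are reported as fold changes relative to the control group with β-actin as the housekeeping gene (see the figure legends). One common way to derive such fold changes is the 2^(−ΔΔCt) method; the sketch below assumes this method, and all Ct values in it are hypothetical placeholders rather than data from this study.

```python
# Hedged sketch of the 2^(-delta-delta-Ct) relative-expression calculation.
# Ct values are hypothetical placeholders.

def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Expression of a target gene relative to the control group,
    normalized to a housekeeping gene (e.g., beta-actin)."""
    d_ct_sample = ct_target - ct_ref             # normalize treated sample
    d_ct_control = ct_target_ctrl - ct_ref_ctrl  # normalize control sample
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# Pax3 in a high-glucose sample vs. control: a fold change near 1 would
# match the unchanged Pax3 mRNA level reported above.
print(fold_change(ct_target=24.8, ct_ref=16.0,
                  ct_target_ctrl=24.7, ct_ref_ctrl=16.1))  # ~0.87
```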
Discussion
Corn peptides are prepared by enzymatic hydrolysis or fermentation of corn protein, and the amino acid sequences of various active peptides have been identified. We found that LPF, derived from corn peptides, alleviated hyperglycemia-induced NTD in chicken embryos. In vitro and in vivo results suggest that LPF ameliorated oxidative stress via activating NRF2 signaling, reduced the abnormal level of O-GlcNAcylation, and restored the expression of Pax3 (Figure 7). Studies have shown that small peptides can be directly absorbed into the blood circulation through the intestinal barrier. Therefore, the biological activities of the oligopeptide can be maintained after oral administration [33]. LPF supplementation is therefore expected to be useful for preventing or treating diabetic embryopathy.
The redox balance of the body is maintained by antioxidant enzymes and reduced molecules in the physiological state to protect tissues and cells from oxidative damage. We found that LPF alleviated high glucose exposure-induced oxidative stress in vivo and in vitro. Among the many endogenous antioxidant molecules, GSH plays the most important role in maintaining the redox state. N-Acetylcysteine (NAC), a precursor of cysteine which facilitates GSH supplementation, has been shown to prevent
congenital heart defects induced by pregestational diabetes in mice [34]. In addition, NAC can elevate intraplatelet GSH in blood from type 2 diabetes patients and has therapeutic potential for reducing thrombotic risk [35]. Our results show that the LPF peptide increases the GSH/GSSG ratio in chicken embryo tissues, which may be related to reducing the consumption of GSH by ROS or promoting the regeneration of GSH from GSSG. MDA is a reactive aldehyde metabolite of PUFA peroxides and can form adducts with biological molecules such as proteins and nucleic acids. A study found that the MDA adduct of hemoglobin may be associated with significant morbidity in preterm infants [36]. In addition, some clinical studies show that plasma MDA levels in adult type 2 diabetic patients are higher than in healthy people [37][38][39]. Importantly, an elevated serum level of MDA is positively correlated with a high diabetic peripheral neuropathy score [40]. Our in vitro results show that LPF significantly reduces MDA in high glucose-treated DF-1 cells, possibly in part due to the inhibition of ROS by LPF. In fact, many corn peptides, including LPF, have antioxidant activity with free radical scavenging capacity, reducing power, and metal chelating activity [23]. LPF shows excellent AAPH free radical (a water-soluble peroxyl radical) scavenging activity in a cell-free system and enhanced the antioxidative ORAC value of LPF-treated DF-1 cell lysates, suggesting that LPF could enter cells to increase intracellular antioxidant capacity. Dysregulation of redox signaling from the Keap1-NRF2 axis in utero could be an important factor priming disease susceptibility in offspring [41]. Our results also show that Keap1 and NRF2 are affected, with marked oxidative stress, in the high glucose-induced NTD chick embryo. Accumulating research indicates that NRF2 extensively regulates cellular redox status at the transcriptional level. The therapeutic potential of NRF2 activation is widely recognized for oxidative stress and related diseases [42,43]. Some natural NRF2 activators have been found, including sulforaphane, curcumin, and resveratrol, for the treatment of hyperglycemia-related diseases [44]. Under baseline conditions, NRF2 forms an inactive complex with its endogenous repressor Keap1 in the cytoplasm for degradation by the ubiquitin-proteasome pathway. Therefore, activation of NRF2 can be achieved by reducing the level of Keap1 or interfering with the Keap1-NRF2 interaction. Studies have shown that Keap1 degradation can occur via autophagic and proteasome-dependent pathways [45,46]. Notably, some peptide inhibitors have been found to disrupt this interaction based on the structural basis of Keap1 interactions with NRF2 [47][48][49]. The LPF peptide could upregulate the protein expression of NRF2 and promote its translocation from the cytoplasm to the nucleus. Although we found that LPF reduced the Keap1 level, the underlying mechanism still needs further investigation.
An abnormal O-GlcNAcylation modification of total protein was found in chicken embryos and DF-1 cells with high glucose exposure. Two opposing enzymes, O-GlcNAc transferase (OGT) and the hydrolase O-GlcNAcase (OGA), dynamically catalyze the installation and removal of O-GlcNAc, respectively [50]. LPF exerted an inhibitory effect on overall O-GlcNAcylation, which was possibly related to the regulation of both enzymes. In fact, O-GlcNAcylation has profound physiological functions in embryonic development. Recent studies have found that O-GlcNAcylation modifies some Polycomb group proteins and impacts their functions in regulating early embryogenesis, stem cell differentiation, and other cellular processes [51]. However, excessive O-GlcNAcylation has a detrimental effect on embryo development, and OGT inhibition ameliorates NTD in diabetic embryonic mice [52]. Our previous study showed that high glucose led to
enhanced O-GlcNAcylation modification of the Pax3 protein and reduced its stability [21]. In addition, a null mutation of Pax3 can cause the development of NTD in mice [31,53]. Hence, upregulation of Pax3 expression is a potential means to prevent or treat NTD. Although LPF failed to change the mRNA level of Pax3, its protein expression was restored, which was related to the inhibition of O-GlcNAcylation by LPF. Pax3 functions as a transcription factor and regulates the expression of multiple downstream genes, such as Met and Ncam1 [54,55]. Their expression decreased after high glucose exposure and recovered with LPF, implying restoration of Pax3 function.
Conclusions
In conclusion, the tripeptide LPF was found to alleviate hyperglycemia-induced NTD in chicken embryos and cellular damage in high glucose-exposed DF-1 chicken embryo fibroblasts by regulating oxidative stress and abnormal O-GlcNAcylation. Mechanistic studies indicated that the resistance of LPF to oxidative damage may involve promotion of NRF2 expression and nuclear translocation. In addition, LPF could mitigate the overall O-GlcNAcylation level of cellular proteins and restore the content of Pax3 protein. Our study demonstrates the protective effect of the tripeptide LPF on embryos and supports its further application as a nutritional supplement for the protection of the embryonic neural tube affected by GDM in the future. However, some limitations still exist in the current study. We focused only on investigating the effect of LPF on the neural tube at an early stage of chick embryo development. In fact, a high-glucose environment has deleterious effects on the development of various organs and systems. Hence, the potential embryo-protective efficacy of LPF needs further evaluation at later stages of chick embryo development. Moreover, mammalian models such as mice have not yet been used to verify the current findings. It is worth noting that the mechanistic studies on the antioxidative and anti-O-GlcNAcylation efficacy of LPF provide directions for subsequent related research. We confirm that antioxidation and anti-O-GlcNAcylation are effective strategies to protect embryos against GDM. In addition, supplementation with bioactive peptides like LPF is expected to help prevent other oxidative stress- and O-GlcNAcylation-related diseases.
|
v3-fos-license
|
2022-01-28T16:10:00.592Z
|
2022-01-26T00:00:00.000
|
246302492
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2072-6694/14/3/625/pdf",
"pdf_hash": "0776db3f4ad84492a4de0e3cbdfe899671e761ab",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46010",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "e0c8b5f796f19d26261042723686df77e528cc6c",
"year": 2022
}
|
pes2o/s2orc
|
Clinical Evidence for Thermometric Parameters to Guide Hyperthermia Treatment
Simple Summary Hyperthermia (HT) is a promising therapeutic option for multiple cancer entities as it has the potential to increase the cytotoxicity of radiotherapy (RT) and chemotherapy (CT). Thermometric parameters of HT are considered to have potential as predictive factors of treatment response. So far, only limited data about the prognostic and predictive role of thermometric parameters are available. In this review, we investigate the existing clinical evidence regarding the correlation of thermometric parameters and cancer response in clinical studies in which patients were treated with HT in combination with RT and/or CT. Some studies show that thermometric parameters correlate with treatment response, indicating their potential significance for treatment guidance. Thus, the establishment of specific thermometric parameters might pave the way towards a better standardization of HT treatment protocols. Abstract Hyperthermia (HT) is a cancer treatment modality which targets malignant tissues by heating to 40–43 °C. In addition to its direct antitumor effects, HT potently sensitizes the tumor to radiotherapy (RT) and chemotherapy (CT), thereby enabling complete eradication of some tumor entities as shown in randomized clinical trials. Despite the proven efficacy of HT in combination with classic cancer treatments, there are limited international standards for the delivery of HT in the clinical setting. Consequently, there is a large variability in reported data on thermometric parameters, including the temperature obtained from multiple reference points, heating duration, thermal dose, time interval, and sequence between HT and other treatment modalities. Evidence from some clinical trials indicates that thermal dose, which correlates with heating time and temperature achieved, could be used as a predictive marker for treatment efficacy in future studies. Similarly, other thermometric parameters when chosen optimally are associated with increased antitumor efficacy. This review summarizes the existing clinical evidence for the prognostic and predictive role of the most important thermometric parameters to guide the combined treatment of RT and CT with HT. In conclusion, we call for the standardization of thermometric parameters and stress the importance for their validation in future prospective clinical studies.
Introduction
Hyperthermia (HT) is a clinical treatment for cancer which externally or internally heats malignant cells to a temperature of 40-43 °C for a suitable period of time [1,2]. Heat delivered to tumor tissues can act as a cytotoxic or sensitizing agent to enhance their remission, or at least regression, through several biological mechanisms and pleiotropic effects when combined with other conventional cancer treatment techniques, such as radiotherapy (RT) and/or chemotherapy (CT).
The biological effects of HT, which all favor its use in combination with RT and CT, include direct cytotoxicity, radiosensitization, chemosensitization, and immune modulation. HT-induced cell lethality is predominantly a result of conformational changes and the destabilization of macromolecular structures, including disruptions of cell metabolism, inhibition of DNA repair, and triggering of cellular apoptotic pathways [3][4][5][6]. Direct HT-induced cell lethality is known to be intrinsically tumor-selective for hypoxic cells [7]. During heating, enhanced blood perfusion in tumor tissues influences the radiosensitizing and chemosensitizing effects of HT by increasing the tumor oxygenation level and the local concentration of CT drugs, respectively [4,8,9]. Radiosensitization and chemosensitization effects, as well as the inhibition of DNA synthesis and repair, depend, at the molecular level, on the aggregation of proteins produced by HT-induced denaturation [10]. Moreover, protein unfolding and the intracellular accumulation of proteins trigger molecular chaperones, including the heat shock proteins (HSPs) [11]. The release of HSPs and other "immune activating signals" underlies the inflammatory and immunogenic responses to HT in combination with RT and/or CT and can promote anti-tumor immunity [12][13][14]. Exploiting the molecular and physiological mechanisms evoked by HT can improve the efficacy of RT and CT. Therefore, HT in cancer treatment is used mainly within the framework of multimodal treatment strategies [3,8].
Multiple preclinical studies have been designed to unravel the relationship between biological mechanisms induced by HT and thermometric parameters as predictors of tumor response [15][16][17][18][19][20]. The parameters investigated in these studies include the temperature achieved during HT [6,15], heating duration, thermal dose [21], time interval between HT and the other treatment modality [15,22,23], the number of HT sessions [24], and the sequence of treatment modality [15,25,26]. All of these parameters were shown to influence the extent to which HT enhances the effect of RT or CT using cellular assays and in vivo models. In addition to thermometric parameters, the treatment parameters of RT and CT, such as total radiation dose, number of RT fractions, type of chemotherapeutic drug and the number of CT cycles, prescribed for a specific clinical indication, also play a significant role in attaining a therapeutic window with synergistic effects when combined with HT [25,27,28].
The effectiveness of HT combined with RT and/or CT has been investigated in many clinical studies of different tumor types. Unfortunately, to date, there is no consensus on HT delivery when combined with these cancer treatment modalities, resulting in substantial heterogeneity of the applied HT treatment protocols. Any comparison of these studies in terms of outcome should be made with caution in view of this heterogeneity in HT protocols. A good understanding of thermometric parameters and their interpretation is mandatory in this regard. However, there is inconclusive clinical evidence about the relationship of thermometric parameters with both tumor and normal tissue responses to HT in combination with RT and/or CT. The reason for this is that thermometric parameters are inconsistently reported or analyzed in prospective clinical studies, and the retrospective analyses are conflicting. For instance, minimum tumor temperature was identified as a prognostic factor in a few studies [29][30][31]. However, another study showed that different metrics, such as the temperature achieved in 90% (T90), 50% (T50), and 10% (T10) of the target volume, were more strongly correlated with cancer response than the minimum achieved temperature [32]. Furthermore, a short time interval between HT and RT was shown to significantly predict treatment outcome in retrospective analyses of cervical cancer patients [22]. However, conflicting results have also been reported [33], which may be attributed to differences in time interval, tumor temperature achieved, and patient population [34]. Thermal dose has been successfully tested in several clinical trials as a predictor of tumor response to combined RT and HT treatment [35][36][37][38][39][40][41][42]. These trials did not result in established thresholds of thermal dose for treating different cancer sites, even though the European Society for Hyperthermic Oncology (ESHO) guidelines recommend that superficial HT maintain T50 ≥ 41 °C and T90 ≥ 40 °C [43]. The concept of a relationship between thermometric parameters and treatment outcome is highly attractive because it could improve the understanding of tumor-specific mechanisms of interaction between HT and RT and/or CT. Defining thermometric parameters is therefore important for a meaningful clinical evaluation of HT treatment outcomes when combined with RT and/or CT.
A limited amount of clinical information is available about the effect of thermometric parameters on treatment response. Increasing awareness of the importance of such parameters on the efficacy of HT combined with other cancer treatments is important, and thus these parameters should be evaluated and reported routinely. Achieving the defined thermometric parameters during HT treatment would further increase the effectiveness of biological mechanisms when combined with RT and/or CT. Future prospective clinical studies should include description of all relevant thermometric parameters to pave the way towards the proper analysis and standardization of thermometric parameters for each clinical indication treated with HT in combination with RT and/or CT.
This work summarizes the evidence underlying thermometric parameters as predictors of treatment outcomes as reported in clinical studies using HT in combination with RT and/or CT for treating different cancer types and emphasizes the need for reference thermometric parameters to improve HT efficacy. For completeness, the findings pertaining to thermometric parameters from preclinical studies are also discussed, to provide comprehensive information about their significance and underlying mechanisms.
Data Sources and Search Strategies
The literature search included the databases clinicaltrials.gov and pubmed.ncbi.nlm.nih.gov from March to September 2021, and randomized, prospective, and retrospective clinical studies meeting specific criteria were identified. The search terms were hyperthermia, cancer treatment, randomized clinical studies, prospective clinical studies, and retrospective clinical studies. Those terms were used mainly to search titles and abstracts. We also found articles which were recommended, suggested, or sent to us on the internet. Additionally, we hand-searched the reference lists of the most relevant clinical studies and review articles.
Inclusion and Exclusion Criteria of Clinical Studies
This non-systematic review included randomized, prospective, and retrospective clinical studies that recruited patients with cancer who were treated with HT and RT and/or CT. The data from randomized trials are only from the patient group which received HT in combination with either RT and/or CT. Data from the non-HT arm were not extracted.
The main inclusion criterion was the use of either electromagnetic, radiative, or capacitive HT systems, independent of cancer type. Another criterion was the recruitment of more than 10 patients in prospective and retrospective studies. Retrospective studies were only included if an analysis of thermometric parameters for HT in combination with RT had been performed.
Clinical studies which used the thermal ablation technique, interstitial-based/modulated electro HT techniques, interstitial RT techniques, high intensity focused ultrasound (HIFU) HT, whole body HT, and studies in pediatric patients were not included in this review. Pilot and feasibility studies were also excluded.
Data Extraction and List of Variables Included
The data extracted from the clinical studies contained the following information: • First author of the study • Study design: prospective or retrospective • RT treatment data: total dose, number of fractionations • CT treatment data: drug and concentration prescribed, number of cycles • Thermometric parameters • Reported clinical endpoints • Reported relationship between thermometric parameters and clinical endpoint
A Summary of HT Techniques
The clinical studies included in this review administered HT using externally applied power with electromagnetic-based techniques, such as radiofrequency, microwave, or infrared. These techniques differ with regard to their application to treat superficial or deep-seated tumors, as summarized elsewhere [44].
For superficial tumors, the electromagnetic radiative and capacitive systems are those used in the clinical trials included in this review. The superficial HT techniques and their application are explained in detail elsewhere [43]. The radiative and capacitive systems differ in the way they are applied in the clinic. A study showed that for superficial cancers, radiative HT systems perform better than capacitive systems in terms of temperature distribution [45]. The commercially available radiative superficial systems are the BSD-500 device (Pyrexar Medical, Salt Lake City, UT, USA), the ALBA ON4000 (Alba Hyperthermia, Rome, Italy), and contact flexible microwave applicators (SRPC Istok, Fryazino, Moscow region, Russia). The Thermotron RF8 (Yamamoto Vinita Co, Osaka, Japan), Oncotherm (Oncotherm Kft., Budapest, Hungary), and Celsius TCS (Celsius42 GmbH, Cologne, Germany) are examples of commercial capacitive systems used for superficial tumors.
Different HT techniques with unique specifications, characteristics, and limitations are used to treat deep-seated tumors [46]. The ESHO guidelines provide information on how and when a specific HT device should be used to treat deep-seated tumors [46,47]. The radiative HT systems for deep-seated tumors used in clinical trials are the BSD-2000 device (Pyrexar Medical, Salt Lake City, UT, USA), the ALBA 4D (Alba Hyperthermia, Rome, Italy), and the Synergo RITE (Medical Enterprises Europe B.V., Amstelveen, The Netherlands); capacitive systems are the Oncotherm (Oncotherm Kft., Budapest, Hungary), Celsius TCS (Celsius 42 GmbH, Cologne, Germany), and Thermotron RF8 (Yamamoto Vinita Co, Osaka, Japan). A simulation study showed a difference in heating patterns between radiative and capacitive HT for deep-seated tumors [48]. The radiative technique yields more favorable simulated temperature distributions for deep-seated tumors than the capacitive technique.
Definition of Thermometric Parameters
In this work, the thermometric parameters were extracted from the selected prospective and retrospective clinical studies. The definitions of these parameters are listed in Table 1.
Table 1. Thermometric parameters and their definitions.
Heating temperature:
Tmin: minimum temperature achieved in the target volume (°C).
Tmax: maximum temperature achieved in the target volume (°C).
Tavg: average temperature achieved in the target volume (°C).
T10: temperature achieved in 10% of the target volume (°C).
T20: temperature achieved in 20% of the target volume (°C).
T50: temperature achieved in 50% of the target volume (°C).
T80: temperature achieved in 80% of the target volume (°C).
T90: temperature achieved in 90% of the target volume (°C).
Heating duration:
t_pre: warm-up period, the time required to achieve the desired treatment temperature (min).
t_treat: therapeutic time at the treatment temperature (min).
Time interval:
t_int: the time interval between HT and RT and/or CT.
Sequencing: the scheduling order of HT with RT and/or CT.
Temperature measurements in the target volume or surrounding tissue are crucial for assessing treatment quality and are represented by temperature metrics. During an HT session, the temperature is usually monitored and recorded using high-resistance thermistor probes, fiber-optic temperature probes, or thermocouples by invasively placing the probes in the target volume or in its vicinity [43,46,50]. The ESHO guidelines recommend that, after definition of the tumor volume as a planning target volume, a target point should be defined where the probe is positioned intraluminally or intratumorally [46]. In addition, the guidelines strongly suggest keeping a record of thermometry measurement points within or close to the tumor sites [43]. After completion of the HT session, the temperature data recorded during t_treat are evaluated by computing temperature metrics. For instance, Tmax is calculated as the maximum temperature value recorded in the target volume (Table 1). T10, another maximum-temperature metric, is computed as the temperature value received by 10% of the target volume [32]. The other temperature metrics listed in Table 1 are computed similarly. In current practice, the thermometric parameters and thermal dose are computed by software integrated into the HT systems or using thermal analysis tools such as RHyThM [51].
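Since Tx denotes the temperature achieved (i.e., exceeded) in x% of the target volume, T90 corresponds to the 10th percentile of the recorded temperatures and T10 to the 90th percentile. The following is a minimal sketch of how these metrics might be computed from a set of sensor readings; the temperature values are hypothetical, and actual HT software (or tools such as RHyThM) may differ in implementation detail.

```python
# Hedged sketch: computing temperature metrics from probe readings.
# Readings are hypothetical placeholders (°C) from one time point.
import numpy as np

temps = np.array([40.1, 40.8, 41.2, 41.5, 41.9, 42.3, 42.7, 43.0])

t_min, t_max, t_avg = temps.min(), temps.max(), temps.mean()

def t_metric(temps, x):
    """T_x: temperature achieved in x% of the measured points,
    i.e., the (100 - x)th percentile of the readings."""
    return np.percentile(temps, 100 - x)

t90, t50, t10 = (t_metric(temps, x) for x in (90, 50, 10))
print(f"Tmin={t_min:.1f}, Tavg={t_avg:.1f}, Tmax={t_max:.1f} °C")
print(f"T90={t90:.1f}, T50={t50:.1f}, T10={t10:.1f} °C")
```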
To illustrate how the temperature, t_pre, and t_treat terms are measured in clinical practice, Figure 1 shows the temperature and heating duration parameters of a patient treated with HT at the radiation oncology center of Cantonal Hospital Aarau (KSA) using the BSD-500 system (BSD Medical Corporation, Salt Lake City, UT, USA). The temperature metrics and thermal doses can also be computed using the data from Figure 1. A decade ago, a new thermal dose parameter entitled "TRISE" was proposed by Franckena et al. [36]. However, this parameter has not yet been evaluated in experimental studies. Another newly proposed thermal dose parameter is the area under the curve (AUC) [49]. In contrast to CEM43°C and TRISE, AUC is computed without any prior assumptions by summation over the entire treatment session, including t_pre and t_treat. Similarly to TRISE, AUC has not yet been investigated in preclinical studies. Another HT-related parameter used in this review is the thermal enhancement ratio (TER), defined as 'the ratio between the RT dose required to achieve a specific endpoint and the RT dose to achieve the same endpoint in combination with HT' [52].
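As a rough illustration of the AUC idea, the sketch below integrates a temperature-time curve over the whole session (t_pre plus t_treat) with the trapezoidal rule. This is only one plausible reading of the AUC parameter; the exact formulation in [49] may differ, and both the sampled curve and the alternative 37 °C body-temperature baseline are assumptions made for illustration.

```python
# Hedged sketch of an AUC-style thermal dose: area under the
# temperature-time curve for the entire session (t_pre + t_treat).
# The time/temperature samples are hypothetical placeholders.
import numpy as np

times_min = np.array([0, 5, 10, 20, 30, 40, 50, 60])
temps_c = np.array([37.0, 39.5, 41.0, 41.8, 42.0, 42.1, 41.9, 42.0])

auc_raw = np.trapz(temps_c, times_min)              # °C·min, no assumptions
auc_above_37 = np.trapz(temps_c - 37.0, times_min)  # assumed 37 °C baseline
print(f"AUC = {auc_raw:.0f} °C·min; above 37 °C = {auc_above_37:.0f} °C·min")
```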
Heating Temperature
The responsiveness of a tumor to HT is determined by different heat-induced mechanisms at the cellular level. The oxygenation rate is affected by temperature, as a higher rate was reported at 41-41.5 °C in comparison to a higher temperature (43 °C) in rodent tumors, human tumor xenografts, canine tumors, and human tumors [53]. Heating at 40 °C potentiated the cytotoxicity of CT drugs in human maxillary carcinoma cells [28], and the cytotoxicity was further increased on heating to 43.5-44 °C [54]. In contrast, another preclinical study showed no such dependency at 41-43.5 °C [55]. An in vitro study showed that apoptosis in human keratinocytes occurred at temperatures of 39 °C and above [56]. However, the majority of studies show synergistic actions of HT with RT and CT at temperatures above 41 °C [5,57], leading to the inhibition of DNA repair, chromosomal aberrations, induction of DNA breaks by RT and CT, and protein damage as underlying molecular events of heat treatment [5,58,59]. To benefit from the additive and synergistic effects of HT when combined with RT and/or CT, a uniform temperature should be delivered to the target volume during the whole treatment course.
The temperature metrics are used to present the heating temperature achieved during treatment, not only in the target volume, which encompasses the tumor, but also in adjacent healthy tissue. T90, T80, T50, T20, and T10 are considered to be less sensitive than Tmin, Tavg, and Tmax to the number and arbitrary positioning of sensors in the tissue.
Such temperature metrics can be used to understand the response to heat of various cancer types for a specific duration and, at the same time, the heat-induced effects on surrounding normal tissues. However, except for Tmin and Tmax, most descriptive temperature metrics have no specific reference values yet (Table 2). Few preclinical studies report such metrics (Table 2), even though temperature distributions can be better controlled in preclinical than in clinical studies. In an in vivo study, no temperature variations were observed in tumors as they were recorded intratumorally [15]. Temperature at a reference value with minor variations (±0.05 °C) was reported in an in vitro study [60]. In contrast, the temperature data recorded in patients are limited for various reasons. For example, thermistor probes inserted in deep-seated tumors in patients have the potential to cause complications or are sometimes impractical to insert intraluminally or intratumorally [61]. The value of the lowest temperature achieved during HT treatment has been shown to have a prognostic role in describing the biological effects of HT. According to an in vivo study, T90 was a predictive parameter of reoxygenation and radiosensitization effects [62]. An in vitro experiment which investigated the difference in thermal sensitivity between hypoxic and oxic cells demonstrated that direct cytotoxicity induced by HT is more selective for hypoxic cells [7]. Thus, the temperatures required to achieve a comparable thermal enhancement effect of HT vary depending on tissue type and characteristics.
Heating Duration
Temperature fluctuations, such as a decrease of 0.5 °C, have been shown to have a strong effect on the extent of cell kill, which could be compensated by doubling the heating duration [6,63]. The therapeutic ratio, defined as the ratio of thermosensitive liposomal doxorubicin delivered to the heated tumor, increased from 1.9-fold with 10 min of heating to 4.4-fold with 40 min of heating [64]. In an in vivo study, the TER for mouse mammary adenocarcinoma (C3H) increased for heating exposures longer than 30 min at 41.5 °C [15]. A study used mouse leukemia, human cervical carcinoma (HeLa), and Chinese hamster ovary (CHO) cells to demonstrate that the time required to kill 90% of the cells at 43 °C varied by cell type [65]. The survival data from different tissues were analyzed using the Arrhenius equation to understand the effect of t_treat for different cell types [66]. These analyses showed that the reference t_treat value is set at 60 min when heating constantly at the reference temperature (Table 3). Table 3. Reference heating duration parameters for HT.
Heating duration parameters and reference values (min): t_pre, undefined; t_treat, 60.
Heating for longer than 60 min is restricted by thermotolerance, which was observed after 20 min while heating at 43.5 °C [67]. In addition, the surviving fraction of asynchronous CHO cells heated to 41.5 °C decreased with increasing t_treat until the thermotolerance effect appeared [21]. Thermotolerance is activated by different forms of stress, including heat exposure for a specific time [68], and depends on the temperature and the amount of HT damage induced [69]. In an experimental study, the effect of thermotolerance was observed using a human tumor cell line (HTB-66) and CHO cells after 4 h of heating at 42.5 °C and 3 h of heating at either 42.5 or 43 °C [70]. The degree of thermotolerance is determined by cell type, heating temperature, and heating time, including the interval between successive heat treatments [71].
Thermal Dose
The relationship between temperature and t_treat was demonstrated experimentally in two preclinical studies, which showed that the same thermal enhancement of ionizing radiation in cell lines was achieved by heating for 7-11 min at 45 °C or for 120 min at 42 °C [26,72]. It was also shown that different survival rates were obtained when heating asynchronous CHO cells to different temperatures for varying t_treat [66]. These preclinical data showed that heating temperature and t_treat influence thermal damage. The relationship of temperature and t_treat to the biological effects induced by HT is described using the Arrhenius equation, which models the inactivation rate in a biological system [21]. This led to the discovery that the relationship between temperature and t_treat depends on the activation energy required to induce a particular HT-induced biological event, such as protein denaturation [59,66]. The thermal dose concept, CEM43°C, was established to account for the biological effects induced by HT in terms of both temperature and t_treat [21]. More specifically, CEM43°C calculates the equivalent time of an HT treatment session by correlating temperature, t_treat, and the inactivation rate of a biological effect induced by heat based on the Arrhenius equation. The reference temperature of 43 °C was shown as a breakpoint in the Arrhenius plot, with a steeper slope between 41.5 and 43 °C in comparison to 43-57 °C [66]. The threshold values of CEM43°C for tissue damage differ for specific tissues, as identified in in vivo studies, and are reviewed elsewhere [70,73,74]. In addition, these data underline that CEM43°C is an important parameter with biological validity for assessing thermal damage in tissues. CEM43°CT90 is one of the most frequently used thermal dose descriptors at T90, not only in clinical but also in experimental settings. In an in vivo study, Thrall et al. [75] showed a relationship between CEM43°CT90 and local control in canine sarcomas, but not with CEM43°CT50 or CEM43°CT10. Another in vivo study using breast (MDA-MB-231) and pancreatic cancer (BxPC-3) xenografts showed that at relatively low values of CEM43°CT90, tumor volumes could be reduced by exposure to heat alone [76]. However, none of the preclinical studies proposed reference values for clinical validation, as shown in Table 4. Although there is no reference threshold value for CEM43°C, its efficacy in predicting tumor response and local control has been experimentally proven [75,77]. CEM43°C is considered a thermal dose parameter with few weaknesses, which have been discussed elsewhere [78].
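For concreteness, the widely used Sapareto-Dewey formulation of this thermal dose is CEM43°C = sum_i of dt_i * R^(43 - T_i), with R = 0.5 at or above the 43 °C breakpoint and R = 0.25 below it. The sketch below implements this standard formulation and applies it to a hypothetical T90 time course to obtain a CEM43°CT90 value; individual studies may use slightly different constants or breakpoints.

```python
# Hedged sketch of the Sapareto-Dewey CEM43°C thermal dose.
# R = 0.5 at/above the 43 °C breakpoint and 0.25 below are the commonly
# used constants; cited studies may differ in detail.
import numpy as np

def cem43(times_min, temps_c):
    times = np.asarray(times_min, dtype=float)
    temps = np.asarray(temps_c, dtype=float)
    dt = np.diff(times)                     # interval lengths (min)
    t_mid = 0.5 * (temps[:-1] + temps[1:])  # mean temperature per interval
    r = np.where(t_mid >= 43.0, 0.5, 0.25)
    return float(np.sum(dt * r ** (43.0 - t_mid)))

# Applied to a hypothetical T90 trace, this yields CEM43°CT90 in
# equivalent minutes at 43 °C:
times = [0, 10, 20, 30, 40, 50, 60]
t90_trace = [37.0, 40.5, 41.0, 41.2, 41.1, 41.0, 40.8]
print(f"CEM43°CT90 ≈ {cem43(times, t90_trace):.1f} min")
```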
Number of HT Sessions
Thermotolerance is an undesirable side effect of HT which renders tumor cells insensitive to heat treatment for 48 to 72 h [79]. Thermotolerance consists of an induction phase, a development phase, and a decay phase. Each of these components may have its own temperature dependence as well as dependence on other factors, such as pH and the presence of nutrients [80]. Thermotolerance plays an important role in how HT sessions are scheduled during the treatment course. An in vivo study using C3H mouse mammary carcinoma confirmed that preheating for 30 min at 43.5 °C induced thermotolerance for the next heating session [81]. Twice-weekly heating to 43 °C for 60 min in combination with RT at 3 Gray (Gy) per fraction for 4 weeks was shown to result in a steady decline in oxygenation level, suggesting vascular thermotolerance [82]. In comparison, Nah et al. reported that heating at 42.5 °C for 60 min could render the tumor blood vessels resistant to the next heating session after an interval of 72 h [83]. It has also been shown that when HT was delivered daily with RT 5 days a week, no significant thermal enhancement could be detected in comparison to a single HT session, even when heat was delivered simultaneously or sequentially [84]. In agreement with these findings, N_week is defined as 1 or 2 sessions separated by at least 72 h (Table 5). In summary, HT should be delivered once or twice weekly, taking into account the type of cancer, RT fractionation, and CT drug scheduling. For logistical reasons, N_total usually depends on the treatment plan for different cancer sites and the number of RT fractions or CT cycles (Table 5).
Time Interval Parameter between HT and RT and/or CT
The t_int between HT and RT and/or CT is another parameter that affects sensitization, due to time-dependent biological effects and its contribution to thermotolerance.
Recently, an in vitro study of human papillomavirus (HPV)-positive (HPV16+, HPV18+) and HPV-negative cell lines treated with HT at 0, 2, or 4 h before or after RT showed that the shortest t_int resulted in lower cell survival fractions and decreased DNA damage repair [85]. The influence of t_int has been investigated in an in vivo study, which reported that TER is greatest when heat and radiation are delivered simultaneously [15]. Unfortunately, simultaneous delivery is currently technically impossible in clinical routine, and therefore heat and radiation are usually delivered sequentially. A very short t_int of approximately five minutes is considered an almost simultaneous application [86]. Dewey et al. concluded that HT should be applied simultaneously or within 5-10 min on either side of radiation to benefit maximally from the radiosensitizing effect of heat [6]. TER decreases faster for normal cells than for cancerous cells when t_int ≤ 4 h between HT and RT [15]. Thus, it can be argued that a slightly longer t_int could ensure the sparing of normal tissue from radiosensitization before or after RT. At a t_int longer than 4 h, no sensitization effects induced by HT were observed [15,85]. The wide range of acceptable t_int values reported in experimental studies is from 0 (when CT is delivered during HT) to 4 h (Table 6). Table 6. Reference t_int parameter for HT in combination with RT or CT.
t_int reference value: 0-240 min (0 when CT is delivered during HT; see text).
In contrast to RT, CT can be given simultaneously with, or immediately after or before, HT [87]. A preclinical study, in which cisplatin and heat were used to treat C3H xenografts, showed that a higher additive effect can be obtained when cisplatin is given 15 min before HT in comparison with an interval longer than 4 h [55].
Furthermore, HT has been shown to sensitize the effects of gemcitabine at 43 °C when the drug was given 24 h after heating [88], whereas another study showed an optimal effect when the drug was given 24-48 h before heating [89]. The type of CT agent and its interaction with heat are factors which determine the t_int between HT and CT (Table 6).
Sequencing of HT in Combination with RT and/or CT
An additional predictive parameter for the effectiveness of radiosensitization and chemosensitization is the sequencing of heat prior to or after the application of RT or CT. Usually, HT and RT are delivered sequentially, but there is no consensus as to the optimal sequence. An in vivo study by Overgaard investigated the impact of the sequence and interval between the two modalities on local tumor control and normal tissue damage in a murine breast cancer model and found that the sequence did not have any significant effect on the thermal enhancement in tumor tissues [15]. However, an experimental study using Chinese hamster ovary (HA-1) and mouse mammary sarcoma (EMT-6) cell lines showed that the sequencing of radiation and heat altered radiosensitivity for these two cancer cell types [90]. HT before RT showed more thermal enhancement in synchronous HA-1 cell lines, and the opposite sequence increased the thermal enhancement in EMT-6 cell lines. Other experimental studies reported no impact of the sequence of RT and HT on thermal enhancement in V79 cells [26,72]. In line with these results, an experimental study with HPV cell lines showed no difference in radiosensitization or cell death when heat was delivered prior to or after radiation [85]. Due to the conflicting results with regard to the treatment sequencing of HT and RT, additional preclinical mechanistic studies on different cell types are required.
An in vivo study in which heat was combined with cisplatin CT showed that simultaneous application of both treatments resulted in prolonged tumor growth delay in comparison with administration of cisplatin after HT [55]. Another study found that simultaneous exposure of human colorectal cancer (HCT116) cells to HT and doxorubicin was more effective than sequential administration because of higher intracellular drug concentrations at 42 °C [91]. In conclusion, better insight into the interaction of various CT drugs with HT and RT is required to define the optimal sequencing of specific drugs and the RT dose.
Evidence for the Predictive Values of Thermometric Parameters in Clinical Studies Combining HT with RT
Numerous prospective and retrospective clinical studies have been conducted to assess the efficacy of HT in combination with RT for treating superficial and deep-seated tumors. The design of most clinical studies was based on the translation of experimental findings aiming to reproduce the benefit of HT when combined with RT. Tables 7 and 8 show the results of the most important clinical studies. The prospective clinical studies in Table 7 reported improved clinical results, apart from the study by Mitsumori et al., which did not show a significant difference in the primary clinical endpoint of local tumor control between the two treatment arms [92]. The underlying reason could have been differences in RT dose prescriptions and missing patient treatment data. Although the study showed a significant difference in progression-free survival, this was judged not to be a substantial benefit. The authors stressed the need for internationally standardized treatment protocols for the combination of HT and RT.
In reality, temperature and thermal dose are usually reported as post-treatment data recordings (Tables 2 and 4) to account for temperature homogeneity or sensitivity. Even though temperature cannot always be measured invasively, depending on the location of the tumor, a strong correlation was reported between intratumoral and intraluminal temperatures, suggesting that intraluminal temperature measurements are a good surrogate for pelvic tumor measurements [50,93]. In addition, retrospective studies showed that a higher intra-esophageal temperature (>41 °C) predicts longer overall survival, improved local control, and a better metastasis-free rate [94,95]. The difficulty of performing invasive measurements was illustrated by a randomized phase III study by Chi et al. [96], in which only 3 out of 29 patients with bone metastases had directly measured intratumoral temperatures. In the study by Nishimura et al. [97], an HT session was defined as effective if the intratumoral temperature exceeded 42 °C for more than 20 min. However, according to the Arrhenius relationship, this is not considered long enough to induce a significant biological effect [21].
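To put this in thermal dose terms: under the standard Sapareto-Dewey constants sketched earlier, 20 min at a constant 42 °C corresponds to only 20 × 0.25^(43−42) = 5 equivalent minutes at 43 °C, well below thresholds such as the 10 CEM43°CT90 minutes targeted in the studies discussed below.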
Another obstacle during HT is the non-standardized methodology for describing the temporal and spatial variance of temperature fields. Several groups have investigated the correlation between various temperature metrics. The study by Oleson et al. showed that Tmin, tumor volume, radiation dose, and heating technique play significant roles in predicting treatment response for patients treated with RT in combination with HT [29]. In contrast, Leopold et al. reported that the more robust parameters T90, T50, and T10 are better temperature descriptors and predictors of histopathologic outcome than Tmin and Tmax [32]. The median Tmin, the Tmin during the first heat treatment, and tumor volume were reported to be factors predictive of the duration of cancer response (Table 7) [98], even though skin surface temperature is considered not representative of superficial tumors and cannot be associated with clinical outcomes [42]. For deep-seated tumors, Tilly et al. reported that Tmax was a predictive treatment parameter for prostate-specific antigen (PSA) control [99]. The relationship of high (Tavg ≥ 41.5 °C) and low (Tavg < 41.5 °C) tumor temperatures with clinical response was analyzed in a study by Masunaga et al. [100]. They showed that heating the tumor to Tavg ≥ 41.5 °C for a duration of 15-40 min achieved better tumor down-staging and better tumor degeneration rates [100]. This finding supports the concept that the direct cytotoxic effects of HT are enhanced at temperatures higher than 41 °C, as suggested in preclinical studies [5,57]. A higher response rate was also reported when tumors were heated with Tavg > 42 °C for 3-5 HT sessions [97]. In contrast, one study showed no difference in clinical outcome when patients were treated with mean Tmin = 40.2 °C, Tmax = 44.8 °C, or Tavg = 42.5 °C for an N_total of 2 or 6 [24]. Other studies also reported no impact of N_total and N_week on clinical outcome [40,101]. The contradictory results derived from clinical studies with regard to the predictive power of temperature descriptors and N_total are the reason why we did not list reference values for these descriptors in Table 5.
The predictive role of thermal dose has been investigated in both prospective and retrospective clinical studies (Tables 7 and 8). However, there is still no conclusion about the thermal dose values that should be obtained during HT treatment for a maximal enhancement effect. In prospective studies (Table 7), the correlation between thermal dose and treatment outcome is rarely reported. Retrospective studies reported that the thermal dose CEM43°C is an adequate predictor of treatment response and that its best prognostic descriptor is CEM43°CT90 [32,33,[36][37][38]102]. In a phase III study of the International Collaborative Hyperthermia Group, led by Vernon et al. [113], thermal dose was associated with complete response (CR) in patients treated for superficial recurrences of breast cancer [39]. Another randomized study showed that the best tumor control probability was dependent on thermal dose [106]. Furthermore, retrospective analyses indicate that thermal dose is a significant predictor of different clinical endpoints (Table 8) [33,36]. A few studies did not find such significant relationships between clinical endpoints and thermal dose [103,109,110]. For example, in the prospective study of Maguire et al., a total CEM43°CT90 with a threshold above 10 min did not show a significant effect on CR [110]. However, the association of CEM43°CT90 with CR was later reported for patients treated for superficial malignant cancers [35]. Similar to the study by Maguire et al., the minimum effective thermal dose was set at 10 CEM43°CT90. In addition, a test HT session was performed to verify whether the tumor was heatable and a thermal dose of higher than 0.5 CEM43°CT90 could be achieved [35,110]. The objective of the study by Hurwitz et al. was to achieve a CEM43°CT90 of 10 min, yet the resulting mean thermal dose for all 37 patients was only 8.4 min [112]. The cumulative minutes of T90 > 40.5 °C, defined as 'the time in minutes with T90 > 40.5 °C for the whole N_total', with a mean of 179 ± 92 min, together with T90 and Tmax, were reported to correlate with toxicity and prostate-specific antigen clinical endpoints [99]. Similarly, Leopold et al. showed that the cumulative minutes of T90 > 40 °C is a predictor of treatment endpoints [40]. In retrospective studies, the TRISE thermal dose concept [36] was shown to have a predictive role in treatment response. These retrospective analyses showed that TRISE had a significant effect on local control in a cohort of patients with cervical cancer [33].
The effect of the t_int parameter has only been analyzed with respect to treatment endpoints in retrospective studies. The study by van Leeuwen et al. reported that a t_int of less than 79.2 min between RT and reaching 41 °C during HT was associated with a lower risk of in-field recurrences (IFR) and a better overall survival (OS) in comparison to a longer t_int [22]. In contrast, another retrospective study showed that neither a shorter t_int of 30-74 min nor a longer t_int of 75-220 min between RT and the start of HT was a significant predictor of local control (LC), disease-free survival (DFS), disease-specific survival (DSS), or OS [33]. Thus, the optimal t_int between HT and RT to achieve a maximal effect on the tumor remains unknown.
Apart from heat-related parameters, the total dose of ionizing radiation and its fractionation in combination with HT have an impact on clinical treatment response [118,119]. Valdagni et al. [103] reported that increasing the total dose of RT appeared to improve clinical response, as CR rates of 71% (5/7) and 90% (9/10) were observed for patients with nodal metastases of head and neck cancers who received total doses of 64-66 Gy or 66.1-70 Gy, respectively. In addition, it was reported that previously irradiated tumors, which are typically more resistant to ionizing radiation, achieved higher CR rates when treated with combined RT and HT in comparison with RT alone [35].
Furthermore, the RT technique has been reported to have a beneficial effect on combined RT and HT treatment outcomes [29]. For example, technological advances such as MRI-guided brachytherapy were shown to improve the treatment outcome when RT is combined with HT [36].
The weak, and in part contradictory, evidence from clinical studies clearly shows that further analyses of thermometric parameters are required to define reference values for clinical use. The values for thermometric parameters reported in prospective and retrospective clinical studies (Tables 7 and 8) can be translated into standard references after being tested and validated in prospective clinical trials. Of note, the incidence of late grade III toxicity did not differ between patients with low or high TRISE or with low or high t_int.
Evidence for Predictive Values of Thermometric Parameters in Clinical Studies Combining HT and CT
The added value of combining CT with HT has been established not only in in vitro and in vivo studies but also in clinical studies. Randomized clinical studies, which demonstrate that the combination of CT and HT results in improved clinical outcomes in comparison with single-modality treatment [122][123][124][125], confirm the preclinical findings [126]. The positive prospective and retrospective clinical studies are summarized in Tables 9 and 10, respectively, with a focus on thermometric parameters.
The effectiveness of CT drugs has been enhanced by HT in a variety of clinical situations, such as localized, irradiated, recurrent, and advanced cancers, but only a few indications are really promising. Long-term outcome data, e.g., regarding the combination of CT with HT for bladder cancer, underline the clinical efficacy of this treatment strategy [125]. Chemosensitization by HT is induced by specific biological interactions between CT drugs and heat. The increased blood flow and the increased fluidity of the cytoplasmic cell membrane induced by HT increase the concentration of CT drugs within malignant tissues. Interestingly, Zagar et al. performed a joint analysis of two different clinical trials and reported no significant correlation between drug concentration and the combined treatment effect of CT and HT [127]. However, only a few CT drugs with specific properties (Tables 9 and 10) are good candidates for use with HT. Alkylating agents, nitrosoureas, platinum drugs, and some antibiotic classes show synergism with HT, whereas only additive effects are reported with pyrimidine antagonists and vinca alkaloids [59]. For example, heat increases the cytotoxicity of cisplatin, as shown by in vitro and in vivo studies [28,55]. Cisplatin concentration increases linearly with temperatures above 38 °C when applied simultaneously [28,128]. Synergy between HT and CT could be obtained at temperatures below 43.5 °C in a preclinical study [55]. Similarly, enhanced toxicity has been demonstrated for bleomycin [126,129], liposomal doxorubicin [130], and mitomycin-C [131]. Based on a summary of preclinical data, van Rhoon et al. suggested a CEM43°C of 1-15 min, from heating to 40-42 °C for 30-60 min, for any free CT drug, including thermosensitive liposomal drugs [132].
Lower temperatures might increase the therapeutic window by differential chemosensitization of cancer and normal tissues. In the prospective study of Rietbroek et al. [133] in patients with recurrent cervical cancer treated with weekly cisplatin and HT, three temperature descriptors, T20, T50, and T90, as well as the time in minutes during which 50% of the measured tumor sites were above 41 °C, differed significantly between patients who did and who did not exhibit a CR after treatment. However, there was neither a difference in Tmax between responders and non-responders in a cohort of patients with recurrent soft tissue sarcomas treated with CT and HT [134], nor in a cohort of patients with recurrent cervical cancer [135].
In a prospective study of patients treated with CT and HT for recurrent ovarian cancer, no significant relationship of T90 and T50 or of CEM43°CT90 and CEM43°CT50 with clinical outcome was found [136]. Similarly, the independence of outcome from T90 and CEM43°CT90 was also demonstrated in a retrospective study in soft tissue sarcoma [137]. Although a relationship of thermal dose with treatment response has been reported by Vujaskovic et al. [138], the parameters CEM43°CT50 and CEM43°CT90 were not statistically different between patients who did or did not respond to the treatment. The low mean value of T90 = 39.7 °C (33.5-39.8 °C) reported in this study might be the reason for the non-significant relationship of thermal dose with the clinical endpoint, in addition to other factors such as hypoxia and the vascularization level of the tumor. The first randomized phase III study that assessed the safety and efficacy of CT in combination with HT also recorded a low (≤40 °C) mean value of T90 = 39.2 °C (38.5-39.8 °C). However, the thermometric data were not analyzed or reported in correlation with treatment response [123]. Further investigations are required to understand which temperature is needed to achieve a maximum therapeutic effect, according to the type of CT drug and its concentration.
Based on preclinical studies, the delivery of simultaneous CT and HT is recommended to achieve the greatest chemosensitization effect by HT [55,142]. However, in contrast to experimental results [20,55], most of the prospective studies listed in Table 9 were designed to deliver heat sequentially, and in most studies the CT drugs were administered prior to HT. Although a considerable supra-additive or synergistic effect can be achieved by the simultaneous delivery of CT and HT, the sequential application of CT and HT may protect normal tissues from chemosensitization. The killing of hypoxic and oxygenated tumor cells can still be obtained with sequential delivery of CT drugs and HT [54]. In clinical studies, the t_int between modalities is usually kept under an hour [122,127,133,136,138]. Of note, the study of Ishikawa et al. used a different scheduling of gemcitabine and HT for the treatment of locally advanced or metastatic pancreatic cancer [139]. Patients enrolled in this clinical study were treated with HT prior to CT with a t_int of 0-24 h. This uniquely flexible relationship of gemcitabine cytotoxicity with t_int and sequence was revealed in an in vitro study [143]. The specific properties of the CT drugs are the main factors determining the most efficient treatment sequence between CT and HT for each class of drugs.
Treatment protocols might require individualized standards for HT thermometric parameters, as recently illustrated by an interim analysis of cisplatin and etoposide given concurrently with HT for the treatment of patients with esophageal carcinoma. This analysis showed a relationship between tumor location and the reported temperatures, i.e., higher temperatures were achieved in distal tumors [144]. Similar treatment site-dependent analyses of thermometric parameters should be performed in future trials. The biology underlying the interaction between CT drugs and heat in cancer and normal tissues is still largely unknown, and thermometric parameters have not consistently been shown to predict outcome when HT is combined with CT. Therefore, as discussed above, no definitive conclusions can be drawn regarding the optimal thermometric parameters for an enhanced effect of HT with CT.
Evidence for Predictive Values of Thermometric Parameters in Clinical Studies Using RT and CT in Combination with HT
Clinical malignancies, in particular advanced and inoperable tumors, can be treated using triplet therapy consisting of CT, RT and HT as a maximal treatment approach. The number of prospective and retrospective clinical studies investigating this approach is limited, the most important of which are listed in Tables 11 and 12, respectively. These studies have already reported the feasibility of this trimodal approach for cervical cancer, rectal cancer, and pancreatic cancer.
The optimal combination of CT, RT, and HT in a single framework is complex because many biological processes underlie the interactions between the three modalities. In addition, clinical factors often influence the optimal combination of RT and CT. A template with fundamental specifications for designing a clinical study of the trimodal treatment has been proposed by Herman et al. [145].
Even though there is no consensus on the optimal scheduling of trimodal treatment, clinical studies to date have integrated HT with daily RT and CT drugs based on the concept that CT should interact with both RT and HT. Scheduling CT weekly is most feasible in terms of maintaining an optimal t_int between HT sessions, drug administration, and RT fractions [145].
Cisplatin is most frequently used in trimodality regimens less because of a specific interaction with heat than because of extensive evidence from phase III randomized trials showing that cisplatin potently improves the antitumor efficacy of radiotherapy, albeit at the cost of increased toxicity. Drug concentration has been shown to affect treatment response [146], as proven experimentally [147]. A phase I-II study reported that a higher cisplatin dose (50 mg/m²), in comparison with a lower dose (20-40 mg/m²) combined with RT and HT, was positively correlated with CR [146]. Interestingly, overall survival did not differ between patients treated with two different CT regimens in combination with RT and HT [148]. However, that study was limited by the small size of the patient cohort. With reference to Table 11, clinical studies using trimodality treatment usually applied conventional fractionation schemes with 1.8-2.0 Gy per fraction, leaving it largely unknown whether other schedules such as hypofractionation (>10 Gy per week or large single fractions) might be biologically more favorable. The total dose varied according to cancer type. In the case of cervical cancer, brachytherapy at high dose rate (HDR) or low dose rate (LDR) was applied to deliver the boost dose [149,150]. Furthermore, a high or low total RT dose was reported to influence the CR rate when combined with 5-FU, leucovorin, and HT [151]. In contrast to CT and RT treatment parameters, HT treatment parameters were frequently not reported. Thermometric parameters, such as temperature and thermal dose including t_int, are reported but not set as fixed treatment requirements, as there are no accepted reference values.
Disregarding the Arrhenius relationship between heating temperature and t_treat, Amichetti et al. [152] reported a short t_treat of 30 min with mean temperature values of Tmax = 43.2 °C (41.5-44.5 °C) and Tmin = 40.1 °C (37-42 °C). This might explain why this study did not result in a higher CR rate than the previous study by Valdagni et al. [103]. A correlation of achieved temperature with treatment response, namely the disease-free interval to local relapse (DFILR), was reported in the study by Kouloulias et al. [153]. This study showed that the DFILR rate was greater in patients in whom a heating temperature of T90 > 44 °C was maintained for longer than 16 min during HT treatment. No significant correlation of DFILR with the mean value of the temperature descriptor Tmin was confirmed. Referring to the last row in Tables 7-12, the clinical endpoints differ among studies, which adds another level of complexity to generalizing the thermometric parameter correlations reported in these studies.
Thermal dose was reported less frequently than temperature measurements; hence, there is a lack of information about its predictive role for treatment response. In one study, thermal dose was directly and proportionally associated with CR: patients who exhibited a CR after treatment had a measured CEM43°C T90 of 4.6 min, compared with only 2.0 min in patients with a PR [146]. Recently, a prospective phase II study investigating neoadjuvant triplet therapy in patients with rectal cancer showed that patients achieving good local tumor regression had received a high thermal dose [154]. However, no threshold was reported, only the mean CEM43°C. The retrospective analysis of thermometric parameters of the prospective study by Harima et al. [149] showed that a CEM43°C T90 > 1 min is the threshold value that significantly correlates with treatment response (CR and disease-free survival rates). It also confirmed that CEM43°C T90 values below 1 min are insufficient to achieve enhancement of RT and CT [155]. Unfortunately, no further analyses of the relationship between HT treatment parameters and clinical outcomes in studies using triplet therapy were reported.
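For concreteness, CEM43°C is conventionally computed with the Sapareto-Dewey relation CEM43 = Σ Δt_i · R^(43 − T_i), with R = 0.5 at or above 43 °C and R = 0.25 below. The following sketch applies this to a T90 trace and checks it against the 1-min threshold discussed above (the trace itself is synthetic and illustrative):

```python
import numpy as np

def cem43(temps, dt_min):
    """Sapareto-Dewey thermal dose: cumulative equivalent minutes at 43 degC.

    temps  : temperatures (degC), one per sampling interval
    dt_min : sampling interval in minutes
    """
    temps = np.asarray(temps, dtype=float)
    r = np.where(temps >= 43.0, 0.5, 0.25)  # standard breakpoint at 43 degC
    return float(np.sum(dt_min * r ** (43.0 - temps)))

# Synthetic T90 trace: 15-min ramp-up, then 45 min near 41.5 degC
t90_trace = np.concatenate([np.linspace(37.0, 41.5, 15), np.full(45, 41.5)])
dose = cem43(t90_trace, dt_min=1.0)
print(f"CEM43degC T90 = {dose:.2f} min (above 1-min threshold: {dose > 1.0})")
```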
Furthermore, the optimal interval between heat, radiation, and anticancer drugs is still unclear. With reference to preclinical and clinical outcomes, t_int affects the thermal enhancement effect of HT on both ionizing radiation and CT drugs. A particular interaction between HT and CT in terms of t_int was reported depending on the properties of the CT drugs. A short t_int between sequential HT and doxorubicin resulted in a more rapid treatment response [153]. However, it is not clear whether the CT drug interacts primarily with RT only when administered on the same day or also over an extended time period. In the first scenario, CT and HT could typically be administered within a range of 1-6 h prior to RT to optimally exploit the biological interaction.
The dependence of T50 on body size parameters was reported to be substantial, although the correlation of thermometric parameters with clinical outcome was not presented. Moreover, N_total was shown to be a prognostic factor for OS in bladder cancer patients treated with combined CT, RT, and HT followed by surgery [161]. In contrast, Gani et al. [164] reported that the number of HT sessions was not predictive for OS, DFS, LC, or distant metastasis-free survival. Nor did the sequencing of CT, HT, and RT in clinical reports follow a specific pattern. Preclinical studies are required to better understand the interaction of CT, RT, and heat and how they should be combined in future clinical trials.
Future Prospects
The main limitations of HT as a cancer treatment in current clinical practice are the need for better standardization of treatment protocols, up-to-date quality assurance guidelines that are widely applicable, and dedicated planning systems to generate patient treatment plans. The wide variation of thermometric parameters derived from clinical studies indicates that HT treatment is currently delivered according to individual clinical center guidelines. Consequently, the comparison of clinical study outcomes is substantially hampered by the large degree of variation in treatment parameters. Regarding the data summarized in Tables 6-11, apart from thermal dose and temperature measured during treatment, the other thermometric parameters reported often include only t_treat, t_int, or N_week.
Monitoring and measuring temperature is one of the main challenges in routine clinical practice and has hindered the clinical expansion of HT. The future of HT in combination with RT and CT requires novel technical developments for the delivery and measurement of homogeneous heating of the malignant tissues. Not all studies (Tables 7-12) recorded temperatures in the region of the tumor. The insertion of temperature probes to monitor and record HT is invasive and uncomfortable, and sometimes the tumor site is inaccessible to the temperature probe. For example, Milani et al. [162] reported that even though the tumors were not deep-seated, intratumoral temperature measurements were feasible in only one of 24 patients, so no representative thermal doses could be reported. One of the non-invasive approaches currently under clinical evaluation is magnetic resonance thermometry (MRT), which provides 3-D temperature measurements. Hybrid MR/HT devices are currently installed in five European clinical centers.
Temperature measurements in anthropomorphic phantoms with MRT are accurate in comparison with thermistor probes [167], but clinical measurements are currently inaccurate in most pelvic and abdominal tumors [168]. Physiological changes in the tissue microenvironment, patient movements, magnetic field drift over time, limited sensitivity in fatty tissues, and respiratory motion, including cardiac activity in the pelvic and abdominal regions, hamper accurate temperature measurement by MRT [168]. The temperature images from MRT systems contain image distortion, artifacts, and noise, leading to inaccurate temperature measurement, low temporal resolution, and a low signal-to-noise ratio (SNR) [169]. The sources of, and solutions for, image artifacts arising from additional frequencies were described by Gellermann et al. [170]. Proton-resonance frequency shift (PRFS), apparent diffusion coefficient (ADC), longitudinal relaxation time (T1), transversal relaxation time (T2), and equilibrium magnetization (M0) are the imaging techniques used to exploit temperature-dependent parameters [170][171][172][173]. The PRFS technique is the most frequently used MRT method, even though ADC or T1 techniques are preferable when magnetic field homogeneity is poor [174]. The accuracy of temperature measurements with the PRFS method was within ±0.4 to ±0.5 °C of thermistor probe readings in a heterogeneous phantom [175]. A stronger correlation between MRT and thermistor probes was found in patients with soft tissue sarcomas of the lower extremities and pelvis [176] than in patients with recurrent rectal carcinoma [177]. The successful implementation of MRT in clinical centers, providing automated temperature feedback during the HT session, might have a considerable impact on clinical outcomes by delivering the desired heating and conforming the heat distribution to spare healthy surrounding tissues. This could substantially help to standardize data collection and the analysis of thermometric parameters. Another experimental approach to monitoring treatment temperature during HT sessions is electrical impedance tomography (EIT), as recently reported in a simulation study by Poni et al. [178]. EIT exploits the fact that the electrical conductivity of tissues depends on temperature. For example, the multifrequency EIT technique detects the changes in conductivity due to the perfusion increase induced by the change in temperature [179]. The accuracy of EIT for temperature measurements was reported to range from 1.5 °C to 5 °C [180]. The potential of EIT to monitor temperature in the field of cardiac thermal ablation is being investigated [181]. This technique also holds promise for HT treatment. Both MRT and EIT may allow for improvement of the spatial homogeneity of heat delivered to the cancer tissues.
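As an illustration of the PRFS principle, the temperature change is usually derived from the phase difference between two gradient-echo images as ΔT = Δφ/(γ·α·B0·TE), where γ is the proton gyromagnetic ratio, α ≈ −0.01 ppm/°C is the PRF thermal coefficient, B0 the main field strength, and TE the echo time. A minimal sketch (field strength, echo time, and the phase maps are illustrative defaults, not values from the cited studies):

```python
import numpy as np

GAMMA = 2 * np.pi * 42.58e6  # proton gyromagnetic ratio (rad/s/T)
ALPHA = -0.01e-6             # PRF thermal coefficient, ~ -0.01 ppm per degC

def prfs_delta_t(phase, ref_phase, b0_tesla=1.5, te_s=0.020):
    """Temperature change map (degC) from PRFS phase-difference images.

    phase, ref_phase : gradient-echo phase images (radians) at the current
                       and baseline temperature
    b0_tesla, te_s   : main field strength and echo time (illustrative defaults)
    """
    dphi = np.angle(np.exp(1j * (phase - ref_phase)))  # wrap to [-pi, pi]
    return dphi / (GAMMA * ALPHA * b0_tesla * te_s)

# Illustrative 2x2 phase maps: a -0.08 rad shift maps to about +1 degC here
ref = np.zeros((2, 2))
cur = np.full((2, 2), -0.08)
print(prfs_delta_t(cur, ref))
```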
The technological advances and standardization of international treatment protocols for different cancer types will improve the effectiveness and synergy of HT in combination with RT and/or CT. In line with this, there is a need for clinically accepted processes for recording and reporting thermometric data. This will allow for the inclusion of specific thermometric parameters in future clinical studies combining HT with RT and/or CT. For any future prospective study, the recording of thermometric parameters should be mandatory; some recommendations are available in the current guidelines [43,46]. The integration of thermometric parameters is one of the objectives of the HYPERBOOST ("Hyperthermia boosting the effect of Radiotherapy") international consortium within the European Horizon 2020 Program MSCA-ITN. The HYPERBOOST network aims to create a novel treatment planning system, including the standardization of thermometric parameters derived from retrospective and prospective clinical trials.
Conclusions
In this review, we provide an extensive overview of the thermometric parameters reported in prospective and retrospective clinical studies that applied HT in combination with RT and/or CT, and of their correlation with clinical outcome. It is recognized that the practice of HT varies widely between clinical centers, and we aimed to elucidate the use and reporting of thermometric parameters in different clinical settings. It emerged that the sequencing of HT and RT varies more than the sequencing of HT and CT. Only a few standards seem to exist with regard to the sequencing of HT with RT and CT in triplet therapy for a specific CT drug, RT fractionation, and thermal dose. According to the evaluated studies, t_int is a critical parameter in clinical routine, but no clinical reference values have been established. Of note, a constant t_treat of 60 min throughout the HT treatment course was described in most clinical studies. The most important parameter seems to be temperature itself, which correlates with thermal dose. Revealing the relationship between thermal dose and treatment response for different cancer entities in future clinical studies will lead to the improved application of heat to promote the synergistic actions of HT with RT and CT. We suggest that it become mandatory for new clinical study protocols to include the extensive recording and analysis of thermometric parameters for their validation and the overall standardization of HT. This would allow for the definition of thermometric parameters, in particular of thresholds for temperature descriptors and thermal dose.
Conflicts of Interest:
The authors declare no conflict of interest.
Alcohol septal ablation in hypertrophic cardiomyopathy
Alcohol septal ablation (ASA) has become an alternative to surgical myectomy in obstructive hypertrophic cardiomyopathy since it was first introduced in 1994 by Sigwart. The procedure alleviates symptoms by producing a limited infarction of the upper interventricular septum, resulting in a decrease in left ventricular outflow tract (LVOT) gradient. The technique has been improved over time and the results are comparable with those of myectomy. Initial concerns about long-term outcomes have been largely resolved. In this review, we discuss indications, technical aspects, clinical results and patient selection to ASA.
INTRODUCTION
Hypertrophic cardiomyopathy (HCM) is the most common inheritable cardiac disease, with a prevalence of 1 in 500 persons. It is characterized by marked hypertrophy of the myocardium that provokes diastolic dysfunction, left ventricular outflow tract obstruction and an increased risk of arrhythmias. In addition, it is highly heterogeneous. Most individuals with HCM have near-normal life expectancy and remain asymptomatic throughout life. On the other hand, some patients develop symptoms of heart failure, angina, syncope or even sudden cardiac death caused by different mechanisms. 1,2 Nearly two-thirds of patients with HCM have a significant gradient across the left ventricular outflow tract (LVOT) at rest, during provocation manoeuvres or exercise, and are classified as obstructive HCM. The main treatment in these patients is negative inotropic drugs, such as beta-blockers, calcium channel antagonists and disopyramide. Between 5-10% remain symptomatic and need septal reduction therapy, either surgical septal myectomy or alcohol septal ablation 3-6 (Figure 1). Brugada et al. were the first to inject alcohol into a septal branch of the left anterior descending coronary artery, to treat refractory ventricular tachycardia 7 . Sigwart et al. 8 and Kuhn et al. 9 subsequently reported on the reduction of systolic wall motion in HCM by temporary balloon occlusion of the coronary artery. Later, Sigwart introduced catheter-based delivery of absolute alcohol to provoke a small septal infarction as an alternative to surgical myectomy 10,11 .
Pathophysiology of obstruction
Obstruction to left ventricular outflow results from the combined effect of severe septal hypertrophy and abnormalities of the mitral valve apparatus. Ejection flow drags the elongated and abnormally positioned anterior mitral leaflet into the LVOT. As a result, the LVOT orifice is narrowed further and greater obstruction to flow develops. In addition, coaptation of the mitral leaflets is distorted and dynamic mitral regurgitation appears, which plays an important role in symptoms. [3][4][5] LVOT obstruction has several pathophysiological consequences, including reduction of cardiac output, diastolic dysfunction, secondary mitral regurgitation, and myocardial ischemia. These factors are related to symptoms of dyspnoea, chest pain, presyncope and syncope, and are associated with a worse prognosis 3,5 .
Indications for septal reduction therapy
Septal reduction therapy should be considered in patients with an LVOTO gradient ≥50 mmHg, moderate to severe symptoms (New York Heart Association (NYHA) functional Class III-IV) (Recommendation I C) and/or recurrent exertional syncope (Recommendation IIa C) in spite of maximally tolerated drug therapy 1,2,5 .
This intervention should be performed by experienced operators defined as an individual operator with a cumulative case volume of at least 20 procedures (AHA/ACC guidelines) or a minimal caseload of 10 ASA or myectomies (ESC guidelines). Some studies have shown a relationship between results and hospital volumes 1,2 .
Kim et al. 12 demonstrated that 60% of centres in the U.S. had performed <10 myectomies during the 9-year study period. This is important as the low-volume centres were found to have higher in-hospital mortality rates (15.6% vs. 3.8%, p < 0.001), need for permanent pacemaker (10.0% vs. 8.9%; p < 0.001), and bleeding complications (3.3% vs. 1.7%; p < 0.001) after septal myectomy compared with high-volume centres. For ASA, 67% of centres performed <10 procedures, but ASA procedures in low-volume centres were not associated with worse outcomes. In contrast, a recent study 13 based on the Euro-ASA registry cohort showed a significant association between institutional experience and an almost two-fold lower incidence of periprocedural major adverse events, as well as significantly better efficacy and safety in long-term follow-up once institutional experience had been achieved.
As there are no randomized trials comparing surgery and ASA, guidelines are based on observational studies. The 2011 AHA/ACC guidelines consider septal myectomy as the gold standard technique for septal reduction therapy, and advise against performing ASA in younger patients, severe septal thickness (>25-30 mm), mid-ventricular obstruction and in the presence of concomitant cardiac disease. They specifically recommend ASA in the elderly and in patients with significant comorbidity that increases surgical risk or when patients refuse open-heart surgery. 2,14 ESC guidelines 1 do not give priority to one technique over another, and suggest an individual assessment with an experienced multidisciplinary team.
Recently, a small study 15 evaluated the long-term outcomes of mildly symptomatic patients (NYHA II) treated with ASA. The 30-day mortality after ASA was lower than previously reported (0.6%) and the annual all-cause mortality rate was similar to that of the general population. After almost 5 years of follow-up, 69% remained in NYHA class I and the haemodynamic improvement achieved initially was maintained. In addition, some studies 16,17 in younger patients (<50 years) have shown good results: a lower periprocedural mortality rate (0.3% vs. 2%, p = 0.03), less pacemaker implantation (8% vs. 16%, p < 0.001), good NYHA status (95% NYHA I-II) and lower annual mortality rates (1% vs. 5%, p < 0.01), with similar arrhythmic event rates (1%). These studies could be the initial evidence to broaden the indication for ASA to younger patients, but these data should first be confirmed in larger studies.
ASA is controversial in children, adolescents and young adults as there are no long-term data on the late effects of a myocardial scar in these groups, and because the technical difficulties and potential hazards of the procedure in smaller children and infants are greater. Anecdotal cases have been reported where children were treated with ASA after unsuccessful surgical myectomy and were not candidates for heart transplantation 1,2,18 .
Procedure
At most centres, the technique performed is that proposed by Faber et al., 19 which uses myocardial contrast echocardiography 11 . First, it is necessary to have two arterial access sites for coronary guide-catheter and pigtail catheter, and one venous access site (usually femoral or jugular) for pacing electrode. Before the ablation, a diagnostic catheterization is performed to measure the left ventricular outflow tract and to exclude coronary artery disease and select a potential target septal artery. The outflow pressure gradient is usually measured by catheterization and Doppler echocardiography at rest and after induction of extrasystoles with a pigtail catheter or programmed stimulation utilizing the temporary pacemaker 3,4,6 .
Since almost 50% of patients develop transient complete heart block, implantation of a temporary pacemaker lead is mandatory in all patients without a previous permanent pacemaker or ICD 11 . Using an internal jugular vein access site and a conventional permanent active-fixation pacing electrode connected to a sterilized permanent pacemaker generator improves electrode stability and patient mobility and minimizes the risk of cardiac perforation 20,21 .
A guide-wire is advanced into the target septal artery; an over-the-wire balloon is then advanced over it. The balloon is inflated and isolates the septal artery from the other coronary territories. Selective angiography of the target septal branch through the inflated balloon catheter should document adequate sealing of the septal branch. Next, an echocardiographic contrast agent is injected through the balloon catheter under continuous echocardiographic screening. An obvious opacification of the area of the septum involved in the contact point for systolic anterior motion (SAM) will be seen if the artery is the correct choice. Multiple projections are required to ensure the correct distribution. Myocardial contrast echocardiography allows higher success rates despite lower infarct sizes, in turn reducing complication rates 22,23 (Figure 2). A small volume (1-3 mL) of absolute alcohol is injected slowly in small increments through the central lumen of the balloon catheter under continuous fluoroscopic, haemodynamic, echocardiographic and electrocardiographic control. The quantity of injected alcohol should be determined by the septal thickness (1 mL/10 mm). Analgesia should be given for control of chest pain. Balloon occlusion should be maintained for at least 10 min. After deflating the balloon catheter, an angiogram is performed to confirm complete occlusion of the septal branch and normal flow in the left anterior descending artery. In different registries, <5% of ASA procedures have been aborted for lack of an appropriate septal branch 24 .
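Purely as an illustration of the dosing arithmetic quoted above (not a clinical tool), the ~1 mL per 10 mm rule can be combined with the 1.5-2.5 mL window reported later in this review as the likely optimum balance between efficacy and safety; the function name and clamping behaviour are our own illustrative choices:

```python
def alcohol_volume_ml(septal_thickness_mm, lo=1.5, hi=2.5):
    """Illustrative estimate of absolute alcohol volume for ASA.

    Applies the ~1 mL per 10 mm septal-thickness rule, clamped to the
    1.5-2.5 mL range suggested in this review as the efficacy/safety
    optimum. Not a clinical tool: dosing is decided per case by the
    operator under imaging guidance.
    """
    return max(lo, min(hi, septal_thickness_mm / 10.0))

for thickness_mm in (18, 22, 30):
    print(f"{thickness_mm} mm septum -> ~{alcohol_volume_ml(thickness_mm)} mL")
```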
The gradient commonly decreases during the procedure, though the beneficial effect is the consequence of a slow process of fibrosis and ventricular remodelling that is not achieved until some months later. The relationship between the acute results and the long-term benefits is poor.
Patients are observed in the cardiac intensive care unit for 24-72 hours. Cardiac enzyme measurements every 6 to 8 hours allow documentation of peak creatine kinase or troponin value. If no complete heart block is present at that time, temporary pacing wires can be removed. Hospital stay is usually 5 days if no complication is observed (although some centres advocate up to 1 week), and this is predominantly to observe for late complete heart block.
Periprocedural complications and long-term outcomes
Complications are rare with experienced operators. The 30-day mortality for septal ablation is now <1%, with severe cardiac events occurring in <2% of patients.
The most common complication is the need for a permanent pacemaker. The risk in large multicentre observations remains around 10-12%, more than twice the risk of permanent pacemaker implantation observed after myectomy. [24][25][26][27] Patients with first-degree AV block and those with LBBB are at high risk of persistent advanced block during ASA; thus, the implantation of a permanent pacemaker prior to the procedure is highly recommended 11 . Higher doses of alcohol are associated with a higher risk of heart block and subsequent pacemaker requirement; doses of alcohol ranging between 1.5 and 2.5 mL probably represent the optimum balance between efficacy and safety for most patients 24 . In addition, there is a reduction in pacemaker implantation with increased operator volume 3,12,13 . Other rare complications include coronary artery dissection, ventricular fibrillation, cardiac tamponade, cardiogenic shock, pulmonary embolism and bradyarrhythmias.
Because ASA creates an iatrogenic myocardial infarction, its proarrhythmic potential has been a concern. The work of ten Cate et al. 28 reported an annual rate of cardiovascular death or ICD discharge 5.2-fold higher in the ASA group than in the myectomy group. However, no study has reproduced these results. The likely reason for these outcomes is that the patients included received large volumes of alcohol (4.5 mL) and the goal was to achieve resolution of the obstruction in the laboratory 11,29 . Later published studies do not indicate an increased incidence of ventricular arrhythmias during follow-up [24][25][26]30 . In the Euro-ASA registry 24 only a few patients experienced early post-procedural ventricular arrhythmias (1.6%), and the rate of sudden mortality events was 1% per year. Annual sudden cardiac death rates following ASA were also found to be similar to those in post-myectomy patients, ranging from 0.4% to 1.3%. 5 In addition, the survival of ASA-treated patients was found to be comparable to that of myectomy-treated patients and patients with non-obstructive HCM 31 , and it was comparable to the expected survival of the age- and sex-matched general population (survival free of all-cause mortality at 1, 5, and 10 years was 97%, 92% and 82%, respectively) (Table 1).
Treatment efficacy
The reduction of the gradient is observed immediately after surgical myectomy, whereas the benefits of ASA are delayed, often by more than 6 months after alcohol injection. Both myectomy and ASA are followed by a process of cardiac remodelling that involves a reduction of the thickness of other segments and of the size of the left atrium, a consequence of the haemodynamic improvement achieved. Two meta-analyses showed a slightly higher LVOT gradient after ASA compared with myectomy. 25,26 However, no significant differences were found in NYHA functional class, peak oxygen consumption and exercise capacity at late follow-up between the 2 procedures. The median percentage of patients remaining in NYHA functional class III/IV was 8% after ASA and 5% after myectomy (p = 0.43), and the reduction in LVOT gradient was 71% after ASA and 77% after myectomy (p = 0.63). 25,26 The benefit of ASA in older patients is similar to that in younger patients. 16,17 On the other hand, the incidence of additional septal reduction therapy was 7.7% following ASA compared with 1.6% following myectomy. 25 Briefly, no significant difference in symptom relief was noted between the two approaches. ASA was as safe as myectomy regarding SCD and short-term and long-term mortality, although it is associated with more than twice the risk of permanent pacemaker implantation and a 5 times higher risk of needing additional septal reduction therapy.
Patient selection
Multiple studies have shown a high success rate and low complication rate with both septal myectomy and ASA, leading to excellent reduction in outflow tract obstruction and sustained improvement in symptoms. The choice of procedure is dependent on many factors including the expertise and availability of the operators, the presence of concomitant cardiac problems, accompanying medical comorbidities, and patient choice ( Table 2). Candidates for this treatment should be evaluated by a team with expertise in the diagnosis and management of patients with HCM, and both procedures should be performed by experienced operators.
CONCLUSION
ASA has become an alternative to surgical myectomy that may be considered for many patients. Data indicates that functional and haemodynamic success of ASA is high and similar to that of surgery. Benefits of ASA in comparison to myectomy include shorter hospital stay, less pain, and avoidance of complications associated with surgery and cardiopulmonary bypass. Despite being widespread, the procedure should only be performed by experienced operators and on carefully selected patients.
An analysis of seasonality fluctuations in the oil and gas stock returns
This paper investigates the existence of seasonality anomalies in the stock returns of oil and gas companies on the London Stock Exchange. It employs F-tests, Kruskal-Wallis and Tukey tests to examine the days-of-the-week effect. A generalised autoregressive conditional heteroscedasticity specification was also employed to investigate both the days-of-the-week and months-of-the-year effects. The analysis was extended to some key FTSE indices. Our results showed no evidence of any regularity or seasonal fluctuation in the oil and gas stock returns despite the seasonal changes in demand for the companies' products. However, a January effect has been observed in the FTSE All Share and FTSE 100 indices. Subjects: Economics; Finance; Business & Industry; Quantitative Finance
ABOUT THE AUTHORS
Muhammad Surajo Sanusi is a lecturer in Finance at Birmingham City University (UK). He obtained his PhD in Finance from Robert Gordon University (UK) on market efficiency, volatility behaviour and asset pricing analysis. He is a qualified professional accountant under the full membership of the Association of Chartered Certified Accountants (ACCA). His research interest covers the areas of financial market operations, stock market volatility, security analysis and financial econometrics.
Farooq Ahmad is a lecturer in Finance at Robert Gordon University (UK). He graduated with a PhD in Finance from the University of Stirling (UK). His research interest areas are financial market efficiency analysis, regulation of financial markets, evaluation of the impact of innovations and reforms in financial markets, yield curve analysis, Gilt-edged market analysis and management of public debt.
PUBLIC INTEREST STATEMENT
The oil and gas sector remains one of the most important sectors in the world, and hence we investigate the behaviour of stock returns of the oil and gas companies quoted on the London Stock Exchange. The study employed both parametric and non-parametric tests to examine the days-of-the-week and months-of-the-year effects. We have not found evidence in recent times that the behaviour of stock returns is abnormal on certain days of the week or in certain months of the year, except in January.
Introduction
The analysis of seasonality in stock returns has been performed by many scholars over the years in order to establish whether there are calendar-related anomalies in stock returns. If calendar anomalies such as day-of-the-week, intraday, weekend and January effects exist in stock returns, then the random walk hypothesis would be rejected. This also contradicts the efficient market hypothesis (EMH) because future stock returns could then be predicted. The interest of researchers in seasonality analysis was promoted by the fact that the evidence gathered could be used to accept or reject the EMH. Although the majority of the inferences made suggest the existence of seasonality, market inefficiency could not be confirmed, especially due to the existence of transaction costs. Documented evidence in support of the presence of seasonality in stock returns has also been criticised by some scholars, who attributed the empirical findings to statistical misspecification. It was observed that existing studies have not provided sufficient and reliable conclusions about the existence of seasonality in stock returns or its consequences for the proposition of market efficiency.
In this paper, we employ seasonality tests as a tool to provide further evidence on the predictability of the stock returns of London-quoted oil and gas stocks and some market indices. Yadav and Pope (1992) have been among the scholars who tested for the existence of calendar anomalies in stock markets. They investigated the existence of either intraweek or intraday seasonality in the pricing or returns of UK stock index futures contracts using the distinctive settlement methods of the London Stock Exchange. Seasonality was found in the UK stock market in the form of abnormal Monday returns, which could be due to non-trading weekends. However, there was no evidence that the abnormal Monday returns could be attributed to the delay in the release of bad news until Friday, as speculated by some scholars. In contrast to the findings of Yadav and Pope (1992), Mookerjee and Yu (1999) discovered abnormal returns on Thursdays in an investigation of the Shanghai and Shenzhen stock exchanges of China, although these researchers agreed that their findings are odd when compared to those of many scholars. Mookerjee and Yu (1999) found high mean returns on Thursdays instead of Fridays (negative returns are usually found on Mondays), in contrast to most of the earlier studies, as well as barriers to changes in daily prices (limits on daily returns). The daily returns were also found to be positively correlated with risk (standard deviation figures). Most of the studies on the day-of-the-week effect were conducted in developed markets and, according to the majority of the inferences, the effect of seasonality was evidenced in such markets. In similar developments, Chang, Pinegar, and Ravichandran (1993) investigated the day-of-the-week effect in some European markets and the United States using classical or traditional methods adopted by various scholars and an approach with sample size and error term adjustments. Results showed the existence of a day-of-the-week effect in the majority of the markets, similar to most of the findings in the literature. Dicle and Levendis (2014) tested whether the day-of-the-week effect still exists by investigating up to 51 international markets from thirty-three countries over the period between 2000 and 2007. Similar to the findings of Yadav and Pope (1992), Mookerjee and Yu (1999), and Chang et al. (1993), they also found the existence of a day-of-the-week effect in almost all the exchanges in these countries. Qadan (2013) also tested the existence of the day-of-the-week effect on recent United States data for the S&P 500 index using a threshold-ARCH model. The results showed both stock returns and volumes on Mondays to be lower than those of other days. In addition, the investor's fear gauge, as measured by volatility, was higher on Mondays and lower on Fridays.
Literature review
Further evidence on the day-of-the-week effect in developed markets has also been recorded by the studies of Clare, Psaradakis, and Thomas (1995), Dubois and Louvet (1996), and Steeley (2001). Steeley (2001) attributed the presence of seasonality in the UK equity market to the pattern of flow of market-wide news. Dubois and Louvet (1996) examined the day-of-the-week effect in 11 indices across 9 countries over the period between 1969 and 1992. Lower returns were found at the beginning of the week, tending to increase towards the end of the week. Dubois and Louvet (1996) concluded that there is strong evidence of a day-of-the-week effect in European countries. The UK equity market was also investigated by Clare et al. (1995), with results similar to those of Dubois and Louvet (1996). Clare et al. (1995) used a deterministic seasonal model (a method adopted by Franses (1993)) on the FTSE All Share index and discovered a significant seasonality effect in the market. In a slightly contrary view, Steeley (2001) reported that weekend effects vanished from UK markets in the 1990s. However, a day-of-the-week effect can still be traced in the market if the stock return series is partitioned according to the direction ((+) or (−)) of the market returns. In that case, Steeley (2001) concluded that the cause of the day-of-the-week effect was the pattern and nature of market-wide information classified as "bad" or "good" news.
The research on the day-of-the-week effect has also been extended to emerging markets. Al Ashikh (2012) investigated the day-of-the-week effect on the Saudi Arabian stock exchange and found evidence, from the analysis of both mean returns and their variance, that the market efficiency hypothesis can be rejected due to the existence of a day-of-the-week effect. Haroon and Shah (2013) examined the Karachi stock exchange in Pakistan for the existence of a day-of-the-week effect. In contrast to the results reported by Al Ashikh (2012), Haroon and Shah (2013) obtained mixed results from the two partitions of the period of study, that is, sub-periods I and II. Sub-period I negates the existence of a day-of-the-week effect, while sub-period II provides evidence of its existence. Ogieva, Osamwonyi, and Idolor (2013) also investigated the Nigerian stock exchange for the existence of a day-of-the-week effect and found evidence to reject the market efficiency hypothesis.
Other calendar anomalies such as the January effect have also been investigated extensively in the field of finance. The findings reported by scholars are similar to those for the day-of-the-week effect, where the majority of the studies found evidence of a seasonality effect in stock returns, although scholars such as Chien, Lee, and Wang (2002) observed that the empirical evidence supporting a January effect could be due to the misapplication of statistical tools. They opined that, with high volatility in stock returns, the dummy variables in a regression model testing for the existence of seasonality could generate significant coefficients. Studies like those of Haugen and Lakonishok (1988), Jaffe and Westerfield (1985), and Solnik and Bousquet (1990) have all documented evidence of a "January effect" in the stock returns of various stock exchanges, which may cast doubt on the work of Fama (1970) on the EMH.
Methodology and results
In this section, we aim to investigate the existence of the day-of-the-week and monthly effects in the stock returns of London-quoted oil and gas stocks and some related FTSE measures such as the FTSE All Share, the FTSE 100, the FTSE UK Oil and Gas, the FTSE UK Oil and Gas Producers and the FTSE AIM SS indices. Our data for this analysis covers the periods from 4 January 2010 to 31 December 2012 for the day-of-the-week effect and January 2005 to December 2014 for the monthly effect.
Firstly, daily stock returns (Monday to Friday) of the individual series were calculated using the formula r_t = log(P_t/P_{t−1}), and the mean returns were compared in order to test the null hypothesis of equality. The null hypotheses of equality between the discrete weekdays' mean returns are tested using both parametric and non-parametric statistical tools. The F-test is employed as a parametric tool to test whether there is any significant difference between the weekdays' mean returns. If the F-statistic is found to be higher than the critical value of the F-distribution at a selected significance level, then the null hypothesis that the weekday mean returns are equal (μ_M = μ_T = μ_W = μ_Th = μ_F) is rejected in favour of the alternative hypothesis that at least one weekday mean differs. Kruskal-Wallis is a non-parametric test that is not based on any assumption about the underlying distribution. It performs the same function as the F-test but without consideration of the distribution of the samples tested; it rather tests whether the samples come from the same distribution. If the K-W statistic is found to be greater than its critical value, the null hypothesis of equality is rejected; otherwise, it cannot be rejected. Pairwise tests of the weekdays' mean returns were also conducted using the Tukey test to compare the pairs of means. If the Tukey test statistics result in the rejection of the null hypothesis of equality, then the mean returns of the two weekdays in a pair are regarded as not equal, which signifies the existence of a day-of-the-week effect.
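A minimal sketch of this testing sequence with standard Python statistical routines (the file and column names are illustrative placeholders):

```python
import numpy as np
import pandas as pd
from scipy.stats import f_oneway, kruskal
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Daily closing prices indexed by trading date (placeholder file/column names)
prices = pd.read_csv("stock_prices.csv", index_col=0, parse_dates=True)["close"]
returns = np.log(prices / prices.shift(1)).dropna()

# Group the log returns by weekday (Monday=0 ... Friday=4)
groups = [returns[returns.index.dayofweek == d].values for d in range(5)]

f_stat, f_p = f_oneway(*groups)    # parametric test of equal weekday means
kw_stat, kw_p = kruskal(*groups)   # distribution-free analogue
tukey = pairwise_tukeyhsd(returns.values, returns.index.dayofweek)  # pairwise

print(f"F = {f_stat:.4f} (p = {f_p:.4f}); K-W = {kw_stat:.4f} (p = {kw_p:.4f})")
print(tukey.summary())
```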
The results of the F-test, Kruskal-Wallis test and Tukey test on the day-of-the-week return series are presented in Table 1. From the results, the null hypothesis of equality cannot be rejected in any of the series except the FTSE AIM SS Oil and Gas index. The statistical values derived from the tests employed are not greater than their respective critical values at the 5% significance level, which suggests the non-existence of the day-of-the-week effect in the series under investigation. In the FTSE AIM SS Oil and Gas index, the F-statistic is recorded at 4.0107, which is significantly higher than the critical value of 2.38 at the 5% significance level. The non-parametric Kruskal-Wallis statistic has a value of 21.888, which is also higher than the critical value of 9.48 at the 5% significance level. The Tukey pairwise test suggests a significant difference between the mean returns of Fridays and Mondays at 4.7070 and Fridays and Tuesdays at 5.0321 (both higher than the critical value of 3.86 at the 5% significance level), which indicates the rejection of the null hypothesis of equality and at the same time confirms the existence of the day-of-the-week effect in the FTSE AIM SS Oil and Gas index.
The next step undertaken in our investigation of the day-of-the-week effect is to create binary dummy variables for the weekdays Monday through Friday as independent variables, while the return series remains the dependent variable. The variables are subjected to a regression model based on the assumption of Autoregressive Conditional Heteroscedasticity (ARCH) developed by Engle (1982) in order to explore the relationship (deviations) between the variables using the coefficients generated from the regression model. The ARCH model was employed because the standard ordinary least squares regression assumption of homoscedasticity cannot be attained by series of stock returns. In other words, the variances and covariances of stock returns are found to change over time and are not homoscedastic (constant). Fama (1965) and Mandelbrot (1966) reported the existence of volatility clustering (large changes in returns followed by similar changes and small changes also followed by small changes), which gives rise to a changing conditional variance (heteroscedasticity). Lagged returns are also included in the model in order to overcome the problem of autocorrelation. In an effort to improve the model, we employed the generalised version of the ARCH model as suggested by Bollerslev (1986). The specifications of the models employed are:

R_t = γ_1 D_Mt + γ_2 D_Tt + γ_3 D_Wt + γ_4 D_Tht + γ_5 D_Ft + u_t

σ²_t = ω + α_1 u²_{t−1} + β_1 σ²_{t−1}

where R_t is the stock return series under investigation and D_Mt, D_Tt, D_Wt, D_Tht, D_Ft represent the binary dummy variables for Monday through Friday; for Monday returns the Monday dummy variable is equal to 1 and all others are equal to zero. The coefficients attached to the dummy variables measure the average deviation of each weekday's mean return from the other days' mean returns. If any coefficient is found to be significant, then the day's mean return attached to that coefficient has deviated from that of the others and, thus, a day-of-the-week effect exists. A constant is not included in the regression model in order to avoid the dummy variable trap. The second equation is the generalised ARCH specification, where σ²_t is the conditional variance, α_1 u²_{t−1} is the ARCH term and β_1 σ²_{t−1} is the generalised ARCH term. The coefficients of the ARCH and generalised autoregressive conditional heteroscedasticity (GARCH) terms are referred to as alpha and beta, respectively.
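A sketch of this estimation, assuming the third-party Python `arch` package (the file and column names are placeholders; to keep the package's standard mean specification we absorb the Monday dummy into a constant, which is an equivalent parametrization of the same model):

```python
import pandas as pd
from arch import arch_model

# Daily log returns indexed by trading date (placeholder file/column names);
# scaling by 100 helps the variance optimizer converge
returns = pd.read_csv("returns.csv", index_col=0, parse_dates=True)["ret"] * 100

# Tuesday-Friday dummies; Monday is absorbed by the constant, an equivalent
# parametrization of the no-constant, five-dummy model in the text
dow = returns.index.dayofweek
dummies = pd.DataFrame({f"D{d}": (dow == d).astype(float) for d in range(1, 5)},
                       index=returns.index)

# AR(1) mean with exogenous weekday dummies and a GARCH(1,1) variance,
# mirroring the lagged-return term used to handle autocorrelation
model = arch_model(returns, x=dummies, mean="ARX", lags=1,
                   vol="GARCH", p=1, q=1, dist="normal")
result = model.fit(disp="off")
print(result.summary())  # dummy coefficients test Tue-Fri deviations from Monday
```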
The regression results are presented in Table 2, and most of the weekday coefficients are not significant at either the 1% or 5% level of significance. This indicates the absence of a day-of-the-week effect in the stock returns. However, the FTSE AIM Oil and Gas index return series has significant Monday and Friday coefficients, which are signs of a day-of-the-week effect, as also shown by the results of the F-test, the Kruskal-Wallis test, and the Tukey test depicted in Table 1. Similarly, JKX Oil and Gas recorded a significant coefficient on Friday at the 5% level of significance. Lamprell Plc stock returns also have significant coefficients on Tuesday, Wednesday and Friday at the 1% level of significance. In summary, coefficients in only three series (the FTSE AIM Oil and Gas index, JKX Oil and Gas, and Lamprell) were found to be significant, which is indicative of the existence of a day-of-the-week effect. The results for JKX Oil and Gas and Lamprell Plc contradict those of the F-test, the Kruskal-Wallis test, and the Tukey test, which showed no evidence of day-of-the-week anomalies. The coefficients of both the ARCH and GARCH terms, represented in the results as α_1 and β_1, were found to be strongly significant at the 1% level, which is an additional sign of model appropriateness.
In testing for the monthly effect, binary dummy variables were also created for the monthly (January through December) stock returns as 12 independent variables (a constant parameter is not included in order to avoid the dummy variable trap). Both the dummy variables (independent variables) and the monthly return series (dependent variable) are subjected to a regression model using GARCH specifications. The specifications of the models employed are:

R_t = γ_1 D_Jt + γ_2 D_Ft + γ_3 D_Mt + γ_4 D_At + γ_5 D_Myt + γ_6 D_Jnt + γ_7 D_Jyt + γ_8 D_Aut + γ_9 D_St + γ_10 D_Ot + γ_11 D_Nt + γ_12 D_Dt + u_t

σ²_t = ω + α_1 u²_{t−1} + β_1 σ²_{t−1}

where R_t is the monthly stock return series under investigation and D_Jt through D_Dt represent the binary dummy variables for January through December; for January returns the January dummy variable is equal to 1 and all others are equal to zero, and likewise for the remaining months. The coefficients attached to the dummy variables measure the average deviation of a given month's mean return from the other months' mean returns. If any coefficient is found to be significant, then the monthly mean return attached to that coefficient has deviated from that of the others and, thus, a monthly effect exists. The second equation is the generalised ARCH specification, where σ²_t is the conditional variance, α_1 u²_{t−1} is the ARCH term and β_1 σ²_{t−1} is the generalised ARCH term. The coefficients of the ARCH and GARCH terms are referred to as alpha and beta, respectively.
The results in Table 3 show the monthly effects of January through December on the stock returns of the UK oil and gas companies and some related FTSE indices. Most of the monthly coefficients for the oil and gas companies were found to be insignificant at both the 1% and 5% significance levels, except for oil companies that were listed on the Exchange only recently (2010 onwards). The results for the FTSE indices differ. The January, May and November coefficients were found to be highly significant at the 1% level in the FTSE All Share and FTSE 100 indices. This shows the presence of a January effect, a finding which is well known in the literature. End-of-the-year activities such as Christmas and New Year holidays are part of the reasons for January effects. The May effects were also not a surprise: in the UK, the tax year begins on 6 April and ends on 5 April of the following year, and for that reason most companies operating in the UK prefer to use a financial year that corresponds with the tax year for easier tax assessment. The November effect could be due to the actions or inactions of investors seeking to gain from the December anomaly. The stock returns of the oil and gas companies were found to be insensitive to January effects except in Fortune Oil, Hunting and Aminex. The May coefficient was also significant in the FTSE UK Oil and Gas index returns. Seasonal effects resulting from winter and summer changes in energy usage have not been found in any of the key FTSE Oil and Gas indices. The significance of coefficients in Enquest, Essar Energy, Ophir Energy and Ruspetro was suspected to be due to the short time series of stock returns, as these companies were listed on the Exchange only recently.
Findings
The results generated from our seasonality analysis of the day-of-the-week and monthly effects have not shown any evidence of these calendar anomalies in London-quoted oil and gas stocks or in the few FTSE share indices investigated. Based on these findings, and with all other factors held constant, we cannot ascertain the predictability of oil and gas stock returns due to seasonal fluctuation. This outcome is in line with the findings of other studies, such as Steeley (2001), who noted the disappearance of the weekend effect in the UK market except when the data are partitioned along the direction of the market. Chang et al. (1993) also discovered the disappearance of a day-of-the-week effect in the most recent United States data investigated. However, a January effect has been observed in the FTSE All Share and FTSE 100 indices. Our methodology is also similar to that of Guidi (2010), who examined the existence of a day-of-the-week effect in the Italian stock market using a GARCH model in the regression and found no evidence of the effect in the market's stock returns.
Conclusion
We have attempted to contribute to the existing studies on whether calendar anomalies have any effect on the pricing of stocks. Seasonality analysis is considered another tool that can provide further evidence on the predictability and the market efficiency of the oil and gas sector and some FTSE share indices. Our investigation of London-quoted oil and gas stocks and some FTSE share indices, which employed various statistical tools, could not provide any statistical evidence to suggest the existence of seasonal effects in the UK oil and gas stock returns on the London Stock Exchange. The investigation of the monthly effect has shown the existence of a January effect in the FTSE All Share and FTSE 100 indices. It was, therefore, established that end-of-the-year activities such as Christmas and New Year holidays have a significant impact on the stock returns of the entire market except the oil and gas sector.
Sampling Methods for Metocean Data Aiming at Hydrodynamic Modeling of Estuarine and Coastal Areas
Field observations require adequate metocean data gathering to promote the link between environmental diagnostics and the prognostics obtained from modeling techniques. In general, model confidence can be improved by using data of better quality and by improved parametrizations. This paper discusses and suggests timing routines for data gathering that are sufficient to describe the hydrodynamic behavior of estuarine and coastal areas. From the environmental diagnostics viewpoint, a sampling procedure is defined for the temporal scales, providing data with adequate resolution to describe the natural processes without signal aliasing. The proposed sampling procedure was based on the analysis of a data set of tides, currents, waves, water temperature, and meteorological variables observed at several stations along the Brazilian coast. The instrument setup was based mainly on the results of the harmonic analysis of tides. It is shown that the setup of instruments for simultaneous measurements of currents and waves requires special attention, particularly at sites that present low currents and the action of waves. A subset of data gathered in shallow bays was used to estimate the surface turbulent stress by using a classical and a slightly modified parametrization for the wind drag coefficient. Under near-neutral atmospheric stability conditions and high tide excursion, the surface turbulent stress obtained with the classical and the modified parametrization differed, but the current profiles are expected to be only partially affected by wind-induced drift currents.
Introduction
Advances in electronic devices and manufacturing processes allowed developing sensors and sensor systems that are compact, accurate, and present low energy consumption. Powerful software interfaces and communication subsystems help the planning of standalone or telemetry-based stations and contribute to the widespread monitoring of natural water bodies.
Field observations ranging from relatively simple and over short periods [1] to long-term monitoring with large instrument networks [2] have been used to describe the movement of water bodies as well as for modeling studies [3,4].
The uncertainty of modeling studies can be attributed to input variables and parameters and can arise from modeling errors as well as from the sampling and measurement processes [5].
Although synoptic metocean monitoring is broadly reported in the literature, an extensive discussion on the spatial and temporal scales and their relationships with the instrument setup and timing is not usually found. The terminology metocean data refers to meteorological and oceanographic conditions. As a result of inadequate timing, errors due to aliasing or due to non-stationary signals can affect the uncertainty of models that use such signals as contour conditions or as reference for the model calibration process.
The link between monitoring and modeling tasks can be established through the mathematical description of the movement of natural water bodies by the Reynolds Averaged Navier-Stokes equation (RANS). A brief analysis of the RANS equation and the turbulent stress parameterization indicates a minimum set of metocean variables that should be monitored in order to fulfill the validation and calibration requirements of hydrodynamic models.
Hydrodynamic models are considered a fundamental tool to support research and the engineering design of coastal facilities and structures. Model performance is evaluated by comparing the agreement between model results and observed data. Even for models with good reported performance, the ability to reproduce the movement of water bodies depends heavily on the quality of the data gathered in the study area. Low-quality data induces uncertain model results, which may lead to incorrect design parameters.
As a continued goal, researchers and practitioners should follow procedures to gather environmental data that are based on the adequate spatial and temporal sampling scales. Efficient monitoring is endorsed by most monitoring initiatives quoting the "Ocean Best Practices" supported by The Global Ocean Observing System (GOOS). From a data gathering viewpoint, the goal of this study is to define a sampling procedure based on the scales associated with hydrodynamic processes in shallow waters. The description of such natural systems is usually obtained through large scale modeling techniques and turbulence parameterizations based on the Reynolds decomposition.
Time Scales and Convergence
By the Reynolds decomposition, turbulent flux is described by a large scale and a small scale component representing the average value of a variable and its fluctuations, respectively. The decomposition of a variable u is then written as:

$u_i(t) = \overline{u_i}(t) + u_i'(t)$    (1)

where $\overline{u_i}(t)$ denotes the time average of $u_i$ and $u_i'(t)$ represents its fluctuations. The averaged value of the fluctuations $u_i'$ is equal to zero for an infinite integration period T which, naturally, is not feasible. Then, the accuracy level will be determined by the integration period T, which implies an error due to the finite averaging time.
The integral time scale $\mathcal{T}$ is defined as the time that elapses until a signal presents no more correlation with itself and can be evaluated from the normalized autocorrelation function $\rho(\tau)$ by the expression $\mathcal{T} \equiv \int_0^{\infty} \rho(\tau)\,d\tau$. When the integration period T is much longer than the integral time scale $\mathcal{T}$, the root mean square error $\xi^2$ between the real mean value and the observed mean value of $u_i(t)$ can be estimated by [6,7]:

$\xi^2 = \dfrac{2\mathcal{T}}{T}\,\dfrac{\overline{u_i'(t)^2}}{\overline{u_i}^2}$    (2)

where $\overline{u_i'(t)^2}$ represents the process variance. The underlying assumption is that the process stationarity is kept during the integration time T, and attention should be paid to this fact during the instrument setup for data gathering.
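Where long, high-rate records are available, both quantities in Equation (2) can be estimated directly from the data. The Python sketch below shows one common way to do so; the zero-crossing cutoff of the autocorrelation integral is a practical convention assumed here, not a prescription of this paper:

import numpy as np

def integral_time_scale(u, dt):
    # Normalized autocorrelation of the fluctuations, integrated up to its
    # first zero crossing (a common practical cutoff for noisy records).
    up = u - u.mean()
    acf = np.correlate(up, up, mode="full")[up.size - 1:]
    rho = acf / acf[0]
    cut = int(np.argmax(rho <= 0)) or rho.size  # whole record if no crossing
    return np.trapz(rho[:cut], dx=dt)

def relative_mse_of_mean(u, dt, T):
    # Eq. (2) as reconstructed: xi^2 = (2*T_int/T) * variance / mean^2
    t_int = integral_time_scale(u, dt)
    return 2.0 * t_int / T * np.var(u) / u.mean() ** 2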
The Reynolds decomposition as depicted in Equation (1) allows a simplification of the Navier-Stokes momentum equations which are basic for this discussion.
Variables of Equations for Shallow Water Modeling and Instrumental Support
For the hydrodynamic modeling of an incompressible fluid and considering large scales, the Reynolds Averaged Navier-Stokes momentum equation for shallow waters can be simplified by expanding the pressure term with the Boussinesq approximation and considering that the dynamic pressure and the spatial variations of the atmospheric pressure are very small. The momentum equation can be written, eliminating lower order terms, in the following form [8]:

$\dfrac{\partial u_i}{\partial t} + u_j \dfrac{\partial u_i}{\partial x_j} = -g\dfrac{\partial \eta}{\partial x_i} - \dfrac{g}{\rho_0}\int_z^{\eta} \dfrac{\partial \rho}{\partial x_i}\,dz' + \dfrac{1}{\rho_0}\dfrac{\partial \tau^T_{i3}}{\partial z} + a_i$    (3)

where the variables, suppressing overbars for simplicity, are average values and, using indicial notation, (x, y, z) ≡ (x_1, x_2, x_3) and (u, v, w) ≡ (u_1, u_2, u_3); in this way u, v and w are components of the velocity vector in the directions x, y and z; the variable η is the free surface level, ρ is the fluid density and ρ_0 is a reference value for the density; the term $\tau^T_{i3}$ represents the turbulent stress; and the acceleration terms $a_i$ represent Coriolis forces.
The left side of Equation (3) contains the acceleration and advective terms. Vertical current profiles can be measured at a selected point of the modeled domain by using Acoustic Doppler Current Profilers (ADCPs). The term w ∂u/∂z, representing the vertical advection of horizontal momentum, is usually not accounted for in modeling due to its typically low values and the measurement noise and bias superposed on it. Researchers should pay special attention to the ADCP setup in order to gather good, representative data of the water current time scales.
On shallow waters, the variations of the free surface level η play a very important role in water movement, which implies gathering tidal data by quality tidemeters. This fact is particularly important as surface water level variations are used as contour conditions at open boundaries. In such model limits, the readings of instruments based on pressure sensors installed at the bottom for measuring water level are influenced by the atmospheric pressure. This fact requires the simultaneous monitoring of atmospheric pressure variations in nearby located meteorological stations.
For rivers and channels affluent to the modeled domain, one should prescribe the flow velocity or the discharge rate [8]. For stream gauging, the most common instruments for water velocity measurement employ mechanical, acoustic, and stage-based volumetric techniques [9], but systems based on imaging techniques are also used. The transmission of water level sensor data based on IoT technology helps to establish operational monitoring networks [10], which can also serve modeling purposes.
The observed gradients of the horizontal and vertical profiles of the mass density ρ = ρ(S, T, p), which is a function of the salinity S, temperature T and pressure p, define the need for barotropic or baroclinic models. For coastal areas with small salinity variations, the water column stratification depends mainly on the water temperature and the current magnitude. We quote instruments based on thermistor strings as a useful tool to gather time series of temperature profiles.
Regarding the Coriolis forces, for small-domain models the terms $a_i$ can be neglected.

Specific contour conditions result in the turbulent stress represented by $\tau^T_{i3}$, which is related to variables that can be solved for and, as such, is described by parameterizations at the bottom ($\tau^B_i$) and at the surface ($\tau^S_i$).
Parameterization of the Turbulent Stress Term
The turbulent stress at the bottom $\tau^B_i$ is commonly parameterized by the formulation of [11], when 2D and 3D models are coupled:

$\tau^B_i = \rho\,\dfrac{\sqrt{g}}{C_h}\,u_*\,U_i, \qquad C_h = 18\log_{10}\!\left(\dfrac{6H}{\varepsilon}\right)$    (4)

where ρ is the mass density, in kg m⁻³; g is the gravity acceleration, in m s⁻²; u* is the stress velocity, acting as a scale factor, in m s⁻¹; H is the local depth, m; ε is the bottom roughness, m; $C_h$ is the Chézy coefficient; and $U_i$ is the vertically averaged water speed in the $x_i$ direction, m s⁻¹. By using an Acoustic Doppler Current Profiler (ADCP), researchers can acquire data that describe current magnitude and direction along the water column; current profile data are processed to calculate the vertically averaged water speed $U_i$, which is used for evaluating Equation (4); the ADCP data are also used for calibrating the model, based on the comparison between modeled and observed values; considerations about the setup of ADCPs are found in Sections 3 and 5.3.
The turbulent stress on the surface $\tau^S_i$ is parameterized as a function of the wind speed as:

$\tau^S_i = \rho_{air}\,C_D\,(W_{10} - W_0)^2\cos\varphi_i$    (5)

where $\rho_{air}$ is the air density, in kg m⁻³; $C_D$ is the wind drag coefficient; $W_{10}$ is the wind speed 10 meters above the surface, m s⁻¹; $W_0$ is the wind speed at the sea surface, m s⁻¹; and $\varphi_i$ is the angle between the wind vector and the $x_i$ direction. This formulation for $\tau^S_i$ uses a surface speed $W_0$ dependence and has been shown to lead to improvements in models of tropical ocean waters [12,13].
The dependence of the wind stress on the drag coefficient $C_D$ near the coastal zone was studied by [14], considering the neutral drag coefficient $C_{DN}$ over a variety of conditions. For non-neutral conditions, an atmospheric stability function $\psi_u(\zeta)$ should be included to obtain the drag coefficient $C_D$ under observed conditions using [15]:

$C_D = C_{DN}\left(1 - \dfrac{\sqrt{C_{DN}}}{k}\,\psi_u(\zeta)\right)^{-2}$    (6)

where ζ is the stability parameter and k is the von Kármán constant. For modeling coastal areas, it is common practice to use a parameterization of the wind drag for neutral atmospheric stratification, which implies $C_D = C_{DN}$, expressed by [16]:

$C_{DN} = (0.8 + 0.065\,W_{10}) \times 10^{-3}$    (7)

A minor modification of the above expression was made by including the value of the wind speed $W_0$ at the sea surface. Despite its intuitive importance, this term refers to a parameter that cannot be observed in situ.
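As a concrete reading of Equations (5)-(7) as reconstructed above, the following Python sketch evaluates the neutral Wu drag and the resulting surface stress; the air density value is a typical assumption, and the neutral shortcut C_D = C_DN follows the common practice quoted in the text:

import numpy as np

RHO_AIR = 1.22   # air density, kg m-3 (typical value; site-specific in practice)
KAPPA = 0.4      # von Karman constant

def c_dn_wu(w10):
    # Neutral wind drag coefficient after Wu [16] (reconstructed form, Eq. (7))
    return (0.8 + 0.065 * w10) * 1e-3

def c_d_stability(c_dn, psi_u):
    # Stability-corrected drag coefficient, Eq. (6)
    return c_dn / (1.0 - np.sqrt(c_dn) / KAPPA * psi_u) ** 2

def surface_stress(w10, w0, phi):
    # Surface turbulent stress component, Eq. (5), with the relative speed
    # (W10 - W0); phi is the angle to the x_i direction, in radians.
    cd = c_dn_wu(w10)  # neutral stratification assumed (C_D = C_DN)
    return RHO_AIR * cd * (w10 - w0) ** 2 * np.cos(phi)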
Turbulent Stress and Free Surface Water Variation
Dealing with the atmospheric stability parameter [17], the solution for the turbulent stress on the surface $\tau^S_i$ depends on solving an equation system that describes aspects of air-sea interaction [18]. For evaluating this interaction, the COARE algorithm [19] solves the atmospheric stability functions. Note that the COARE algorithm documentation and the newest Fortran and Matlab computer codes can be accessed at Flux Documentation-Ocean Climate Stations [20]. The bulk COARE algorithm requires input data of wind speed, air temperature and relative humidity, solar irradiance, and precipitation; the sea surface temperature and the water temperature 6 m below the surface are also needed. These variables are not usually observed and represent a limitation to the algorithm's application. This implies calculating the turbulent stress with the help of parametrizations like the one expressed in Equation (7), which depends only on wind data corrected to a reference height. The usual recommendation is to gather wind data at 10 m above the surface, with temperature and relative humidity measured 2 m above the surface. As the water level η changes with time, such reference heights are not maintained, and some form of height transformation is needed.
For shallow water sites with significant tidal excursions, and when winds are not observed on the reference level of 10 m above the surface, the relative importance of the wind sensor mounting height should be evaluated, and corrections applied to turbulent stress terms. This evaluation, however, requires that a set of variables be available as described in the following section.
The Selected Set of Variables and the Need of Specific Sensors
Based on the discussion presented so far for shallow waters, it is possible to select a representative set of metocean variables with two purposes: (i) to allow a hydrodynamic circulation analysis indicating the relationship between cause and effects, and (ii) to provide elements for model validation or calibration.
As indicated by the previous discussion, specific sensors and sensor systems should be used to gather data of: water level η referenced to a vertical datum NR, water current profile u n , wind W 10 measured 10 m above surface, wind W 0 measured at the surface, water temperature profiles, water temperature T A at the surface, temperature T 2 and air specific humidity q 2 measured 2 m above the surface, atmospheric pressure p a and, whenever possible, the main wave parameters: significant height H S , peak period T P and associated direction D P . Some of these variables are depicted in Figure 1 which also illustrates the matching between the water current profile and the wind profile.
At the air-sea interface, part of the wind energy is transferred to the water surface and induces the wind-wave orbital velocities, while another part induces drift currents. For the purposes of this paper, we are dealing with time-averaged values. The expected residual values of the orbital velocities should be small in comparison with the vertically averaged current magnitude. Likewise, under low to moderate wind speeds, drift currents are small in comparison with the vertically averaged current magnitudes, particularly under moderate-to-strong currents. The typical ratio between the surface drift and the wind speed is reported as about 3% [21].

Aiming at a simplified solution, a continuity condition at the free surface between the wind and current profiles is imposed by the method proposed in this paper. This condition is represented by $W_0$ which, due to practical limitations, cannot be measured in situ.

An iterative procedure should be implemented in such a way that the current $u_0$ at the surface approximates, to a certain threshold, the value of the wind speed $W_0$ evaluated at the surface.

For neutral atmospheric conditions, the main steps of the procedure (sketched in code below) are:
1. Let the first non-contaminated ADCP layer (refer to the label $u_n$ in Figure 1) be an estimation for $W_0$ and correct the wind speed $W_Z$ observed at height z by using the logarithmic profile correction:

$W_{10} = W_0 + (W_Z - W_0)\,\dfrac{\ln(10/z_0)}{\ln(z/z_0)}$    (8)

where $z_0$ is the aerodynamic roughness length;
2. Calculate the turbulent stress on the surface $\tau^S_i$ and at the bottom $\tau^B_i$;
3. Solve the water current profile (analytically or numerically) and let the water current at the surface $u_0$ be a new estimation for $W_0$;
4. Repeat steps (2) and (3) until convergence.
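A compact illustration of this loop follows, assuming the neutral log-profile height correction of the reconstructed Equation (8) and an assumed roughness length Z0; the callables solve_current_profile and surface_stress are hypothetical stand-ins for the profile solver and the stress parameterization of Equation (5):

import math

Z0 = 1e-4  # aerodynamic roughness length, m (assumed value for illustration)

def estimate_w0(u_top_bin, w_z, z, solve_current_profile, surface_stress,
                tol=0.05, max_iter=50):
    # solve_current_profile(tau_s) -> u0 and surface_stress(w10, w0) -> tau_s
    # are user-supplied callables standing in for steps (2)-(3).
    w0 = u_top_bin  # step 1: first non-contaminated ADCP layer as initial W0
    for _ in range(max_iter):
        # height correction of W_Z to the 10 m reference (reconstructed Eq. (8))
        w10 = w0 + (w_z - w0) * math.log(10.0 / Z0) / math.log(z / Z0)
        tau_s = surface_stress(w10, w0)      # step 2 (bottom stress analogous)
        u0 = solve_current_profile(tau_s)    # step 3: surface current estimate
        if abs(u0 - w0) <= tol * max(abs(w0), 1e-9):
            return w0                        # step 4: converged
        w0 = u0                              # step 4: iterate
    return w0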
As previously explained, the observed variables included in the momentum equation are large scale variables and, under the hypothesis of an ergodic process, the time averaged values are a good approximation for the ensemble average [7].
Ensemble Averaging
The diagram presented in Figure 2 is used to understand the effect of ensemble averaging on a variable u(t) that is continuous in time. Figure 2a shows a unit impulse train (the shah function) used to take samples of u(t) at uniform time intervals $T_P$ by a multiplication process. The discrete-time sequence u(k) obtained with the sampling process is multiplied by a periodic pulse train with a width of $T_B$ and a period $T_E$, as shown in Figure 2b. Within the time $T_B$, a set of 2N + 1 samples is averaged according to weighting factors, and the resulting value is registered at time intervals $T_E$ as an estimation of the average value of the signal u(t) along the ensemble time, as shown in Figure 2c.
The estimation U(n) of the average value $\overline{u_i}$ of the analog function can then be represented by the symmetric low pass digital filter with 2N + 1 weighting factors as [22]:

$U(n) = \displaystyle\sum_{j=-N}^{N} w_j\, u(nT_E + jT_P), \qquad \sum_{j=-N}^{N} w_j = 1$    (9)
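As reconstructed, Equation (9) is a plain weighted moving average over the burst. A minimal Python sketch on a synthetic burst (uniform weights and burst length are illustrative assumptions) is:

import numpy as np

def ensemble_average(burst, weights):
    # Eq. (9): weighted average of the 2N+1 burst samples; weights sum to one.
    w = np.asarray(weights, dtype=float)
    return float(np.dot(w / w.sum(), burst))

# A burst of 2N+1 = 181 samples at T_P = 1 s (T_B of about 3 min), uniform
# weights; the result would be registered once per ensemble interval T_E.
burst = np.random.default_rng(0).normal(0.3, 0.05, 181)  # synthetic current, m/s
U_n = ensemble_average(burst, np.ones(181))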
Naturally, the number of samples 2N + 1 and the associated burst time $T_B$ will determine the degree of smoothness of the ensemble sequence and will also determine an amplitude error for time-varying signals. In other words, there is a tradeoff, as a very long burst time $T_B$ will result in loss of process stationarity.
A study on ensemble timing is presented in the following section to cover the requirements for turbulence filtering and for avoiding aliasing errors.
Sampling Procedure for Metocean Variables
Ensemble timing defines the time between pulses $T_P$, the burst duration $T_B$ and the ensemble time $T_E$ with the following restrictions and requirements: (i) the ensemble time $T_E$ is determined by the Nyquist-Shannon criterion to avoid aliasing of flow patterns modulated by tides; (ii) the burst time $T_B$ is adjusted to the need of keeping the process stationary during the ensemble sampling; (iii) the time between pulses $T_P$ is determined by the intended degree of smoothness; the high-frequency content of wind waves has to be accounted for when measuring wave parameters.
Aiming to determine a rule for the ensemble timing and the record length or the monitoring period, a dataset from monitoring stations along the Brazilian coast was analyzed to identify the characteristic time scales. The main results are shown below.
Timing for Tide Monitoring
For a time-varying sinusoidal signal representing a tide constituent with amplitude A and frequency ω = 2πf, the maximum slew rate ∆N/∆T occurs at the zero crossing and is expressed by:

$\left.\dfrac{\Delta N}{\Delta T}\right|_{max} = A\,\omega$    (11)

The resulting error ε, written as a fraction of the peak-to-peak variation due to a finite acquisition time $\Delta T = T_B$, is estimated from the slew rate at the zero crossing as:

$\varepsilon = \dfrac{A\,\omega\,T_B}{2A} = \pi f\,T_B$    (12)

For the free surface water level η, the analyses presented in this paper were carried out on tidal time series freely available [23], processed with the harmonic analysis routines [25], whose documentation and Fortran code are available [26].
At the selected stations, the harmonic constituent with the highest frequency and significant amplitude was M8, which has a period of about 3.1 h. Applying Equation (12), the burst period $T_B$ should not be longer than 3 min to keep the full-scale error ε within 5% for the M8 constituent. Although a percentage error of 5% may seem high, it was defined considering that the M8 constituent presented an amplitude of less than 2 cm at sites on the southern Brazilian coast. For a station inside Todos os Santos Bay, at mid-latitudes, the M8 constituent reached an amplitude of 6.5 cm; overtides with higher amplitudes may be observed at specific sites. For the stations on the northern Brazilian coast, the M6 and M4 constituents presented amplitudes of about 5.5 and 13.5 cm, respectively, leading to burst periods of 4 and 6 min under the same considerations.
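Equation (12) yields a direct bound on the burst time for a target full-scale error; the small Python check below (illustrative only) reproduces the 3 min figure quoted for M8:

import math

def max_burst_time(constituent_period_s, eps):
    # Eq. (12): eps = pi * f * T_B  =>  T_B = eps / (pi * f)
    return eps * constituent_period_s / math.pi

# M8 constituent (~3.1 h period) with a 5% full-scale error budget:
print(max_burst_time(3.1 * 3600, 0.05))  # ~178 s, i.e., about 3 min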
Setting the sampling period to four times the Nyquist-Shannon period, the sampling period T E should be shorter than 25 min for the M8 constituent and 30 min for the M6 constituent.
In order to evaluate the quality of contour conditions for modeling based on short tide records, one-month registers around the equinoctial tide were also analyzed for the same stations. For stations with small meteorological effects, the mean sea level estimated from the annual series differed within ± 2 cm from the monthly series. The lowest astronomical tide (LAT), however, differed up to 15 cm when estimated from the annual or the monthly register.
Additionally, gaps were introduced within the monthly series to evaluate the possibility of using still shorter registers to represent the water level as a contour condition. With the benefit of the gap-handling capability of Foreman's routines [25], it was possible to replace the registers of up to the final 6 days of a monthly time series with gaps, while keeping nearly the same mean level and obtaining a similar set of harmonic constants.
Water Temperature-Time Scales in a Shallow Bay
The water temperature dataset for the Guanabara Bay (located in Rio de Janeiro) was selected because it is the only average-to-long-term dataset that could be identified along the Brazilian coast. Additionally, this dataset can be correlated with the tidal data from the Ilha Fiscal station, one of the selected stations for tidal analysis, located inside the Guanabara Bay.
Water column temperature profiles were studied with the data gathered at 30 min intervals by Aanderaa TR-7 thermistor strings installed inside Guanabara Bay, coordinates 22.87 S and 43.15 W, during a period of about 8 months at a 21 m deep site [27]. Although the inner water temperatures depend on the solar radiation and air temperature, the monitored water temperature was strongly affected by the flushing of the bay due to tides; temperature profiles strongly influence the dilution pattern at these sites [28]. Figure 3 shows the power spectrum of the mid-depth water temperature gathered along the 8 months and depicts energy peaks with periods of about 25 h. Due to the strong tide modulation observed in the temperatures, an ensemble timing similar to the one proposed for tide monitoring is suggested.
The density stratification and the current profile define the degree of stratification for the water column. Based on this statement, it is a good practice to sample water temperature and currents at the same depth levels whenever possible.
Monitoring of Current Profiles and Waves
The most common form of using an Acoustic Doppler Current Profiler (ADCP) in coastal waters is the upward-looking fixed mooring which allows simultaneous measurement of the waves and current patterns. The ADCP used was a Workhorse Waves Array operating at 600 kHz manufactured by Teledyne RDI. The Waves Array is an ADCP for current profiling, which also gathers data about wave-induced orbital velocities by the addition of a pressure sensor and a specialized firmware. The instrument was deployed at the bottom attached to a double-axis gimbal to assure alignment very close to the vertical. Please note, the vertical alignment is a requirement for gathering good quality data, particularly for wave measurement.
The ADCP measures the water current at discrete bins of size ∆h according to a user-defined setup. Figure 4 shows the wind speed $W_{10}$, the vertical reference NR of the model, and the depth layer size ∆h; the instrument yields a velocity profile for a range of depths from near the bottom ($U_1$ at $z_1$) to near the surface ($U_n$ at $z_n$). Current profile data are represented by averaging a set of profiles at the ensemble interval $T_E$.
The time between pulses $T_P$ is the repetition rate of acoustic pulses into the water and is related to the high-frequency content of the water level signal. It was found that values of $T_P$ higher than 2 s imply the need for a long averaging time for the ensemble due to the interaction with the wave field. Consequently, the resulting burst time $T_B$ will be longer, and the slew-rate error will be higher. Once the value of $T_B$ is defined, the number of pulses 2N + 1 will define the burst time and will be inversely related to the standard deviation of the averaged ensemble value. The depth cell size ∆h is also inversely related to the standard deviation of the measurement.
Effect of Waves on Current Ensembles
The influence of the sea state on the average value of the current ensembles was evaluated with the data of an ADCP Waves Array 600 kHz deployed in two coastal stations with different wave climates. The instrument was deployed 4 km away from Rio de Janeiro State coast at a place 13 m deep and 2 km away from Bahia State coast at a place 26 m deep.
A typical setup for wave data acquisition uses a $T_P$ value of half a second and a burst time $T_B$ of 20 min. This value of $T_P$ seems adequate considering that wind waves present periods longer than 3-4 s. During a burst time of 20 min, the instrument gathers 2400 samples, which is enough for high-resolution spectral analysis.
The time series of the horizontal component of the velocity sampled at 2 Hz were analyzed to evaluate the time scale associated with the convergence of its averaged values. Equation (2) was evaluated for a relative error level of 2%, considering three sea states represented by increasing significant wave height Hs. The results are summarized in Table 1 for conditions of low, medium, and high values of vertically averaged current magnitude U observed during some selected events. The integration period T needed to obtain the defined relative error was calculated for three depth cells named Bin 1, Bin 2, and Bin 3, which form the Waves Array with Bin 3 closest to the sea surface. As we discard very shallow ADCP depth cells due to secondary lobe acoustic contamination, the values associated with Bin 1 and Bin 2 were used as an estimation of the averaging time to be used for the current ensemble.
The results presented in Table 1 show that conditions of calm sea and high currents require shorter integration times. On the other hand, severe sea conditions over low currents require longer integration times.

Thus, the variance associated with the averaged value of the ensemble increases with the variance of the orbital velocity and with the decrease of the current magnitude.
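Assuming the relative form of Equation (2) reconstructed earlier, the integration period required for a target relative error follows by rearrangement; a one-function Python helper (an illustration, not the authors' code) is:

def required_integration_period(t_int, variance, mean, rel_err=0.02):
    # Rearranging Eq. (2): T = 2 * T_int * variance / (xi^2 * mean^2),
    # here evaluated for the 2% relative error level used with Table 1.
    return 2.0 * t_int * variance / (rel_err ** 2 * mean ** 2)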
In order to evaluate the influence of the interval between pulses $T_P$ on the convergence of the ensemble average, the radial velocity time series for the events observed in Rio de Janeiro on Day 3 at 01:00 AM were resampled at rates of 1 Hz, 0.5 Hz and 0.2 Hz for ADCP beams 1 to 4; the convergence of the time-averaged values is shown in Figure 5.

These results indicate that values of the pulse interval $T_P$ longer than 2 s (i.e., rates lower than 0.5 Hz) do not lead to convergence of the average value of the current ensemble, to a 5% threshold, over a period shorter than the 3 min quoted in Section 5.1. An additional consideration against longer $T_P$ values is the aliasing induced in the short-period variations of the sea surface level.
Near the surface, the magnitude of the current is affected by the wave-current interaction process and is strongly dependent on the wind pattern. The coupled solution for the current and wind profiles near the surface requires data of a set of meteorological variables.
Monitoring Meteorological Parameters
In order to estimate the turbulent stress on the surface $\tau^S_i$, winds, air temperature, air relative humidity and atmospheric pressure, whose sensors are commonly present in coastal meteorological stations, should be monitored.
The winds should be sampled with a time between pulses shorter than 5 s (ideally 1 s). The choice of the burst time T B is defined by the position of the spectral gap in the wind power spectrum presented by Van der Hoven [29]. This gap is a low energy frequency range corresponding to periods between 10 min and 1 h. In this frequency range the average wind speed presents some degree of stationarity, which means averaged values observed in periods between 10 min and 1 h are relatively stable.
Similar gaps were also identified in the air temperature and air humidity power spectra [30]; this fact suggests the use of similar setups for all meteorological variables.
Recommendations for the measurement of meteorological variables, as presented by the WMO [31], are useful for research institutions and instrument manufacturers. Important aspects of station siting and exposure are discussed there. The general discussion found in the guide [31] indicates that, for forecasting purposes, wind records should be averaged over a 10 min interval, taking samples at intervals of 1 s. The WMO Guide [31] also recommends that the dependence of the wind profile on atmospheric stability be accounted for and advises taking samples at intervals of 0.25 s when wind gusts need to be determined.
Field data gathered at 2 stations were selected for evaluating wind stress over the water surface, as presented in Section 6. One station was installed inside Ribeira Bay (south of Rio de Janeiro, approximate location 23.09 S and 44.40 W), and the second one installed inside Marajó Bay (approximate location 0.56 S and 47.91 W). A mechanical sensor model 05305 manufactured by RMYoung gathered wind data from the Ribeira station and the wind data from Marajó station was gathered by an acoustic sensor WindSonic manufactured by Gill. Both sensors were connected to Campbell Scientific CR1000 dataloggers operating in a standalone mode. The dataloggers were programmed to acquire data every 5 s and to record averaged values at 10 min intervals.
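The datalogger program just described is plain block averaging; a minimal Python/pandas equivalent on synthetic data (illustrative only) is:

import numpy as np
import pandas as pd

# One hour of synthetic 5 s wind samples, mimicking the datalogger program.
t = pd.date_range("2020-01-01", periods=720, freq="5s")
wind = pd.Series(4.5 + np.random.default_rng(1).normal(0.0, 0.8, t.size), index=t)

ten_min_mean = wind.resample("10min").mean()  # values recorded every 10 min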
Summary of the Proposed Timing
The synoptic monitoring of the variables of interest allows identifying the relationship between causes and effects.
Considering the tidal modulation of the hydrodynamic pattern of coastal areas, the timing for gathering water level data is mainly defined by the temporal scales of the highest-frequency tidal constituent. Overall, the timing proposed for the other metocean variables is derived from the timing proposed for tide monitoring. As a rule, the timing for tidal currents should follow the setup for water level data gathering. In a typical coastal station for wind and atmospheric pressure measurement, the inclusion of additional sensors for air temperature, humidity, and solar radiation presents low overhead and can be useful for studying atmospheric stability and providing data for water quality models. Table 2 presents a summary of the proposed setup for coastal and estuarine monitoring studies. Naturally, not all instruments allow full flexibility in setting the ensemble timing. On the other hand, the proposed minimum period of 24 days for data gathering aiming at modeling tasks relies on the gap-handling capability of the harmonic analysis tools, as previously discussed. Please note that the preferred ensemble time $T_E$ of 10 min was also chosen to coincide with the timing for measuring winds (see Section 5.4). Naturally, an ensemble time $T_E$ of 20 min (see Section 5.3) allows data resampling aiming at cross-correlation analysis between variables, in cases where low power consumption is a priority.
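For quick reference, the timing recommendations quoted in the text and in Table 2 can be collected in one place; the snippet below simply restates those values (key names are illustrative):

# Proposed ensemble timing, in seconds. Meteorological sampling is
# continuous over the averaging interval, so T_B = T_E for that group.
PROPOSED_TIMING = {
    "water_level":     {"t_p": 1.0, "t_b": 180.0,  "t_e": 600.0},
    "current_profile": {"t_p": 1.0, "t_b": 180.0,  "t_e": 600.0},
    "waves":           {"t_p": 0.5, "t_b": 1200.0, "t_e": 3600.0},
    "meteorological":  {"t_p": 1.0, "t_b": 600.0,  "t_e": 600.0},
}
MIN_RECORD_DAYS = 24  # minimum observation period for modeling tasks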
Considerations about Power Consumption
The power consumption of meteorological sensors and controlling dataloggers and tidemeters is usually very low with current technology. The power consumption increases strongly for stations equipped with telemetry devices. In such cases, solar panels can usually provide energy for stations with no mains supply.
The concern is related to the power consumption of ADCP systems that need to operate in standalone mode. For typical stations established with instruments for current measurement, the energy available from standard battery packs is usually enough for more than two months of continuous operation. The selection of an ensemble time $T_E$ of 20 min, as suggested in the previous section, will result in a deployment with double the autonomy. For typical coastal stations, data quality should not suffer from this extended ensemble time. On the other hand, using higher sampling rates while keeping the same ensemble time $T_E$ increases power consumption and reduces the autonomy proportionally.
Based on the available technology, the quoted procedure does not pose restrictions regarding memory size or energy consumption except for the ADCPs monitoring wave data. The autonomy for current and wave measurements with a Workhorse Waves Array 600 kHz configured with the values proposed in Table 2 (T P of 1 s, T B of 3 min and T E of 10 min for currents; and T P of 0.5 s, T B of 20 min and T E of 1 h for waves) is about 30 days. Please note, these are typical values that can change depending on environmental factors, battery age, deployment depth, and different ADCP models. Naturally, the mooring autonomy concerning energy can be extended with the use of an external battery pack. In summary, we can carry out field works lasting up to 30 days, or even more, in order to gather good quality data.
Estimation of the Wind Stress on the Surface
Field monitoring was carried out with instruments programmed according to the timing setup described in this paper.
The typical setup was first considered for moorings in the shallow waters of Ribeira Bay, located in the south of Rio de Janeiro State. The data indicated low wind speeds (average of 4.5 m s⁻¹, not exceeding 9 m s⁻¹ for short periods), small tidal excursions (lower than 1.5 m), low water currents (not exceeding 30 cm s⁻¹) and different atmospheric conditions during the observation period. However, as no data were available for the sea surface temperature and the water temperature below the surface, the bulk COARE algorithm was not applied.
Another evaluation was done with the data gathered near the mouth of Marajó Bay located on the northern Brazilian coast. The local depth was about 23 m. In this site, the maximum wind speeds were about 15 m s −1 during the observed period, tide excursion was about 5.0 meters in height and surface water currents reached 1.8 m s −1 after half-tide time.
When considering near-neutral atmospheric conditions, instead of applying an air-sea coupling model, turbulent stress values were calculated with the help of the Wu formulation [16]. The results were compared with the modified method, which includes the value of the wind speed $W_0$ at the sea surface and the free surface elevation η. As exemplified in Figure 6 for the zonal component of the turbulent stress at the Marajó station, the values obtained with the two methods are different, as depicted in Figure 6d, but present the same tendency.
Under the action of strong winds and low tidal currents, the major differences between observed and depth-averaged current values are expected to occur at the bottom and the surface.
During the observed period, the winds of lower intensity occurred at the site with the lower tidal currents associated with the lower tidal excursion; on the other hand, the stronger winds occurred at the site with the higher tidal currents associated with the higher tidal excursion.
Although the modified parametrization generated different values of the wind turbulent stress over the surface, the high tidal currents observed at Marajó station are only partially affected by wind-induced drift currents.
For such a site, which presents strong tidal currents, the description of currents by using 2D depth-averaged values should be appropriate, as a first guess, for estimating $W_0$.
Conclusions and Recommendations
A set of metocean variables was defined as enough to solve a simplified form of the momentum equation and lead to a better understanding of the movement of the water on coastal and estuarine areas.
Generally, hydrodynamic models require water level, water current profile, wind, and atmospheric pressure data, plus bed roughness as a calibration parameter. Under near-neutral atmospheric stability, air temperature, air humidity and water temperature data can be neglected.
Water speed on the surface W 0 and free surface variation η were added in a wind drag formulation leading to a better representation of the natural process through the computational models. Additional research should be undertaken by running 3D computational models for large estuarine systems to compare the results obtained with the use of the classical [16] and with the modified formulation.
An ensemble setup whose timing is believed to be adequate for monitoring most of the sites located on shallow waters was proposed for a selected set of variables. The timing used is related to the free surface elevation signal as coastal and estuarine hydrodynamic circulation depends on tidal time scales.
Overall, it is proposed to gather data on tides and tidal currents for 3 min over a 10-min interval. For sites that present low currents under severe sea conditions, the proposed timing is to gather data for 10 min over a 20-min interval, which also decreases power consumption. For meteorological variables, we follow the standard recommendation of taking samples continuously over a 10-min interval, which also coincides with the ensemble period for tides.
As general guidelines for data gathering in coastal areas, we quote: (i) the approach based on the signal slew rate results in a burst time $T_B$ shorter than 3 min for both water level and current signals; (ii) the ensemble time $T_E$ for acquiring water temperature, water level, currents, winds and other meteorological variables should be defined as 10 min; (iii) for all variables, records should be acquired at a 1 Hz rate, except for waves, which should be acquired at 2 Hz; (iv) under low currents and severe sea conditions, the burst time $T_B$ should be increased to 10 min and the ensemble time $T_E$ increased to 20 min; (v) the observation period aiming at modeling tasks should be longer than 24 days; (vi) the application of a bulk air-sea interaction algorithm is not practical for most engineering applications, as it requires additional variables which are not usually available. With the current instrument technology, the typical setup should be adequate for field campaigns with a duration of one month or even more; monitoring periods of at least one month encompass different tidal patterns and present a good probability of covering different meteorological conditions.

Nomenclature
T    water temperature
p    pressure
H    local depth
U_i    vertically averaged current magnitude
ρ_air    air density
C_D    wind drag coefficient
C_DN    neutral drag coefficient
W_10    wind speed 10 m above surface
W_0    wind speed at sea surface
W_Z    wind speed at height z above surface
ψ_u(ζ)    atmospheric stability function
ζ    stability parameter
T_A    water temperature at the surface
T_2    temperature measured 2 m above the surface
q_2    air specific humidity measured 2 m above the surface
p_a    atmospheric pressure
H_S    significant wave height
T_P    wave peak period
D_P    wave peak direction
ω    angular frequency
T_P    sampling time
T_B    burst time
T_E    ensemble time
Meta-classification of remote sensing reflectance to estimate trophic status of inland and nearshore waters
Common aquatic remote sensing algorithms estimate the trophic state (TS) of inland and nearshore waters through the inversion of remote sensing reflectance (Rrs(λ)) into chlorophyll-a (chla) concentration. In this study we present a novel method that directly inverts Rrs(λ) into TS without prior chla retrieval. To successfully cope with the optical diversity of inland and nearshore waters, the proposed method stacks supervised classification algorithms and combines them through meta-learning. We demonstrate the developed methodology using the waveband configuration of the Sentinel-3 Ocean and Land Colour Instrument on 49 globally distributed inland and nearshore waters (567 observations). To assess the performance of the developed approach, we compare the results with TS derived through optical water type (OWT) switching of chla retrieval algorithms. Meta-classification of TS was on average 6.75% more accurate than TS derived via OWT switching of chla algorithms. The presented method achieved > 90% classification accuracy for eutrophic and hypereutrophic waters and was > 12% more accurate for oligotrophic waters than the OWT chla retrieval. However, mesotrophic waters were estimated with lower accuracy by both our developed method and OWT chla retrieval (52.17% and 46.34%, respectively), highlighting the need for improved base algorithms for low-to-moderate biomass waters. Misclassified observations were characterised by highly absorbing and/or scattering optical properties, for which we propose adaptations to our classification strategy.
Introduction
Eutrophication is the process whereby nutrient enrichment leads to excessive primary production of phytoplankton (cyanobacteria and algae) in water bodies (Conley et al., 2009;Smith et al., 2006). The main causes of eutrophication are non-point pollution from agricultural practices, urban development and energy production and consumption (Glibert et al., 2005;Mainstone and Parr, 2002). Increasing frequency and extent of phytoplankton blooms can have implications for ecosystem services and health (Heisler et al., 2008;Lewis et al., 2011;Nixon, 1995). In affected waters, cyanobacteria may produce cyanotoxins which adversely affect human and animal health (Codd, 2000;Merel et al., 2013).
Naturally, lentic waters such as lakes are significant emitters of the greenhouse gases carbon dioxide (CO₂), nitrous oxide (N₂O) and methane (CH₄) (Cole et al., 2007; DelSontro et al., 2018). Enhanced eutrophication due to anthropogenic climate change is expected to increase aquatic CH₄ emissions from lentic waters by 30-90% over the next century (Beaulieu et al., 2019; Tranvik et al., 2009). Over the last decades, several frameworks have been developed to assess and manage eutrophication. Carlson (1977) proposed a Trophic State Index (TSI) linking transparency (Secchi disk depth, zSD [m]), surface phosphorus (P [mg/l]) and phytoplankton chlorophyll-a (chla [mg/m³]) concentrations to the trophic state (TS) of lakes. The index partitioned TS into three classes: oligo-, meso- and eutrophic. In later work, Carlson and Simpson (1996) introduced an additional TS class (hypereutrophic) to include extreme biomass scenarios. More recently, other parameters linked to water optical properties, such as turbidity (NTU) and colour scales, were employed for the retrieval of TS (Binding et al., 2007; Lehmann et al., 2018; Wang et al., 2018).
Of the aforementioned TSI parameters, in situ measurements of chla are most frequently used to estimate TS. Chla is a reliable proxy directly for phytoplankton biomass and indirectly for primary production (Carlson, 1977;Huot et al., 2007;Kasprzak et al., 2008). In situ derived chla is a core indicator in monitoring programs such as the European Water Framework Directive or the U.S. Clean Water Act (Carvalho et al., 2008;Keller and Cavallaro, 2008;Søndergaard et al., 2005). While the extraction of chla from in situ collected water samples has few, and likely low, associated uncertainties, this monitoring approach cannot be scaled up to include remote sites and short-lived phytoplankton bloom phenomena (Schaeffer et al., 2013;Tyler et al., 2016). Aquatic remote sensing complements in situ measurements for the estimation of surface water concentrations by providing a spatial and temporal observation advantage (Mouw et al., 2015).
In aquatic remote sensing, the inherent optical properties (IOPs, i.e. absorption, backscatter and fluorescence) of water and the optically active constituents (OACs), namely phytoplankton pigments (ϕ(λ)), non-pigmented particles (nap(λ)) and the absorption by the chromophoric fraction of dissolved organic matter (a_cdom(λ) [1/m]), impact the remote sensing reflectance (Rrs(λ) [sr⁻¹]) vector (Gordon et al., 1988; Morel and Prieur, 1977). Rrs(λ) is defined as the ratio of water-leaving radiance L_w [μW cm⁻² sr⁻¹ nm⁻¹] to total downwelling irradiance E_d [μW cm⁻² nm⁻¹]:

$Rrs(\lambda) = \dfrac{L_w(\lambda)}{E_d(\lambda)}$    (1)

Rrs(λ) is thus the critical optical property to derive information about the OACs dispersed in the water column of a water body (O'Reilly et al., 1998). The retrieval of the phytoplankton chla concentration, or of the phytoplankton absorption component a_ϕ(λ) [1/m], can be expressed as a function estimation problem that requires inversion of Rrs(λ):

$x = f^{-1}\big(Rrs(\lambda)\big)$    (2)

whereby x is the quantity to invert Rrs(λ) for (Siegel, 1995, 1997). The inversion of Rrs(λ) is known to be mathematically ill-posed, as multiple combinations of IOPs can result in the same Rrs(λ) vector and may thus cause ambiguity in the inversion (Defoin-Platel and Chami, 2007; Sydor et al., 2004). OAC compositions and concentrations vary strongly across inland and nearshore waters; thus, accurate modelling of Eq. 2 has led to the development of numerous chla retrieval algorithms over the past decades (see reviews by Blondeau-Patissier et al., 2014, Matthews, 2011, Odermatt et al., 2012, Tyler et al., 2016). Chla retrieval algorithms may be divided into two categories: empirical and semi-analytical. As the name implies, algorithms of the former category are based on empiricism, in which a functional relationship between an OAC and the optical Rrs(λ) vector is established from field observations and domain knowledge. Popular examples are the Fluorescence Line Height (FLH) (Gower et al., 1999), the Maximum Peak-Height (MPH) (Matthews et al., 2012) and the Maximum Chlorophyll Index (MCI) (Gower et al., 2005) algorithms, which use band arithmetic to relate spectral phenomena associated with phytoplankton to the concentration of chla.
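To give a flavour of this band-arithmetic family, a minimal Python sketch of the MCI is shown below; the 681, 709 and 754 nm band centres are the commonly cited MERIS/OLCI placements and are assumptions here, not values prescribed by this paper:

def mci(l681, l709, l754):
    # Height of the 709 nm radiance peak above the 681-754 nm baseline.
    return l709 - l681 - (l754 - l681) * (709.0 - 681.0) / (754.0 - 681.0)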
Machine learning (ML) algorithms also belong to the empirical category. Typically, ML algorithms are based on non-linear regression models developed with large datasets consisting of field and/or simulated observations (Pahlevan et al., 2021). Regression approaches can also be used to retrieve IOPs such as a_ϕ(λ) (Craig et al., 2012). Retrieved a_ϕ(λ) is then scaled to chla concentration.
Algorithms of the second category, semi-analytical solution algorithms (SAA), invert Rrs(λ) for IOPs. SAA base the retrieval on physical reasoning but partly employ statistical methods (hence the term 'semi'). In the inversion for a_ϕ(λ), SAA show many variants and differ in their definition of the a_ϕ(λ) spectral shape, the method to calculate the magnitude of a_ϕ(λ) and the defined relationship between Rrs(λ) and a_ϕ(λ).
The scaling of a ϕ (λ) to chla derived from SAA or regression approaches can be significantly confounded in optically complex inland and nearshore waters due to pigment packaging and the contribution of accessory pigments to absorption (Bricaud et al., 1995;Simis et al., 2007). Unless this variability is accounted for, non-linear effects in the relationship between a ϕ (λ) and chla will also affect TS estimation.
To reduce retrieval errors Rrs (λ) can be assigned into previously defined and distinct optical water types (OWTs) (Moore et al., 2014;Spyrakos et al., 2018). OWTs are then utilised to guide the retrieval, since a single chla algorithm in practice often shows limited accuracy across a range of OWTs. OWT switching and blending of several algorithms following prior classification into known OWTs has become established practice (Eleveld et al., 2017;Neil et al., 2019).
Whether empirical or SAA algorithms are included in an OWT scheme or stand-alone, they estimate the TS of a water body indirectly by inverting Rrs (λ) for chla or by scaling a ϕ (λ) to chla. The retrieved concentration then indicates a TS class. In a recent study, Shi et al. (2019) outlined that significant uncertainties may propagate into TS estimation due to the limited precision associated with inversion for chla. To overcome intermediate chla retrieval when TS information is ultimately required, Shi et al. (2019) developed an approach that directly relates the light absorption coefficient of OACs to TS using the quasi-analytical algorithm (QAA) by Lee et al. (2002).
In this study, we develop a methodology to overcome issues associated with indirect TS derivation through inversion of Rrs (λ) into chla (or a ϕ (λ)). To accomplish this, our method inverts for TS classes directly through modelling of the TSI system as a classification task. To retrieve TS classes instead of a chla concentration value a classification algorithm is required. For the classification of TS we establish a relationship between the Rrs (λ) vector and a TS class through an in situ dataset (n = 2184) of co-located chla and Rrs (λ) measurements.
We recognise the limited validity of a single algorithm for the information retrieval across many OWTs. However, instead of common switching or blending of algorithms through OWT schemes, we stack multiple classification models and combine their TS class predictions through a higher-level classifier.
To illustrate the classification of TS, we define our dataset in this study as D = {(y_i, x_i), i = 1, …, N}, where y_i is the TS class and x_i is a vector representing the Rrs(λ) values of the i-th instance. Examples of vector classification algorithms (classifiers) are decision trees, support vector machines, neural networks and k-nearest neighbours (Ham et al., 2005; Mou et al., 2017). The aim of vector classifiers is to learn statistically meaningful patterns of observations through the minimisation of a defined loss criterion (Vapnik, 1999). In practice, different statistical approaches overlap because they learn the same properties for a given Rrs(λ) vector. Each classifier model is the result of a statistical learning process generating unique class decision boundaries. Therefore, while one algorithm may fail to correctly predict the true TS class, another may succeed (Polley and van der Laan, 2011; Ting and Witten, 1999). The combination of individual learners is the baseline for ensemble learning. The idea explored here is to construct a strong single learner from several weak learners.
Two of the most popular ensemble methods are bagging (Breiman, 1996) and boosting (Freund and Schapire, 1996). Bagging combines the predictions of weak learners using different bootstrap samples of the training set. Boosting sequentially trains a series of weak learners with weighted versions of the training set based on the performance of previously constructed learners. Wolpert (1992) proposed a linear combination of individual models to form an ensemble and named it "stacked generalisation" also known as "stacking". van der Laan et al. (2007) have extended the original stacking approach with a cross-validation framework and coined it "Super Learner". The classification framework presented here is based on the concept by van der Laan et al. (2007). In the context of this research, we call this higher-level classification algorithm the "meta-classifier". The meta-classifier acts as a meta instance, learning from the decisions of each individual model to predict TS classes for Rrs (λ).
In this study we develop an algorithm for the direct classification of Rrs (λ) into TS. To successfully cope with the optical diversity of inland and nearshore waters, we explore the concept of stacking classifiers in a meta-learning scheme. We evaluate the method on the multispectral resolution of the Sentinel-3A Ocean and Land Colour Instrument (OLCI) for 49 inland and nearshore sites. An established practice is to estimate TS through inversion of Rrs into chla, whereby multiple chla algorithms may be combined in an OWT scheme to address in-water optical complexity. The developed meta-classifier involves multiple classification algorithms, thus we assess the utility of our approach through comparison with TS derivation via OWT switching of several chla retrieval algorithms.
Methods
In situ bio-optical data were sourced from LIMNADES (Lake Bio-optical Measurements and Matchup Data for Remote Sensing: https://limnades.stir.ac.uk/). We assembled a subset of LIMNADES with 2751 samples of co-located in situ chla and hyperspectral Rrs(λ) measurements. The datasets and measurement protocols of OACs and the derivation of Rrs(λ) are described in Spyrakos et al. (2018) (see Table 1 herein). Rrs(λ) in all datasets were measured just above the water surface (0+). We did not apply an additional correction to Rrs(λ), assuming the measurements were made under optimal viewing angles and quality controlled during the original study. For brevity, we omit the wavelength dependency for the remainder of the paper.
For the development of the classification method two independent datasets were created: one for training the classification algorithms and one for evaluating their performance. A priority in the development process was to avoid the allocation of observations from the same water body to both training and test datasets. Mixing or randomising the in situ measurements across both datasets would introduce knowledge about the water bodies included in the test set to the classification algorithms in the training stage and prevent an independent evaluation of the proposed method. We therefore split the entire dataset using the first letter of the water bodies' names: A-D (n = 567, 20%) for testing and E-Y (n = 2184, 80%) for training.
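For illustration, a minimal sketch of this water-body-level split, assuming the samples are held in a pandas DataFrame with a hypothetical 'water_body' name column:

```python
import pandas as pd

def split_by_water_body(df: pd.DataFrame):
    """Assign whole water bodies to train or test by the first letter of
    their name, so no water body contributes samples to both sets."""
    first = df["water_body"].str.upper().str[0]
    test = df[first.between("A", "D")]   # names A-D: ~20% of samples
    train = df[first.between("E", "Y")]  # names E-Y: ~80% of samples
    return train, test
```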
Radiometric data pre-processing
The in situ hyperspectral Rrs measurements from the training and test datasets were spectrally resampled to the multispectral band configuration of Sentinel-3A OLCI, normalised and subsequently assigned a TS class (Fig. 1). These steps are detailed below.
Resampling of the hyperspectral data was necessary to combine Rrs from multiple sources and sensors. The spectral data were convolved to the spectral response function (SRF) of OLCI because the observation frequency, spectral sensitivity and resolution of this sensor are relevant to observe the TS of inland and nearshore waters (Kravitz et al., 2020). To convolve Rrs for each OLCI band i, we calculated the values of the spectral albedo for the bands:

R(λ_i) = ∫_{λ1}^{λ2} R(λ) ϕ_i(λ) dλ / ∫_{λ1}^{λ2} ϕ_i(λ) dλ    (3)

where λ is the wavelength, λ_i is the center wavelength of the i-th spectral band, ϕ_i(λ) is the SRF of the i-th spectral band, (λ_1, λ_2) are the boundary wavelengths of the considered spectral range, R is the spectral albedo and R(λ_i) is the mean spectral albedo in the i-th spectral band. The mean albedo values within the bands represent the spectral albedo seen by the sensor and are often also called the spectral signature of the viewed surface. We note that the convolution was performed on Rrs rather than the mathematically correct L_w and E_d measurements. Any effects are considered negligible at the 10-nm bandwidth of OLCI (Burggraaff, 2020).

Several previous optical classification studies classified untreated Rrs, which is sensitive to both amplitude and spectral shape (Lee et al.). [Table 1: overview of the contributing in situ datasets and their sources, e.g. Binding et al. (2008, 2010, 2013), Bradt (2012), Dall'Olmo et al. (2003, 2005), Dall'Olmo and Gitelson (2006), Gitelson et al. (2007, 2008), Gurlin et al. (2011), Li et al. (2013, 2015), Moore et al. (2014), Schalles (2006), Lake Balaton (Hungary; Riddick et al., 2015) and Lake Bogoria (Kenya; Tebbs et al., 2013).] Because the spectral shape is particularly relevant to optically complex water bodies, the reflectance vector has been normalised prior to the classification (Mélin et al., 2011; Spyrakos et al., 2018; Xi et al., 2015; Xi et al., 2017). The amplitude of Rrs is strongly influenced by particulate (back-)scattering, while the shape is primarily affected by a_ϕ(λ), a_cdom(λ) and the absorption by non-pigmented particles (a_nap(λ) [1/m]) (Roesler et al., 1989). In this study we followed the prior normalisation approach of Rrs to emphasise the shape:

r_n(λ) = Rrs(λ) / ∫_{λ1}^{λ2} Rrs(λ) dλ    (4)

where r_n (in units of nm⁻¹) indicates the normalised spectrum obtained through trapezoidal integration between λ_1 (412 nm) and λ_2 (753 nm) (Mélin and Vantrepotte, 2015). The resampled and normalised reflectance datasets are displayed in Fig. 2.

To enable the inversion for TS we established a relationship between Rrs and the TS classes. We used the TSI definition by Carlson (1977) and the extension made for hypereutrophic waters by Carlson and Simpson (1996). The TSI definition provides a logarithmic model to interpret chla concentrations as indicators for TS classes. We separated our Rrs dataset into four reflectance TS classes (oligo-, meso-, eutro- and hypereutrophic) based on the chla concentrations measured from co-located in situ water samples (Table 2). The compiled dataset covers a wide range of bio-optical conditions and trophic states found across inland and nearshore waters. Fig. 3 displays the logarithmic distribution of chla [mg/m³], total suspended matter (TSM [g/m³]), and a_cdom(443) [1/m] for the training and test sets. The minimum and maximum values of the OACs are given in Table 3.
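A compact sketch of Eqs. 3 and 4, assuming hyperspectral Rrs sampled on a wavelength grid `wl` (nm) and an OLCI spectral response function interpolated onto the same grid (both hypothetical inputs):

```python
import numpy as np

def band_average(wl, rrs, srf_i):
    """Eq. 3: SRF-weighted band average of Rrs for one OLCI band."""
    return np.trapz(rrs * srf_i, wl) / np.trapz(srf_i, wl)

def normalise_shape(wl, rrs, lo=412.0, hi=753.0):
    """Eq. 4: divide Rrs by its trapezoidal integral over 412-753 nm (nm^-1)."""
    mask = (wl >= lo) & (wl <= hi)
    return rrs / np.trapz(rrs[mask], wl[mask])
```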
The training set was used multiple times throughout the classification scheme, whereas the resultant classification algorithms were applied once to the test set for evaluation.
Meta-classification of remote sensing reflectance
In our dataset D = {(y_i, x_i), i = 1, …, N}, y_i represents the trophic class values and x_i the reflectance vector values of the i-th instance. To classify an instance, a library L with k = 4 base classification algorithms (base-classifiers), i.e. L = {k_1, k_2, …, k_4}, was created. The library is a collection of vector classifiers. We invoked the k-th base-classifier in L to predict the class for each instance x_i, along with its true TS classification y_i. Combining these predictions along with the true trophic class vector led to a new dataset, the level-zero data. The level-zero dataset was treated as the training ground for a new learning problem subsequently solved by an additional classification algorithm, the meta-classifier.
Base-classifiers
We trained the meta-classifier with the predictions of four base-classifiers characterised by different statistical assumptions: eXtreme Gradient Boosting (XGBoost), LightGBM (LGBM), Naïve Bayes (NB) and a neural network (NN). The classifier assumptions and the training procedure, involving the stacking of base-classifiers to fit the meta-classifier, are as follows.
The first two classifiers are XGBoost and LGBM (Chen and Guestrin, 2016; Ke et al., 2017). Both classifiers have their statistical origin in gradient boosting machines (GBM) combined with decision trees as base-learners (GBDT) (Freund and Schapire, 1997; Friedman et al., 1998; Friedman, 2000). GBDTs create new models sequentially to provide more accurate estimates of the target variable. The principle is to construct new learners that focus on weak areas already learnt, or in statistical terms, construct learners correlated with the negative gradient of the used loss function (Natekin and Knoll, 2013). For reviews on boosting algorithms see Bühlmann and Hothorn (2007), Schapire (2003). The prediction of the XGBoost algorithm at each iteration t is based on the defined objective function Ĵ:

Ĵ^(t) = L(θ) + Ω(θ)    (5)

where

L(θ) = Σ_i l(y_i, ŷ_i^(t))    (6)

and

Ω(θ) = γT + (1/2) λ Σ_{j=1}^{T} w_j²    (7)

The objective function Ĵ^(t) consists of two parts, L(θ) and Ω(θ), where θ describes the parameters in the equation. L(θ) is a differentiable convex loss function that measures the difference (residual) between a class prediction ŷ_i and y_i at the t-th iteration. The goal of the optimisation process is to construct a tree structure that minimises the loss function in each iteration. The updated tree structure in each iteration learns from the previous tree's model decision and uses the residuals to fit a new residual tree. To construct models that generalise and avoid overfitting, Eq. 5 includes a regularisation term Ω(θ). T in Eq. 7 is the number of terminal nodes in a tree and γ the learning rate (between 0 and 1). γ is multiplied by T to enable tree pruning. Terminal nodes and the learning rate are hyper-parameters that we optimised in separate steps (see 'Hyper-parameter optimisation' below). In Eq. 7, λ is an additional regularisation parameter, and w_j controls the weights of the tree leaves (Goodfellow et al., 2016). Ω(θ) prevents overfitting and allows a better generalisation of the constructed model.
In this study we used the multi-class logarithmic loss function (mlogloss) for both XGBoost and LGBM. Mlogloss measures the performance of the models with an output probability value between 0 and 1 and increases when the predicted probability diverges from the actual class label:

mlogloss = −(1/N) Σ_{i=1}^{N} Σ_{j=1}^{M} y_ij log(p_ij)    (8)

where N is the number of observations, M the number of TS class labels, y_ij a binary indicator of the class label and p_ij the classification probability output by the classifier for the i-th instance and the j-th label (Bishop, 1995; Hsieh, 2009). Solving Eq. 8 directly becomes challenging and computationally demanding. Therefore, Eq. 8 is transformed using a second-order Taylor expansion (Bishop, 2006). The transformation allows the objective function to depend only on the first and second order derivatives of a point in the loss function, also speeding up the process. The main difference between XGBoost and LGBM is the tree construction process. Both classifiers can capture highly non-linear feature-target class relationships. The models can be precisely controlled by tuning a set of hyper-parameters. In addition, each classifier can be trained on both small and large datasets, making them suitable for any given classification task.

The third classifier in our ensemble is a Naïve Bayes (NB) probabilistic model based on Bayes' Theorem:

P(y_i | x) = P(x | y_i) P(y_i) / P(x)    (9)

where P(y_i|x) is the conditional probability that a reflectance spectrum x belongs to a trophic class y_i. The Bayes' rule specifies how this conditional probability can be calculated from the features (wavelengths) of the reflectance vectors of each trophic class, and likewise the unconditional probability (Lewis, 1998). The NB classifier calculated the probability of each trophic class for a given reflectance and output the trophic class with the highest one. We specifically wanted to include the assumption that for some reflectance spectra independent, single wavelengths are dominant, and hence strongly influence the class assignment. Since the wavelengths of multispectral reflectance vectors are at least partly correlated, the NB assumption is naive. In our ensemble of classifiers, NB is one of several base-classifiers generating a TS prediction. Following the theory of stacked generalisation, the meta-classifier should recognise when the NB assumptions apply through evaluation of the predictions for each test reflectance. In case the NB assumptions hold, high prediction accuracies are expected and thus the meta-classifier could prioritise the NB predictions in the decision-making to generate a final TS estimate. In case the NB assumptions do not apply, the accuracies will be low and lead to higher influence on the meta-classifier of other, more accurate base-classifiers.
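As a quick illustration of Eq. 8, scikit-learn's log_loss reproduces the mlogloss for a toy set of four-class predictions (all values below are made up):

```python
from sklearn.metrics import log_loss

y_true = [2, 0, 3, 2]                      # TS classes coded 0..3
p_pred = [[0.10, 0.10, 0.70, 0.10],        # per-class probabilities
          [0.80, 0.10, 0.05, 0.05],
          [0.05, 0.05, 0.20, 0.70],
          [0.20, 0.20, 0.50, 0.10]]
print(log_loss(y_true, p_pred, labels=[0, 1, 2, 3]))
```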
Fig. 2. Hyperspectral in situ data (top row) and the resulting Sentinel-3A OLCI resampled and normalised multispectral reflectance spectra (bottom row) for both training (green) and test measurements (blue).
Table 2. TSI classification after Carlson and Simpson (1996) and assigned reflectance spectra per class in our training and test sets; n is the number of observations.

As the fourth base-classifier we used a NN. NNs have shown success across a diverse set of waters due to their aptitude to approximate non-linear input-output functions (Brockmann et al., 2016; Doerffer and Schiller, 2007; Hieronymi et al., 2017; Ioannou et al., 2013; Krasnopolsky et al., 2002; Krasnopolsky et al., 2018). In this study, a NN with one layer and multiple hidden neurons h_j was trained. The output of the NN for the test dataset was given by:

ŷ = Σ_{j=1}^{n} w̃_j h_j + ã    (10)

with

h_j = ϕ( Σ_i w_ji x_i + a_j )    (11)

where x_i and ŷ are input and output vectors, respectively; w_ji and w̃_j are weights, a_j and ã are fitting parameters and ϕ is the non-linear hyperbolic tangent activation function (Hsieh, 2009). In Eq. 10, n is the number of hidden neurons, each applying the non-linear activation function ϕ of Eq. 11. The defined objective function Ĵ^(t) (Eq. 5) for the NN minimised the mlogloss function (Eq. 8) and the regularisation term Ω(θ) = (1/2)‖w‖₂², also known as weight decay. The NN was trained using backpropagation (Goodfellow et al., 2016). For the multi-class output layer, we used the standard softmax function (Bishop, 2006). It is worth noting that the NN with one layer can be considered shallow, whereas it is becoming more common to use "deeper" NNs characterised by more layers. We could have added additional layers or used a more advanced architecture such as a mixture density network (MDN), as recently demonstrated for inland and coastal waters in Pahlevan et al. (2020). However, the intent of this paper is to present meta-classification, and not to showcase various NN architectures. Further, a deeper NN increases training time and requires more in situ measurements, which are naturally limited. For this application, we therefore opted to keep the NN architecture as basic as possible.
Our library L consists of a diverse set of base-classifiers and underlying statistical models that forwarded the unique information learnt about the reflectance spectra, constituting each trophic class, to the meta-classifier.
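A minimal sketch of such a library using common open-source implementations; the hyper-parameter values shown are placeholders, not the tuned settings of this study:

```python
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

# Four base-classifiers with distinct statistical assumptions.
library = {
    "xgb": XGBClassifier(objective="multi:softprob", eval_metric="mlogloss"),
    "lgbm": LGBMClassifier(objective="multiclass"),
    "nb": GaussianNB(),
    "nn": MLPClassifier(hidden_layer_sizes=(32,), activation="tanh",
                        alpha=1e-3, max_iter=2000),  # tanh units + weight decay
}
```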
Meta-classifier
The meta-classification training and prediction procedures were multi-step processes, as we illustrate schematically in Fig. 4. All classifiers were trained using a v-fold cross-validation scheme with v = 5. Cross-validation enables performance assessment of the classification algorithms during the training process (Schaffer, 1993). The process of training the meta-classifier on the predictions of the base-classifiers in our library L = {k_1, k_2, …, k_4} was as follows:

1. We split the reflectance training set into 5 exclusive folds of n/v = 2184/5 ≈ 437 observations.
2. For each fold:

(a) Reflectance spectra in fold v_i were validation data (hold-out set), while the remaining observations (80% of the reflectance spectra) constituted the training set. Each base-classifier was fit on the training set.

(b) With each base-classifier we predicted the outcome ŷ_i for each reflectance instance x_i in the validation set v_i. The resulting loss of each base-classifier was estimated between the true outcome y_i and its prediction ŷ_i for all reflectance spectra.

(c) For each classifier, the estimated loss rates over the v validation sets were averaged, resulting in the cross-validated loss. For each reflectance, the model of the respective base-classifier with the smallest cross-validated loss was selected for subsequent use by the meta-classifier.
Combined with the true trophic class y_i, we stacked the cross-validated predictions made on the training set by each base-learner to generate a vector of level-zero predictions: pzero_train = {(p_i, x_i), i = 1, …, N}. This important step constituted the training set of the meta-classifier, where each feature in pzero_train was a single prediction of the base-classifiers. For each prediction, we knew the true outcome y_i and hence provided the meta-classifier the necessary training data. The meta-classifier learnt which of the base-classifiers predicted the true trophic class y_i for each training reflectance. We used a separate NN as the meta-classifier. We selected a NN because of its high approximation capability to learn the non-linear decision boundaries necessary to distinguish between the base-classifier predictions. The task of the meta-classifier NN was to select the most accurate base-classifier for each reflectance and assign a final trophic class. The training procedure of the meta-classifier required the level-zero predictions pzero_train.
In the application, each base-classifier in L predicted a trophic class for each reflectance in our test set, resulting in a vector of level-zero test predictions: pzero_test = {(p_i, x_i), i = 1, …, N}. These predictions were then stacked and provided to the meta-learner NN to estimate a final trophic class for each test reflectance.
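The whole procedure condenses to a few lines with scikit-learn utilities; this is a simplified sketch (the per-reflectance loss bookkeeping is omitted), with X_train, y_train and X_test as assumed arrays of normalised Rrs vectors and TS classes, and `library` as in the earlier sketch:

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.neural_network import MLPClassifier

def fit_meta(library, X_train, y_train, X_test):
    # 5-fold cross-validated base predictions form the level-zero training set.
    pzero_train = np.column_stack([
        cross_val_predict(clf, X_train, y_train, cv=5)
        for clf in library.values()
    ])
    meta = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000)
    meta.fit(pzero_train, y_train)

    # Refit each base-classifier on all training data, stack test predictions.
    pzero_test = np.column_stack([
        clf.fit(X_train, y_train).predict(X_test) for clf in library.values()
    ])
    return meta.predict(pzero_test)
```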
In this study we utilised the open-source ML-Ensemble Python library, which interfaces with Scikit-Learn (Flennerhag, 2017; Pedregosa et al., 2011).
Hyper-parameter optimisation
We optimised the learning process of the considered classifiers through hyper-parameter optimisation (HPO). Given a learner A of any of the base-classifiers k in our library L, hyper-parameters λ ∈ Λ were exposed. Tuning hyper-parameters changed the way model A learnt to correctly classify training reflectance spectra in the dataset D. For example, a hyper-parameter of the base-classifiers XGBoost and LGBM limits the maximum depth of the constructed tree. Further, the NNs require a selection of neurons in a layer (as opposed to weights, which are model parameters learnt during training).

Fig. 4. Schematic diagram of the training and application processes included in the meta-classification framework.

Mathematically, HPO can be represented as:

x* = argmin_{x ∈ X} f(x)    (12)

where f(x) is the objective function to minimise (or maximise), such as the mlogloss, and x* is the set of hyper-parameters drawn from a domain X that yields the lowest (or highest) value of the function. In practice, X was a previously defined grid of hyper-parameters. f is a black-box function and has a large set of hyper-parameter combinations that are computationally costly to evaluate. The search that optimises f is often either manually performed or accomplished by selecting randomly from a set of hyper-parameters. Another option is to search through a grid consisting of a substantial combination of all possible hyper-parameter configurations (Bergstra et al., 2011; Bergstra and Bengio, 2012; Thornton et al., 2012). Because our meta-classification approach involved the training of classifiers with several hyper-parameters, a manual, random or grid search approach was considered impractical. These search approaches are time intensive and susceptible to missing an optimal hyper-parameter configuration. In this study, we instead followed the concept of Bayesian optimisation (Jones et al., 1998; Streltsov and Vakili, 1999). As in the Naïve Bayes classifier, Bayesian optimisation is based on Bayes' Theorem, stating that the posterior probability (or hypothesis) M of a learner (or model) A given data points D is proportional to the likelihood of D given M multiplied by the prior probability of M:

P(M|D) ∝ P(D|M) P(M)    (13)

Bayesian optimisation methods can be understood as a sequential process that builds a probabilistic model by keeping track of past evaluation results (Brochu et al., 2010). A probabilistic model is built by mapping hyper-parameters to a probability of a score on the objective function f.
One can represent this model as P(C|x), where C is the score for each hyper-parameter set x. In the literature, the model P(C|x) is called the utility (or surrogate) function u. In this study, we built the model with a Gaussian Process (GP) as the prior probability model on f (Rasmussen and Williams, 2005). GPs have become a standard in Bayesian optimisation (Martinez-Cantin, 2014; Snoek et al., 2012). The surrogate function u was then optimised, and the posterior distribution used to sample the next set of hyper-parameters:

x_{t+1} = argmax_x u(x | D_{1:t})    (14)

To find the next best point to sample f from, a point was chosen that maximised an acquisition (or selection) function, here the expected improvement (EI):

EI(x) = E[ max( f(x) − f(x⁺), 0 ) ]    (15)

where x⁺ is the current optimal set of hyper-parameters. Maximising EI selects the point expected to improve upon f most. f was continually evaluated against the true objective function until a defined maximum of iterations was reached. By continually updating the surrogate probability model, Bayesian reasoning led to reliable results. The next set of hyper-parameters was selected based on the previous performance history instead of a costly grid search through all possible hyper-parameter combinations. Several libraries exist that implement Bayesian optimisation. Here we used Scikit-Optimize, as it is built on top of Scikit-Learn (Head et al., 2018).
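A hedged sketch of such a GP-based search with Scikit-Optimize, tuning two illustrative LGBM hyper-parameters (the search space and call budget are assumptions, not the settings of this study):

```python
from skopt import gp_minimize
from skopt.space import Integer, Real
from sklearn.model_selection import cross_val_score
from lightgbm import LGBMClassifier

def tune_lgbm(X, y, n_calls=30):
    space = [Integer(2, 12, name="max_depth"),
             Real(0.01, 0.3, name="learning_rate", prior="log-uniform")]

    def objective(params):
        depth, lr = params
        clf = LGBMClassifier(objective="multiclass", max_depth=depth,
                             learning_rate=lr)
        # scoring returns the negative log loss; flip the sign so that
        # gp_minimize minimises the cross-validated mlogloss.
        return -cross_val_score(clf, X, y, cv=5, scoring="neg_log_loss").mean()

    return gp_minimize(objective, space, n_calls=n_calls, random_state=0)
```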
Optical water type switching to derive trophic status
To assess the performance of the developed meta-classifier, we compare it against derivation of TS through OWT switching of chla retrieval algorithms (see Fig. 5).
Our OWT switching approach is based on three previous studies. First, the OWTs for all Rrs in our training and test datasets are available from Spyrakos et al. (2018) (Fig. 6). Second, our dataset is almost identical (98% common) to the one used in the study by Neil et al. (2019) (n = 2807, compared to n = 2751 herein). Neil et al. (2019) assessed the performance of 48 chla algorithms on their dataset, resulting in one best-performing algorithm per OWT (see Table 5 therein). Since the datasets of the two studies are nearly identical, the performance results of Neil et al. (2019) are considered valid for the dataset of the present study. Third, Neil et al. (2019) recommended chla algorithms for groups of OWTs when applied to independent, unknown data (such as the test set herein). We slightly modified the selection of algorithms, based on recent performance evaluations from the European Copernicus Global Land Service (Simis et al., 2020). Four chla algorithms were thus assigned to groups of OWTs (Table 4). Restriction of the OWT scheme to four chla algorithms increases the quality of the exercised comparison, as the meta-classifier is equally based on four base-learners.
For the retrieval of chla through OWTs two approaches exist. The first approach uses the most dominant/highest OWT membership score a Rrs received in the clustering process performed in Spyrakos et al. (2018). Chla is then retrieved with an assigned algorithm per OWT. In the present study we refer to this approach as OWT switching of chla algorithms. The second approach utilises the highest n OWT membership scores per Rrs to retrieve chla with n algorithms. The n retrieved chla values are then weighted to reflect the OWT membership scoring, resulting in a blended chla value (Moore et al., 2014).
The blending procedure varies depending on the value of n and the definition of the weighting function. Since the largest impact on the chla retrieval originates from the algorithm chosen for each OWT, we simplified the process and utilised the highest OWT membership score per Rrs.
The meta-classifier was trained with 80% of the observations of the overall dataset. The coefficients of the chla algorithms included in the switching scheme were thus re-calibrated solely with measurements included in the respective OWT group of the training dataset. For example, in our OWT switching scheme, the OC2 algorithm was assigned to OWTs 3, 9, 10 and 13, which combined constitute 511 observations in the training set. Using these OWT group measurements of the training set, the coefficients of the OC2 fourth-order polynomial were estimated using non-linear least squares fitting. As a result of the dataset split into independent training and test sets (which did not take into account OWT memberships), all measurements included in OWT 13 were assigned to the training set. Therefore the OC2 algorithm could only be applied to measurements of OWTs 3, 9 and 10 included in the test set.
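An illustrative re-calibration of an OC2-style blue/green band-ratio polynomial with SciPy; the band choice (490/560 nm) and the functional form shown are assumptions for demonstration, not the exact operational OC2 formulation:

```python
import numpy as np
from scipy.optimize import curve_fit

def oc2_poly(log_ratio, a0, a1, a2, a3, a4):
    """Fourth-order polynomial in log10 band ratio, returning log10(chla)."""
    r = log_ratio
    return a0 + a1*r + a2*r**2 + a3*r**3 + a4*r**4

def recalibrate_oc2(rrs_blue, rrs_green, chla):
    """Fit polynomial coefficients on one OWT group's training data."""
    log_ratio = np.log10(rrs_blue / rrs_green)   # e.g. OLCI 490/560 nm
    coef, _ = curve_fit(oc2_poly, log_ratio, np.log10(chla))
    return coef  # apply as 10 ** oc2_poly(log_ratio_test, *coef)
```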
We did not re-calibrate the coefficients of the Gons and Gilerson 2-band algorithms, as the number of required chla-specific absorption (a*_ϕ(λ) [m² g⁻¹]) and backscatter [1/m] measurements included in the respective OWT training groups was low and thus insufficient for this purpose.
Chla from a_ϕ(665) [1/m] retrieved by QAA Mishra was estimated using the power-law relationship by Bricaud et al. (1998):

chla = ( a_ϕ(665) / a )^(1/b)    (16)

where a and b were calibrated with training data included in OWT group 7.
Unlike the meta-classifier that operated on normalised Rrs (see Section 2.1), the chla retrieval algorithms were applied to non-normalised Rrs at corresponding OLCI wavelengths.
Each algorithm retrieved chla for the test Rrs measurements contained in the assigned group of OWTs. TS was subsequently derived from the retrieved chla concentration according to the TS class ranges depicted in Table 2.
Performance evaluation
To evaluate the base-classifiers independently of the meta-classifier and vice versa, we compared them to a separate support vector machine (SVM) classifier (Cortes and Vapnik, 1995). Identical to the base-classifiers, we used the same training and test sets and procedures to train the SVM.
For the evaluation of TS classifications, either through meta-classification or derived via conventional chla retrieval, we calculated the following metrics:

1. Overall Accuracy (OA). The number of reflectance instances x correctly classified into each of the four TS classes M, divided by the total number of test samples (n = 567):

OA = n_correct / n    (17)

2. Average Accuracy (AA). The average classification accuracy across all four trophic classes:

AA = (1/M) Σ_{j=1}^{M} OA_j    (18)

3. Kappa Coefficient (Kappa). The percentage agreement corrected by the level of agreement that could be expected due to chance alone:

Kappa = (p_0 − p_e) / (1 − p_e)    (19)

where p_0 was the accuracy and p_e was the probability of agreement by chance (Cohen, 1960; Congalton, 1991).
OA and AA are not equal because each trophic class holds a different number of samples. Because of the dataset split procedure, the dataset suffers from an imbalance between the classes (see Table 2). Using OA alone lacks precision because the smaller number of samples in trophic classes 1 and 2 have less impact on the final accuracy score than class 3 (eutrophic). Hence, we calculated AA for all classification models. Because the eutrophic class has the largest number of samples, large differences between OA and AA may indicate a biased classification model. We included the Kappa coefficient to estimate the probability of a correct class assignment by chance alone.
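All three metrics (Eqs. 17-19) map onto standard scikit-learn scorers; note that the macro-averaged recall ("balanced accuracy") equals the per-class average accuracy used here:

```python
from sklearn.metrics import (accuracy_score, balanced_accuracy_score,
                             cohen_kappa_score)

def ts_metrics(ts_true, ts_pred):
    return {
        "OA": accuracy_score(ts_true, ts_pred),            # Eq. 17
        "AA": balanced_accuracy_score(ts_true, ts_pred),   # Eq. 18
        "Kappa": cohen_kappa_score(ts_true, ts_pred),      # Eq. 19
    }
```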
In the comparison of the meta-classifier against TS derived via OWT switching, we evaluated the results of the regression of chla retrieved from an algorithm (estimated (E)) versus the in situ chla values (observed (O)).
We assessed the residuals of E_i − O_i with log-transformed metrics, as they enable a robust assessment of the algorithms over large concentration ranges of chla (O'Reilly and Werdell, 2019; Seegers et al., 2018):

Fig. 5. OWT switching scheme of chla algorithms to derive TS. OWT clustering of the dataset was performed in Spyrakos et al. (2018). Chla algorithm selection was based on benchmark results from Neil et al. (2019) for this dataset and modifications undertaken in Simis et al. (2020). Each group of OWTs was assigned one chla algorithm. Algorithm coefficient calibration was performed on the respective OWT group training data and the re-calibrated algorithms were applied to the test observations of the respective OWT group. TS was derived from the retrieved chla value based on the TS class ranges defined in Table 2.
1. Bias, which quantifies the average difference between estimated chla and the observed in situ value and is robust to systematic errors produced by an algorithm:

bias = 10^( (1/n) Σ_{i=1}^{n} ( log10(E_i) − log10(O_i) ) )    (20)

2. Mean Absolute Error (MAE), which captures the error magnitude accurately but can be sensitive to outliers:

MAE = 10^( (1/n) Σ_{i=1}^{n} | log10(E_i) − log10(O_i) | )    (21)

3. Median Absolute Percentage Error (MdAPE), which is outlier-resistant. For each sample i:

MdAPE = median( | (E_i − O_i) / O_i | × 100 )    (22)

Additionally, we provide the slope of the linear regression fit to enable comparisons with previously published results. We omit the coefficient of determination (r²) as it lacks a response to bias, is sensitive to outliers and is thus subject to false interpretation (Seegers et al., 2018).
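A direct transcription of Eqs. 20-22, following the log-transformed metric definitions of Seegers et al. (2018); E and O are arrays of estimated and observed chla:

```python
import numpy as np

def log_bias(E, O):
    return 10 ** np.mean(np.log10(E) - np.log10(O))           # Eq. 20

def log_mae(E, O):
    return 10 ** np.mean(np.abs(np.log10(E) - np.log10(O)))   # Eq. 21

def mdape(E, O):
    return np.median(np.abs((E - O) / O)) * 100.0             # Eq. 22
```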
Meta-classification
Within the library of base-learners for the meta-classifier, LGBM and XGBoost performed similarly, with slightly higher prediction accuracies by LGBM across all classes. Differences were primarily due to their distinct approaches to building the decision trees. Both models constituted balanced base-learners without major prediction failures for any of the TS classes.
Table 4. Chlorophyll-a algorithms included in the OWT switching scheme; calibration coefficients for each model highlighted in bold.

The comparatively high overall accuracy of the NB classifier is explained by the higher number of test samples in the eutrophic class 3 (n = 332) and the disproportionate impact on the overall accuracy metric. Consequently, AA becomes a more relevant metric because it incorporates the imbalanced dataset distribution. The NB AA score was approximately 15% lower at 60.22%. NB assumptions only applied to eutrophic and partially to hypereutrophic waters (68.22%). For oligotrophic waters, the NB classifier performed comparably to the other classifiers.
The base-classifier NN achieved the highest accuracies for oligotrophic systems (61.11%). Compared to LGBM and XGBoost, the results were inferior for mesotrophic (45.65%) and hypereutrophic waters (82.24%). However, as for the other classifiers, the NN scored high accuracies for eutrophic waters (92.17%). Whereas for oligotrophic and eutrophic waters the prediction accuracies by the NN were competitive, the model's predictions were not balanced across the entire set of TS classes. Notably, higher accuracies for the oligotrophic class were accompanied by lower precision for mesotrophic waters. Similarly, the eutrophic class was retrieved with high precision, whereas less accurate predictions for hypereutrophic waters were made. Because the NN in this study is considered shallow, adding depth to the architecture may stabilise the predictions made across the TS classes. Therefore, more thorough experiments with different NN architectures need to be undertaken than were exercised in this study. For exemplification of the meta-classifier concept, the NN sufficed to add meaningful information to the ensemble of base-learners.
The non-base SVM classifier scored the highest accuracy for mesotrophic waters (63.04%, 6.52% more accurate than the highest base-learner prediction by LGBM (56.52%)) and hypereutrophic waters (0.94% more accurate than the LGBM predictions). SVM predictions were 10.87% and 1.87% more accurate than those of the meta-classifier for these two classes, which in sum represents a significant performance gain. However, the SVM misclassified a large proportion of the eutrophic class (73.79% accuracy compared to 92.16% by the meta-classifier), reducing all performance metrics significantly.

Table 5. Classification accuracies of the different classifiers for the test set, shown in percentages. The highest accuracy in each row is shown in bold.

Fig. 7. Classification matrices for predictions made by all classifiers on the independent test set (n = 567). The percentage of reflectance spectra assigned per TS class is shown. Yellow colours indicate high, purple colours low percentages per classifier. TS classes are denoted as 1 = Oligotrophic, 2 = Mesotrophic, 3 = Eutrophic, 4 = Hypereutrophic.
In the present study, the SVM functioned as a standalone comparison model and was therefore not incorporated into the ensemble of base-learners. However, given the performance gains of the SVM over the other base-learners for mesotrophic and hypereutrophic waters, the addition of the SVM to the ensemble should be investigated. Before adding the SVM, it needs to be clarified whether the eutrophic misclassifications are a primary consequence of the model's more accurate mesotrophic and hypereutrophic classifications. If included, it is furthermore important to validate whether the lower accuracies for eutrophic waters can be handled by the meta-classifier without an overall performance loss for this class. Only if the meta-classifier can discard misclassifications accurately would the performance gains of the SVM for mesotrophic and hypereutrophic waters improve overall meta-classifier accuracies. Inclusion of the SVM into the ensemble of base-learners would also require re-training of the meta-classifier NN.
The meta-classifier achieved the highest classification accuracies across all performance metrics (OA: 83.92%, AA: 75.41%, Kappa: 71.72%) and for the oligotrophic class 1 (66.67%). In comparison, the base-classifiers' average accuracy for oligotrophic waters was 56.25%; the meta-classifier improved on this score by 10.42%. Compared to the oligotrophic class, the meta-classifier did not improve over the most accurate base-classifiers for mesotrophic waters. The decision-making process of the meta-classifier to prioritise a reliable base-classifier became increasingly complex in the case of strongly differing base-classifier predictions. For mesotrophic waters, the meta-classifier had to discard the poorly performing base-classifiers NB (22.83%) and NN (45.65%) in favour of the more accurate XGBoost (53.26%) and LGBM (56.52%). The meta-classifier was not able to entirely dismiss the NB and NN classifiers compared to the most reliable performance achieved by the base-learner LGBM. Despite the poor performances by NB and the NN, the meta-classifier scored 52.17% prediction accuracy for mesotrophic waters. Since the selection of a base-classifier for a reflectance was learnt using the training data, choosing incorrect classifiers for a reflectance of the test set was an expected outcome in heterogeneous classification scenarios.
In contrast, the meta-classifier generated highly accurate results for eutrophic and hypereutrophic waters (92.16% and 90.65%, respectively), which were significantly higher than for the oligo- and mesotrophic classes. Confusion by the meta-classifier between these two classes was below 10%. Out of the four chla algorithms, the OC2 algorithm performed accurately for low chla concentrations, but struggled to estimate higher chla waters accurately (MAE of 0.79 [mg/m³], negative bias (−0.19)). The OC2 stagnated at approximately 14 mg/m³ of chla, which can be explained by the saturation of the polynomial when calculating higher chla concentrations. The same stagnation can be observed in the algorithm's application to the training data it was calibrated with (grey hexagons in Fig. 8; metrics not shown).
Optical water type switching of chla algorithms
The failure of the OC2 algorithm to retrieve chla accurately at higher concentrations is an expected outcome, since the OC2 algorithm was designed for clear waters, where phytoplankton dominates the optical properties. The retrieval result of OC2 indicates that the blue and green regions of the test Rrs were not changing only as a function of phytoplankton; thus the blue/green ratio of the OC2 algorithm led to inaccurate retrievals.

Fig. 8. Performance evaluation of chla retrieval algorithms included in the OWT switching scheme: OC2, Gilerson 2-band, Gons and QAA Mishra. Coloured circles represent algorithm retrievals for measurements included in the respective OWT test groups (n = 567). For illustrative purposes, grey hexagons represent algorithm retrievals for the respective OWT training groups the algorithms were calibrated with. Metrics are shown for test data.

TS derivation following the chla retrieval through OWT switching is shown in Fig. 9. The accuracy of the OWT chla algorithm switching approach to derive TS for the test dataset was 79.54% (OA), 68.66% (AA) and 63.38% (Kappa). As for the meta-classifier, the largest errors occurred for oligo- and mesotrophic waters, whereas the retrieval was highly accurate for eutro- and hypereutrophic waters (> 85% accuracy). For oligotrophic waters, the meta-classifier was 12.38% more accurate (66.67% versus 54.29%, respectively) and 5.83% more precise in the classification of mesotrophic waters than derivation through OWT switching of chla algorithms (52.17% and 46.34%, respectively). The developed meta-classifier was slightly more accurate for eutrophic (4.12%) and hypereutrophic waters (4.67%). Using the AA metric that incorporates the imbalance of samples per TS class, the meta-classifier was on average 6.75% more accurate than the OWT switching of chla algorithms (75.41% and 68.66%, respectively).
Misclassifications of oligo- and mesotrophic classes
Both the meta-classifier and the OWT switching scheme misclassified a high percentage of oligo- and mesotrophic reflectance spectra; for both classes, the misclassification rates were higher still when TS was derived through OWT switching of chla algorithms. Here we investigate the misclassifications of the meta-classifier.
The meta-classifier misclassified 19.44% of the oligotrophic reflectance spectra as mesotrophic, and 38.04% of the mesotrophic reflectance spectra were falsely classified as eutrophic (see Fig. 7). None of the oligotrophic and mesotrophic test waters were misclassified as hypereutrophic. To investigate the misclassifications, we plotted the distributions of the OACs per TS class of the training and test sets (Fig. 10). Based on the TS definition and the split of measurements into training and test sets after each Rrs was assigned a TS class, the two datasets showed almost identical chla concentrations within each class. Greater variation occurred only in the hypereutrophic class, for which a maximum chla [mg/m³] concentration was not defined. In contrast, TSM [g/m³] concentrations and a_cdom(443) [1/m] varied strongly between the oligo- and mesotrophic classes. Since chla concentrations were low for both the oligo- and mesotrophic classes, TSM was dominated by inorganic particle loads, leading to highly turbid and strongly scattering water properties.
Based on the constituent medians of the OACs, the optical properties of the oligotrophic class in the training set were mostly dominated by phytoplankton chla, as the a_cdom(443) and TSM medians were comparatively low.

The misclassified reflectance spectra of both the oligotrophic and mesotrophic waters reflect the influence of high sediment loads (Fig. 11). The Rrs vectors of misclassified oligotrophic instances (19.44% as mesotrophic and 13.89% as eutrophic) do not reflect a significant reduction in Rrs values at 560 nm to 620 nm that characterises correctly assigned oligotrophic class observations. Moreover, misclassifications show high reflectance values in the red to near-infrared part of the spectrum. The reflectance spectra are similar in shape and magnitude compared to the training data of the mesotrophic and eutrophic waters. A comparable pattern can be observed for the 35 misclassifications of the mesotrophic class (classified as eutrophic), wherein both shape and magnitude are similar to the training vectors of the eutrophic class.
The reflectance spectra contained in the test sets of the two lowest TS classes were influenced by higher concentrations of absorbing a_cdom(λ) and/or concentrations of scattering particles than represented in the provided training data. Consequently, the corresponding Rrs vectors were substantially less present in the training sets, which influenced the learning of the classifiers. Without appropriate representation of these waters, the classifiers were unable to adjust their class decision boundaries accordingly. For the classifiers in the training stage, the corresponding Rrs vectors were more similar to those abundant in higher trophic classes, which consequently led to incorrect TS predictions on the test set.
Meta-learning
A single retrieval algorithm often has limited suitability for use over a range of optically complex waters. Meta-learning represents a novel approach to handle the limits of individual algorithms. In this study, the prediction accuracies of the base-classifiers and the SVM varied strongly across the four TS classes. Overall, the meta-classifier was able to identify with high precision the correct and incorrect TS class predictions made by the individual base-classifiers. The high classification accuracy achieved by the meta-classifier over the separate SVM and the base-classifiers validates the stacking theory. Training a meta-learner on the predictions of base-learners can result in significant prediction improvements and reduces the dependency on individual algorithms. Meta-learning also decreases the requirement for knowledge about the performance of a single retrieval algorithm prior to its application to unseen observations. Inherently, the meta-learner has access to the prediction performance of each base-learner during the application through the provided level-zero predictions. Independent of the encountered water type, the meta-learner can thus decide on a specific base-learner for each observation.

Fig. 9. Classification matrix for TS predictions on the independent test set (n = 567) derived from OWT switching of chla algorithms. The percentage of reflectance spectra assigned per TS class is shown. Yellow colours indicate high, purple colours low percentages. TS classes are denoted as 1 = Oligotrophic, 2 = Mesotrophic, 3 = Eutrophic, 4 = Hypereutrophic.
Direct trophic status classification
By directly classifying Rrs into TS, the presented method avoided some of the issues inherent to TS derivation via chla. For example, the meta-classifier was not confronted with the task of scaling a ϕ (λ) to chla. Naturally, phytoplankton is part of TSM and produces dissolved organic matter. Indirectly, specific phytoplankton groups favour certain water conditions, and also cluster by turbidity or dissolved organic matter loads (e.g. due to riverine influence). When higher or lower phytoplankton absorption efficiency is correlated to changes in a cdom (λ) or a nap (λ), the classifiers can incorporate the resulting influence on the absorption budget in their decision-making through the varying contribution of these IOPs to the Rrs vector.
The base-classifiers were trained using Rrs with previously assigned TS classes. Notably, the chla values used to define the TS class ranges were unknown to the classifiers during the training process. Since the TS class ranges were a function of chla, the base-classifiers learnt this functional relationship indirectly. Corresponding Rrs vectors were treated by the classifiers without knowledge of the OAC concentrations. Consequently, the classifiers learnt to define a TS class decision boundary in their feature space through the provided Rrs vectors, whereby the input features corresponded to the values at the band positions of OLCI. Additional bands may be added to the Rrs vectors in the training stage to improve the optical distinction of TS classes.
Adaptation of the classification framework
In misclassified turbid oligotrophic and mesotrophic waters the optical properties were dominated by high inorganic particle loads. These properties weakened the established relationship between chla and the Rrs vectors that defined the TS class assignment. However, scenarios where biological productivity is light-limited due to high suspended sediment loads are common in natural waterbodies (e.g. rivers) and must thus be better incorporated into the presented classification scheme. To adapt the classification method to turbid waters, other optical parameters can be employed for the TS class assignment. For example, the TSI definition by Carlson (1977) makes it possible to relate transparency (in the form of zSD measurements), which is inversely related to turbidity, to TS. The TS class assignment would then be based on the relationship of transparency to Rrs. While this TS class assignment could be useful for turbid waters, it likely has its own limitations. Therefore, the TS class assignment should ideally be based on the encountered water conditions, warranting the definition of an optical criterion to switch between chla and zSD in the assignment. The use of other optical or water colour indicator parameters to classify Rrs directly into TS might require a different TSI definition than used in the present study.
Conclusion
This is the first study that demonstrates direct classification of Rrs into TS to overcome issues that are inherent to TS derivation via chla. For the classification of TS, we stacked unique base-classifiers in a meta-learning scheme. The classifiers of this study were trained with a large in situ dataset of co-located Rrs and chla measurements (n = 2184). When applied to test observations (n = 567), the developed approach demonstrated that direct meta-classification of TS can significantly outperform indirect TS derivation via OWT switching of chla algorithms. The meta-classifier estimated eutrophic and hypereutrophic waters with > 90% prediction accuracy, making the proposed method a reliable tool to assess and monitor eutrophication of inland and nearshore waters. Our method improved retrieval accuracies for oligo- and mesotrophic waters over OWT switching of chla algorithms by 5-12%. Nevertheless, accurate classification of TS for low- to moderate-biomass waters influenced by high TSM concentrations and/or a_cdom(λ) remains a primary challenge to solve.
The classifiers of the presented study were trained with 80% of the dataset. Improvements to the developed approach can be based on the inclusion of additional base-learners such as the SVM and the re-training of the classifiers with the entire dataset. In addition, the TS class assignment may be based on other optical TS indicators. Performance improvements for the oligo- and mesotrophic waters are therefore likely. In this study we exemplified the algorithm at the multispectral resolution of Sentinel-3A OLCI. After resampling of the training dataset, the algorithm can, however, be applied to other sensors such as the Multispectral Instrument (MSI) of the Sentinel-2 satellite to enable cross-mission retrievals of TS with the same method.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Use of Metarhizium anisopliae Chitinase Genes for Genotyping and Virulence Characterization
Virulence is the primary factor used for selection of entomopathogenic fungi (EPF) for development as biopesticides. To understand the genetic mechanisms underlying differences in virulence of fungal isolates on various arthropod pests, we compared the chitinase genes, chi2 and chi4, of 8 isolates of Metarhizium anisopliae. The clustering of the isolates showed various groups depending on their virulence. However, the analysis of their chitinase DNA sequences chi2 and chi4 did not reveal major divergences. Although their protein translates have been implicated in fungal virulence, the predicted protein structure of chi2 was identical for all isolates. Despite the critical role of chitin digestion in fungal infection, we conclude that chi2 and chi4 genes cannot serve as molecular markers to characterize observed variations in virulence among M. anisopliae isolates as previously suggested. Nevertheless, processes controlling the efficient upregulation of chitinase expression might be responsible for different virulence characteristics. Further studies using comparative “in vitro” chitin digestion techniques would be more appropriate to compare the quality and the quantity of chitinase production between fungal isolates.
Introduction
Entomopathogenic fungi (EPF)-based products are being developed for the control of insect pests in agricultural systems [1-3]. Entomopathogenic fungi infect their hosts through the cuticle and do not need to be ingested like bacteria, viruses, and protozoa [4]. During the process of infection, EPF secrete chitinases to digest the insect cuticle [5-8].
The entomopathogenic fungus M. anisopliae produces at least six types of chitinases [9,15,19]. However, the respective role of these proteins in the process of pathogenicity as well as their contribution to virulence on arthropod pests has not been clearly elucidated [20,21]. Nonetheless, chitinase chi2 gene isolated from M. anisopliae var. anisopliae strain E6 has been reported to be responsible for virulence in the genus M. anisopliae [22]. Overexpression of chi2 constructs showed higher efficiency in host killing, while the absence of the same chitinase reduced fungal infection efficiency [20]. Recent studies on differential expression of chitinase genes in vitro and in vivo established the role of substrate differences in the process of pathogenesis [23]. To understand the role of chitinase genes underlying differences in virulence between fungal isolates, we compared the virulence against various arthropod pests and characterized the chitinase genes of 8 isolates of M. anisopliae from the International Centre of Insect Physiology and Ecology (icipe)'s Arthropod Germplasm Centre.
Statistical Analysis.
Records on the performance of each of the M. anisopliae isolates were obtained from icipe archives. Virulence data (percentage mortality and lethal time to mortality (LT)) of each isolate were used in the cluster analysis. For each pest, a virulence factor for each isolate was determined by using the average mortality value of the total percentage mortality of all isolates. The same procedure was used for LT values. Data were then subjected to a k-means clustering model to determine the difference in their virulence. The centroid, which is the mean vector of each cluster, was used to define the cluster membership of each isolate. The within-groups inertia was used as a criterion to define cluster compactness.

The number of clusters was fixed at 4 (k = 4) according to the major taxonomic groups that were considered in this study. Missing values were estimated. A factor analysis based on Spearman correlation (Quartimax rotation) was used to determine the relation between the isolates. The number of iterations performed was 11 and the overall iterations were 200. All statistical analyses were performed using XLSTAT-Pro (Version 7.2, 2003, Addinsoft, Inc., Brooklyn, NY, USA); the significance level was set at α = 0.05.
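As an illustration, an equivalent k-means clustering can be sketched with scikit-learn rather than the XLSTAT package used in the study; the virulence matrix `V` and the iteration settings are assumptions:

```python
from sklearn.cluster import KMeans

def cluster_virulence(V, k=4):
    """Cluster pests by virulence factors; k = 4 mirrors the four major
    taxonomic groups considered in the study."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(V)
    # centroid = mean vector of each cluster; inertia = cluster compactness
    return km.labels_, km.cluster_centers_, km.inertia_
```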
Sequence Diversity and Phylogeny.
Chitinase nucleotide sequences were edited and aligned to remove ambiguous base calls before they were translated into proteins using Geneious [25]. A search to identify protein sequences similar to chi2 and chi4 was performed using the tBLASTx algorithm of NCBI GenBank. Geneious software was used to estimate phylogeny with the neighbour-joining, minimum evolution, or maximum parsimony method. A dendrogram was constructed using Molecular Evolutionary Genetics Analysis (MEGA) software version 4.0 with 10,000 bootstrap replicates [26]. All methods returned trees with similar topology and approximate bootstrap values; therefore only the neighbour-joining tree is presented. Percentage homology among chitinases similar to chi2 and chi4 was computed using MEGA software.
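For readers without access to Geneious or MEGA, an equivalent neighbour-joining construction can be sketched with Biopython (the alignment file name is hypothetical):

```python
from Bio import AlignIO
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

def nj_tree(alignment_path="chi2_aligned.fasta"):
    alignment = AlignIO.read(alignment_path, "fasta")
    dm = DistanceCalculator("identity").get_distance(alignment)
    return DistanceTreeConstructor().nj(dm)  # neighbour-joining topology
```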
The 3D structure was predicted using Swiss PdB Viewer, v 4.0.1 (http://www.expasy.org/spdbv/). The conserved residues of the Carbohydrate Insertion Domain (CID) [27] were identified through multiple sequence alignment with the characterized chitinase genes.
Clustering of Insects Based on the Virulence Data of the M. anisopliae Isolates.

The grouping of the arthropods into clusters based on virulence data showed that cluster 1 (inertia = 0.0) includes the fruit-fly species C. rosa and C. capitata; cluster 2 (inertia = 8.2) comprises ornamental pests such as F. occidentalis, M. sjostedti, L. huidobrensis, and T. urticae; cluster 3 (inertia = 9.3) includes five hosts belonging to various taxonomic groups: C. cosyra, P. duboscqi, T. evansi, M. michaelseni, and C. puncticollis. Cluster 4, with the highest inertia of 73.1%, corresponded to the highest diversity of arthropod pests (Table 3).
Relation between the M. anisopliae Isolates.
Factor analysis using the correlation matrix showed various levels of similarity between the isolates based on their performances on the 11 insect pests. ICIPE7 has similarities with ICIPE20, ICIPE69, and ICIPE78, whereas ICIPE20 is closely related to ICIPE41 and ICIPE62; ICIPE30 is related only to ICIPE7 and ICIPE78, although the correlations were not strong; ICIPE41 was strongly related to ICIPE62, ICIPE63, and ICIPE69; ICIPE62 and ICIPE63 have virulence patterns close to that of IMI330189. ICIPE20 and ICIPE41 are also related to IMI330189. There were also similarities in virulence patterns between ICIPE78, ICIPE20, and ICIPE41 (Table 4).
Analysis of Chitinase2 Gene Sequence.
Comparison of the chi2 nucleotide sequences from all selected M. anisopliae isolates, originating from three different parts of Africa, showed no differences in the open reading frames, composed of 229 amino acid residues. However, when compared with the similar chitinase sequences retrieved from the NCBI database, there were differences in amino acid composition (Figure 2).
Homology Modeling of Chitinase2.
The Swiss-PdbViewer (http://www.expasy.org/spdbv/) server was used to predict the 3D structure of chi2. The conserved residues of the Carbohydrate Insertion Domain (CID, Y × R and V × I) were present in all selected M. anisopliae isolates, which exhibited no differences in their coding regions. In M. anisopliae var. acridum the "Y × R" motif is replaced by "Y × K" (Figure 4).
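As a toy illustration of locating the CID motifs mentioned above, the following sketch scans a placeholder protein string for the "Y × R" and "V × I" patterns; the sequence is invented, and the actual analysis used Swiss-PdbViewer and multiple sequence alignment.

```python
# Toy scan for the CID motifs described above ("Y x R" and "V x I"),
# using an invented chi2-like protein fragment.
import re

protein = "MKLSVAYDRQPVTIGHWNA"   # placeholder sequence, not a real chi2 fragment
for name, motif in (("Y-x-R", r"Y.R"), ("V-x-I", r"V.I")):
    hits = [m.start() + 1 for m in re.finditer(motif, protein)]
    print(name, "positions:", hits or "not found")
```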
Analysis of Chitinase4 Gene Sequence.
All M. anisopliae var. anisopliae isolates had identical chi4 nucleotide sequences. After the editing process to remove the ambiguous base calls, a BLAST analysis using the chi4 sequence on the NCBI GenBank database revealed the highest amino acid identities to M. anisopliae var. anisopliae M34412, ARSEF7524, and M. anisopliae var. acridum IMI330189 (Figure 5).
Discussion
The clustering analyses based on virulence data on various taxonomic groups revealed differences between the icipe isolates. Cluster 1 comprises the fruit flies C. rosa and C. capitata, against which ICIPE20 is most virulent, although other isolates have been reported to be pathogenic [28]. ICIPE20 also fits in cluster 2, which comprises L. huidobrensis, F. occidentalis, T. urticae, and M. sjostedti, against which it has been reported to be pathogenic [29][30][31][32]. Cluster 2 also accommodates ICIPE69, which has been reported to be virulent against thrips [2,29,33] and is currently commercialised for the control of insect pests of horticulture in Africa [32]. ICIPE7, which has been reported to be the most virulent isolate against T. urticae [30], can also be considered in that cluster. Cluster 3, on the other hand, includes flies, termites, and mites and therefore involves a larger number of isolates. Previous records on their virulence indicate that ICIPE7, ICIPE20, ICIPE30, ICIPE78, and ICIPE62 could be included in that cluster because of their virulence on T. urticae, M. michaelseni, and C. puncticollis [30,34,35]. ICIPE69 has been reported to be the least virulent isolate against M. michaelseni [36] and thus cannot be considered in that cluster. This may explain the absence of thrips species in cluster 3. Cluster 4 comprises 11 arthropod pests, suggesting that each of the isolates is virulent to some extent to each of these pests and their related species. For instance, ICIPE30 has been used for the control of the tsetse fly Glossina spp. [30,37]. ICIPE7, which is virulent against the mites T. urticae and T. evansi [30,38], is also indicated for the control of the tick species Rhipicephalus appendiculatus and R. pulchellus [39,40], both belonging to the Acari group. ICIPE78, known to be the most virulent isolate for the control of T. evansi [37,41], is closely related to ICIPE7.
All the M. anisopliae isolates used in this study showed the same chi2 and the same chi4 protein structure despite originating from different localities in Africa. Only IMI330189 (M. anisopliae var. acridum), which originated from Niger, had a nonsynonymous substitution in the chi4 sequence. The analysis of the common predicted structure of the chitinases showed the folding patterns and conserved amino acids of the Carbohydrate Insertion Domain (CID) described in many fungal species [9,27,44], including the NCBI outgroup sequences.
The chitinase gene chi2 was reported to be mainly responsible for M. anisopliae virulence [20,23]. The present molecular results suggest either that chitinase genes are differentially regulated (i.e., have different expression levels) in different isolates or that other parameters affect the process of infection. Regarding the first hypothesis, the chi2 gene has been reported to be upregulated by chitin (which serves as a carbon source to the fungus) in conditions of fungus autolysis, and downregulated by glucose [25]. The chitin composition of the insect cuticle can affect the chitinase production level [23,45], which would justify the difference in virulence. Since insect pests have special cuticle compositions, the virulence of EPF may vary accordingly, even between life stages [23]. In that regard, Moritz [46] reported that adult thrips and larvae have different cuticle structures, which could explain, in part, the difference in susceptibility to EPF between arthropod pests [32,[47][48][49]. Posttranscriptional regulation of chitinase genes [50] may also account for the observed virulence differences in our isolates. This needs to be further investigated by comparing the chitinase gene expression of isolates with different virulence patterns. Additionally, other relevant factors that affect fungal virulence, such as conidiation and toxin production genes, need to be considered as well. Niassy et al. [32] observed that ICIPE69 produced more conidia than ICIPE20 and ICIPE7 and was virulent to larvae of F. occidentalis. Fang et al. [24] demonstrated that disruption of a conidiation-associated gene (cag8) in M. anisopliae resulted in a lack of conidiation on agar plates and on infected insects, reduced mycelial growth, and decreased virulence, suggesting the involvement of cag8 in the modulation of conidiation, virulence, and hydrophobin synthesis in M. anisopliae. All these gene-regulatory processes need to be considered when developing molecular techniques for genotyping EPF.
Blind source separation by multiresolution analysis using AMUSE algorithm
Algorithms for blind source separation have been extensively studied in recent years. This paper proposes the use of multiresolution analysis in three decomposition levels of the wavelet transform as a preprocessing step, and the AMUSE algorithm to separate the source signals at distinct levels of resolution. Results show that there is an improvement in the estimation of the signals and of the mixing matrix, even in noisy environments, compared to the use of AMUSE alone.
INTRODUCTION
Blind Source Separation (BSS) is encountered in various branches of applied mathematics: medical applications such as EEG, ECG, fetal ECG, MEG and fMRI; in telecommunications, such as multiuser detection; as a tool for financial analysis, helping to minimize the risk in investment strategy; in audio separation; and in feature extraction, which allows implementing pattern recognition systems (HYVÄRINEN et al., 2001; COMON; JUTTEN, 2010).
Many researchers have considered the inclusion of preprocessing steps using the Discrete Wavelet Transform (DWT) (LÓ et al., 2011; MISSAOUI et al., 2011; MIJOVIC et al., 2011; TALBI et al., 2012; SHAYESTEH; FALLAHIAN, 2010). Some of them use the DWT to remove noise. Others exploit the decomposition into several frequency bands where the particularities of the signals can be emphasized, as in Huang et al. (2003), which proposes a parallel architecture to separate signals in low and high frequencies. Another approach is the recognition of images, as in Leo et al. (2003), which uses the DWT and a BSS method for detecting the ball in a soccer game, aided by a neural network.
In this work, a BSS method is proposed that uses the Multiresolution Analysis (MRA) provided by the DWT as a preprocessing step. The advantage of the proposed method is that there is no need to return to the time domain, since the observed signals are stored and, after the identification of the mixing matrix at different resolution levels, they are used to obtain the separated signals. The proposed method consists of an update to AMUSE, namely the inclusion of the wavelet preprocessing step aiming at improving the estimated sources.
MATERIAL AND METHODS
The methods used in this research were the wavelet transform and the AMUSE algorithm. As material, male and female speech signals recorded in a common environment were used.
Blind Source Separation
BSS problems are characterized as MIMO (multiple-input-multiple-output) systems, where each output is the combination of multiple inputs (sources) with some noise.The inputs of this system and the subsystem that mixes the source signals are both unknown.
Let $\mathbf{x} = [x_1(t) \ldots x_m(t)]^T$ be the observation vector of the output of the system. The input vector is $\mathbf{s}_0 = [s_1(t) \ldots s_n(t)]^T$, composed of the source signals, and $\mathbf{n} = [n_1(t) \ldots n_m(t)]^T$ is the additive noise vector, where $[\cdot]^T$ denotes the transposed vector and $t$ is the time. $\mathbf{A}_0 \in \mathbb{R}^{m \times n}$ is the actual matrix that characterizes the mixing system. If we consider the case of instantaneous mixtures, the output of the system is seen as
$$\mathbf{x} = \mathbf{A}_0 \mathbf{s}_0 + \mathbf{n} \quad (1)$$
In order to solve the problem, $\mathbf{A}_0$ and $\mathbf{s}_0$ must be estimated only through $\mathbf{x}$. The main difficulty is the lack of information. Therefore some assumptions must be made, so that a model can be proposed.
Theoretically, $\mathbf{A}_0$ can be any non-singular matrix. However, in practical situations the waveform of the sources should be preserved, so that the estimated signals are intelligible, even when the estimated sources are not of the same order or magnitude as the originals (YEREDOR, 2010). The following theorem establishes a relationship with this property.
Theorem 1
A relationship $\Re$ that preserves the waveform of source signals is an equivalence relation on the BSS model, defined on ordered pairs $(\mathbf{A}', \mathbf{s}')$ and $(\mathbf{A}_0, \mathbf{s}_0)$ such that
$$(\mathbf{A}', \mathbf{s}')\,\Re\,(\mathbf{A}_0, \mathbf{s}_0) \iff \mathbf{A}' = \mathbf{A}_0 \mathbf{P} \mathbf{\Lambda}, \quad \mathbf{s}' = \mathbf{\Lambda}^{-1} \mathbf{P}^T \mathbf{s}_0,$$
where $\mathbf{\Lambda}$ is a scaling matrix and $\mathbf{P}$ a permutation matrix. The proof of this theorem can be found in Tong et al. (1991).
Therefore, it is not sufficient to estimate just any mixing matrix and sources. These estimates should be related through $\Re$, which preserves the waveform of the actual source signals and also the direction of the column vectors of $\mathbf{A}_0$, differing only by norm and/or permutation.
Algorithm to solve the BSS problem
In this work we use AMUSE (Algorithm for Multiple Unknown Signals Extraction) to solve the BSS problem. In an overview, AMUSE explores the temporal structure of the sources by projecting the observation vector onto an orthogonal space.
In this space, the n largest singular values of the covariance matrix of the observation vector are distinct (CICHOCKI; AMARI, 2003).
Then, it is possible to find a solution to the BSS problem by the eigenvalue decomposition of such a matrix.
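A minimal numpy sketch of AMUSE as just described (whitening followed by eigendecomposition of a symmetrized time-lagged covariance matrix) is given below; the lag choice tau = 1 is an assumption, since the paper does not state it.

```python
# Minimal AMUSE sketch, assuming observations x of shape (m, T):
# whiten the data, then eigendecompose a symmetrized time-lagged covariance.
import numpy as np

def amuse(x, tau=1):
    x = x - x.mean(axis=1, keepdims=True)
    # Whitening from the eigendecomposition of the covariance matrix
    R0 = np.cov(x)
    d, E = np.linalg.eigh(R0)
    Q = E @ np.diag(1.0 / np.sqrt(d)) @ E.T          # whitening matrix
    z = Q @ x
    # Time-lagged covariance of the whitened data, symmetrized
    T = z.shape[1]
    Rtau = z[:, :-tau] @ z[:, tau:].T / (T - tau)
    Rtau = (Rtau + Rtau.T) / 2
    # Distinct eigenvalues -> eigenvectors give the separating directions
    _, V = np.linalg.eigh(Rtau)
    W = V.T @ Q                                       # unmixing matrix
    return W @ x, W
```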
Multiresolution Analysis
The Multiresolution Analysis (MRA) allows the observation of a signal at different scales, and at each scale it is possible to obtain more or less information. At the first level the signal is passed through low-pass and high-pass filters; for other resolution levels, the output is again filtered. When a signal is decomposed by the DWT, an MRA process is performed (DAUBECHIES, 1992; STRANG; NGUYEN, 1997).
Proposed method
Our proposal is to insert a preprocessing step before using the AMUSE algorithm. This step acts as a filtering process on the signal decomposed by the DWT, i.e., in the wavelet domain.
The separation is performed at each resolution level and it is not necessary to return to the time domain, because the observation vector at the j-th resolution is only used to calculate the mixing matrix; then, in order to estimate the sources, the resulting unmixing matrix is used to multiply the observation vector in the time domain. In the BSS model a filtering process can be implemented without changing its structure (HYVÄRINEN et al., 2001), as shown in Eq. (5), considering Eq. (1):
$$\mathbf{x}^{\#} = \mathbf{F}\mathbf{x} = \mathbf{A}_0 \mathbf{s}_0^{\#} + \mathbf{n}^{\#} \quad (5)$$
where $\mathbf{F}$ is a matrix used in the filtering process and $\mathbf{s}_0^{\#}$ and $\mathbf{n}^{\#}$ are the filtered signals; in matrix form, the filtering operation can be seen in Eq. (6) (WEEKS, 2007). Obviously the filter must operate on the signals such that for a given solution $(\mathbf{A}', \mathbf{s}')$, $(\mathbf{A}', \mathbf{s}')\,\Re\,(\mathbf{A}_0, \mathbf{s}_0)$, i.e., the filtering process does not change the waveform of the source signals. This is what was expected to happen with the proposed method, since the DWT implements only a matrix multiplication with downsampling by a factor of 2. However, as shown in Figure (1), the waveform of the signal is modified from some level of decomposition onwards. Therefore the MRA should be used only up to some levels, as done in the following experiments.
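The following sketch outlines the proposed preprocessing, assuming the PyWavelets library and the amuse() helper from the previous sketch. The 'db2' wavelet (a Daubechies filter with four coefficients) matches the filter described later in the paper; the mixing matrix estimated in the wavelet domain is applied directly to the time-domain mixtures, so no inverse transform is needed.

```python
# Sketch of the proposed preprocessing: estimate the unmixing matrix from the
# level-j wavelet coefficients, then apply it to the time-domain observations.
import numpy as np
import pywt

def wavelet_amuse(x, level=2, wavelet="db2"):
    # Decompose each observed mixture; coeffs[0] holds the level-j approximation
    coeffs = [pywt.wavedec(xi, wavelet, level=level) for xi in x]
    xj = np.vstack([c[0] for c in coeffs])
    # Identify the mixing structure in the wavelet domain only
    _, W = amuse(xj)
    # No inverse transform needed: W is applied to the time-domain mixtures
    return W @ x
```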
Performance measures
Among the several useful measures for evaluating BSS algorithms, one of the most used is $E$ (the Amari metric), proposed by Amari (KAWAGUCHI et al., 2012). It compares the elements of $\mathbf{P} = \hat{\mathbf{A}}_0^{-1} \mathbf{A}_0$, with the $ij$-th element of $\mathbf{P}$ given by $p_{ij}$. The closer it is to zero, the better the estimation of the mixing matrix. Such a measure is defined in (7), in its non-normalized version:
$$E = \sum_{i=1}^{n}\left(\sum_{j=1}^{n}\frac{|p_{ij}|}{\max_k |p_{ik}|} - 1\right) + \sum_{j=1}^{n}\left(\sum_{i=1}^{n}\frac{|p_{ij}|}{\max_k |p_{kj}|} - 1\right) \quad (7)$$
The Source-to-Interference Ratio (SIR), proposed by Radu et al. (2003) and defined in (8), is used to measure the interference among the sources in the BSS process. SIR values are given in dB (decibels), and the higher they are, the lower the interference among the sources.
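A sketch of the two measures follows, assuming the standard non-normalized Amari index over P and a simple projection-based SIR; the exact SIR decomposition of Radu et al. (2003) may differ in detail.

```python
# Sketch of the two performance measures used in the experiments.
import numpy as np

def amari_index(W_hat, A0):
    # P = (estimated mixing)^-1 @ (true mixing); here W_hat plays that role
    P = np.abs(W_hat @ A0)
    rows = (P / P.max(axis=1, keepdims=True)).sum(axis=1) - 1
    cols = (P / P.max(axis=0, keepdims=True)).sum(axis=0) - 1
    return rows.sum() + cols.sum()       # 0 means perfect estimation

def sir_db(s_true, s_est):
    # Project the estimate onto the true source; the remainder is interference
    s = s_true / np.linalg.norm(s_true)
    target = (s_est @ s) * s
    interf = s_est - target
    return 10 * np.log10((target @ target) / (interf @ interf))
```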
RESULTS AND DISCUSSION
In this section two experiments are performed in order to evaluate the efficiency of the proposed method, based on the metrics presented in the previous section. Results are obtained for AMUSE and for the proposed method. In the implementation of the proposed method, three wavelet decomposition levels are tested (j = 1, 2, 3).
Experiment One
In this experiment two Portuguese speech signals are considered, one with a male voice and the other with a female voice, both sampled at a 16 kHz rate with 7 s of duration.
First, the sources are mixed without noise, and results are presented in Table (1) and Figure (2). Then, before the mixing process, both signals are corrupted by Gaussian white noise; these results are in Table (2) and Figure (3). Observing Tables (1) and (2), the efficiency of the proposed method over AMUSE is perceptible. When noise-free signals are considered, the proposed method overcomes AMUSE in the first and second wavelet decomposition levels, with a better E value for j = 1 and a better SIR for j = 2.
When noisy signals are considered, the proposed method also overcomes AMUSE in the first and second wavelet decomposition levels, but now better E and SIR values are reached when j = 2. It is worth noting that, in both cases, the use of a third wavelet decomposition level is not so advantageous, since results worsen from the second level on.
Experiment Two
This experiment considers as source signals, for mixture, the Portuguese pronunciations of the letters "i" and "a" by a male speaker. Signals are sampled at a 16 kHz rate with 1 s of duration. Again, mixtures of noise-free signals and noisy signals are tested, with the noisy signals corrupted by white Gaussian noise. Table (3) and Figure (4) present results when BSS is performed with noise-free signals, and results for noisy signals are presented in Table (4) and Figure (5). Observing Tables (3) and (4), the efficiency of the proposed method over AMUSE is again perceptible and, in the two considered cases, better results are reached for j = 2. When j = 3, results worsen below acceptable values, because there is little information and the waveform of the source signals changes.
In general, analyzing the results presented in Tables (1) to (4), a wavelet preprocessing step improves AMUSE, since the results are superior to those of simple AMUSE. Considering average values for E and SIR, it is worth saying that the ideal preprocessing wavelet decomposition level is j = 2. After j = 2, there is a clear deterioration in the sources.
Another fact that should be noted is that in a noisy environment the proposed method gives better results than AMUSE alone. Figure (6) shows, for different signal-to-noise ratios (SNR), how this preprocessing step contributes to a better estimation of the source signals, i.e., a higher SIR.
CONCLUSION
In this paper an improvement of the AMUSE BSS algorithm was proposed, which consists of the wavelet decomposition of the mixture before the actual BSS process.
Results showed significant improvements in the estimation of the sources up to the second wavelet resolution level in all experiments, reaching approximately 18 dB of improvement on average.
Experiments also showed that it is not worth using more than two wavelet resolution levels, since the use of a third resolution level deteriorates the sources. Such worsening in the estimation is due to the fact that there is not enough information for source separation, since the third wavelet level has only 1/8 of the samples of the observation vector.
In noisy environments results were slightly worse, since the separation does not ignore noise, estimating it along with the sources. In fact, each estimated source is a linear combination of the noise sources present in the sensors. Therefore, in order to attenuate noise, the structure used to acquire the sources should have more sensors than sources.
Further work should consider the use of other wavelet filters, trying to find the best (or optimal) wavelet filter for the BSS process. The case of more sensors than sources should also be studied.
In the DWT decomposition, four Daubechies wavelet coefficients are used.
Figure 2. Waveform of the signals of Table 1 at j = 2.
Figure 3. Waveform of the signals of Table 2 at j = 2.
Figure 5. SIR for different SNR for AMUSE and the proposed method.
Modeling and simulation of high energy density lithium-ion battery for multiple fault detection
Lithium-ion batteries, high energy density storage devices, have extensive applications in electrical and electronic gadgets, computers, hybrid electric vehicles, and electric vehicles. This paper presents multiple fault detection for lithium-ion batteries using two non-linear Kalman filters. A discrete non-linear mathematical model of a lithium-ion battery has been developed, and an Unscented Kalman filter (UKF) is employed to estimate the model parameters. The occurrence of multiple faults, such as over-charge, over-discharge and short circuit faults between the inter-cell power batteries, affects the parameter variation of the system model. A parallel combination of several UKFs (a bank of filters) compares the model parameter variation between the normal and faulty situations and generates residual signals indicating the different faults. Multiple statistical tests have been performed in simulation for residual-based fault diagnosis and threshold calculation. The performance of the UKF is then compared with an Extended Kalman filter (EKF) on the same battery model and fault scenario. The simulation results prove that the UKF responds better and more quickly than the EKF for fault diagnosis.
A series of reduced-order observers 17 can be applied on a battery pack for fault detection. Some researchers proposed model-based short circuit fault analysis using advanced techniques like indentation 18 , nail penetration 19 , fabrication with defect structures 20 and thermal runaway at extremely high temperatures 21 . In another model, the model output voltages and the actual output voltages of the batteries can be compared during the EV operation process, and the alarm system is triggered when the absolute value of the voltage difference exceeds the threshold 22,23 . The Kalman filter also finds effective application in the diagnosis of faults in lithium-ion batteries 24,25 , in particular when the optimal filter exhibits strong robustness to noisy signals. Model-based fault detection methods with very high robustness can be used to detect battery faults accurately. Adaptive Kalman filter based fault diagnosis for lithium-ion batteries is under consideration by many researchers [26][27][28] . An adaptive Kalman filter can estimate the states of the battery parameters by adjusting the process and measurement noise covariances, which is not possible in the case of the Extended Kalman filter, where information on the noise statistics is a prerequisite for proper functioning of the filter; otherwise it may lead to inaccurate results. Recently, overcharge and over-discharge battery faults have been discussed 29 , and a review paper on fault mechanisms, fault features, and diagnosis procedures is available 30 .
Considering the wide application of lithium-ion batteries in various devices, it is desirable to manufacture batteries with higher energy density, power density and service life. Failure due to over-charge, over-discharge, or short circuits between the inter cells of a lithium-ion battery can lead to performance degradation and system faults, which in turn may cause inconvenience, faster aging, higher maintenance cost, thermal runaway or even explosion. Therefore, it is imperative to design a reliable and robust battery management system for early detection of battery faults during service. The overall performance is greatly dependent on critical functions such as State-of-Charge (SOC) and State-of-Health (SOH) estimation, over-charge and under-charge protection, etc. From a practical point of view, estimation of the three faults, namely over-charge, over-discharge and short circuit faults between the inter-cell power of lithium-ion batteries, will certainly improve the reliability and efficiency of devices, gadgets, and electric and hybrid electric vehicles.
It has been found that some published research papers concentrate only on internal short circuit faults 18-20 of the battery pack, while other works describe faults such as over-charge, over-discharge etc. No previous work has considered all of these lithium-ion battery faults simultaneously using a model-based method. Most researchers have concentrated on model-based methods using a single technique, namely residual evaluation, for estimation of battery faults 22,[24][25][26] . The novelty of the present work is model-based detection of over-charge, over-discharge and short circuit faults between the inter-cell power of a lithium-ion battery pack, occurring simultaneously. In the present study, a systematic model-based fault detection scheme is proposed using a bank of Unscented Kalman filters (UKF) on a lithium-ion battery pack model for multiple fault detection, covering over-charge, over-discharge and short circuit faults between the inter-cell power of lithium-ion batteries. A statistical test has been performed for residual-based fault diagnosis and threshold calculation. The performance of the UKF is then compared with a bank of Extended Kalman filters (EKF) on the same battery model with the same fault scenario. Depending on battery usage, different battery models, such as experimental, empirical and electrochemical, are used. The battery model is considered as an extension of the Thevenin model, where the over-charge, over-discharge and short circuit faults between the inter-cell power of lithium-ion batteries are taken as fault parameters. The proposed work is divided into two parts: (a) experimental and (b) simulation. In the experimental part, battery cells are monitored offline over long time intervals in cases of over-charging and over-discharging, and the parameter variations due to over-charging and over-discharging are measured. An A123 26650 LiFePO4 battery cell (3.3 V, 2.5 Ah) was used in the experiment. The Electrochemical Impedance Spectroscopy (EIS) technique is used to extract the circuit parameter variation during over-charging and over-discharging of the battery, as reflected in Tables 2 and 3. The parameter variations are incorporated in the battery model during simulation, which is run with two banks of filters, UKF and EKF. The lithium-ion battery states are estimated and a residual signal is generated by comparing the estimated and measured outputs for each individual power cell using the UKF bank. It is shown that the UKF-based fault diagnosis gives significant results when compared with the EKF-based approach.
Proposed fault diagnosis scheme on battery pack using UKF/EKF bank
A model-based fault detection scheme for a battery pack using a bank of UKFs or EKFs is represented in Fig. 1. To diagnose faults due to overcharge, over-discharge or short circuit in a battery pack, a bank of UKFs or EKFs works in parallel with the system. A series of voltage and current sensors is connected to the battery pack to measure the voltage and current in each cell of the battery pack. The various parameters and states of the battery model can be determined from the sensor-provided data. The state-space model of the equivalent battery pack is designed, and the UKF or EKF banks are processed to obtain the estimated states of the system. The estimated data from the filters and the sensor-provided data are compared and a residual signal is generated. The mean of the residual signal indicates the existence of a fault in the system.
Residual signal generation.
The discrete state-space model of any non-linear time-invariant system (with fault) can be expressed as
$$x(k+1) = f(x(k), u(k)) + w(k) + F_T(k) \quad (1)$$
$$y(k) = g(x(k), u(k)) + v(k) \quad (2)$$
where x(k), u(k) and y(k) denote the state vector, input signal and system output vector, respectively, at time step k. The nonlinear functions f(·) and g(·) are continuously differentiable with respect to time, and $F_T(k)$ represents the occurrence of a fault at time step k.
The discrete state-space model of the nonlinear Kalman filter is given by
$$\hat{x}(k+1) = f(\hat{x}(k), u(k)) \quad (3)$$
$$\hat{y}(k) = g(\hat{x}(k), u(k)) \quad (4)$$
where $\hat{x}(k)$ and $\hat{y}(k)$ denote the estimated state vector and estimated output vector of the filter at time step k, respectively, and w(k) and v(k) are independent zero-mean Gaussian process and measurement noise. The process noise covariance $Q_k$ and measurement noise covariance $R_k$ are expressed as
$$Q_k = E[w(k)w(k)^T] \quad (5)$$
$$R_k = E[v(k)v(k)^T] \quad (6)$$
From Eq. (2) and Eq. (4), the residual signal is expressed as
$$r(k) = y(k) - \hat{y}(k) = F_T(k) + F(w(k), v(k)) \quad (7)$$
where F(·) is a function of the process noise w(k) and measurement noise v(k) sequences.
If any fault $F_T(k)$ exists in the system, the filter output exhibits non-zero-mean (NZM) residual sequences, which are the summation of Gaussian noise and the existing fault, as given in Eq. (7). With simultaneous occurrences of multiple faults in the system, each state of the filter output is indicated by NZM residual sequences.
A multiple fault diagnosis scheme is explained in the flowchart shown in Fig. 2. When a system is affected by n different faults $F_{T1}, F_{T2}, \ldots, F_{Tn}$, a bank of filters is utilized by incorporating each fault separately. The discrete state equation of the i-th filter is represented as $\hat{x}_i(k+1) = f(\hat{x}_i(k), u(k), F_{Ti}(k))$, and its output equation is $\hat{y}_i(k) = g(\hat{x}_i(k), u(k))$. The residual of each filter is the difference between the system output and the filter output, $r_i(k) = y(k) - \hat{y}_i(k)$. The summary of the UKF algorithm is given in Table 1. Residual-based multiple fault diagnosis using UKF/EKF is shown in the flowchart given in Fig. 2. When i cells are monitored by voltage or current sensors and a fault occurs, the estimated state of the filter output will not match the sensor output data, and as a result a non-zero-mean (NZM) residual signal is obtained. When no fault occurs in the system, the output is a zero-mean (ZM) residual of process and measurement noise.
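A minimal sketch of this residual test is shown below; the filter internals are abstracted into precomputed output estimates, and the threshold value is assumed to come from the statistical test described above.

```python
# Minimal sketch of the bank-of-filters residual logic: each filter i tracks
# the model under fault hypothesis F_Ti; a residual whose mean exceeds the
# threshold flags a fault (non-zero-mean residual), otherwise it is zero-mean.
import numpy as np

def diagnose(y_measured, y_estimated_bank, threshold):
    """y_measured: (T,) sensor output; y_estimated_bank: (n_filters, T)."""
    flags = []
    for i, y_hat in enumerate(y_estimated_bank):
        r = y_measured - y_hat                # residual sequence of filter i
        if abs(r.mean()) > threshold:         # non-zero mean -> fault detected
            flags.append(i)
    return flags                              # indices of detected faults
```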
Battery modeling
The model-based fault diagnosis method is implemented using the electrochemical properties of a battery. An extension of the Thevenin model, already applied to various fault diagnosis and state estimation problems, is presented. The extended model is used because of the complexity of computing the partial differential equations in electrochemical models. A second-order battery model with an additional RC parallel circuit element, as shown in Fig. 3, is considered to represent the electrochemical phenomena of the cells. The parameters are the interfacial impedance, the reactivity distribution of the electrode and the resistance of electron and ion migration. In the equivalent circuit, the current I is the charging/discharging current of the system; the performance of a battery pack is greatly affected by parameters like current, internal resistance and terminal voltage. These parameters are responsible for regulating inconsistency in quality, the mode of connection, the variable capacity of cells at different discharge current rates, etc. The resistance-capacitance electrical circuit can be used to model a third-order system for the battery cells. Each element of the circuit is a function of SOC and temperature. In the present study the temperature is kept constant, the voltage is varied as a function of SOC, and aging dynamics have been kept out of the model. A significant aspect to be considered is that the signature faults which may occur in the battery while in operation can be modeled to study the behavior of the system under abnormal situations. Effective fault estimation also improves the battery life to a large extent. Failure of a battery due to overcharging leads to generation of excessive heat, and the resulting increase in temperature may cause violent thermal runaways. Moreover, the detrimental copper plating which occurs at the negative electrode of the battery significantly influences the failure mode of over-discharging, leading to further thermal runaways. Different types of parameter variation are noticed during failure of the battery cells due to overcharging and over-discharging. It is observed that the increase in bulk resistance (R_b) is larger during overcharging than during over-discharging. Also, the charge transfer resistances (R_1, R_2) vary proportionally with both overcharging and over-discharging. The variation of the double layer capacitance (C_1) and the charge transfer capacitance (C_2) shows a steep increase with over-discharging, but the same variation is very small, with a gradually dipping nature, in the case of overcharging. The dynamic equations of the equivalent model of the battery can be represented by
$$\dot{V}_1 = -\frac{V_1}{R_1 C_1} + \frac{I}{C_1}, \qquad \dot{V}_2 = -\frac{V_2}{R_2 C_2} + \frac{I}{C_2}, \qquad V_T = V_{oc}(SOC) - V_1 - V_2 - I R_b,$$
where $V_T$, $V_1$ and $V_2$ denote the terminal voltage and the capacitor voltages across $C_1$ and $C_2$, respectively. The open circuit voltage $V_{oc}$ is a nonlinear function of SOC, described by
$$V_{oc}(SOC) = \sum_{k=0}^{m} C_k \, SOC^k,$$
where the coefficients $C_k$, for k = 0, 1, 2, ..., m, are obtained from the OCV-SOC characteristic shown in Fig. 4. The SOC, calculated by the coulomb counting method, is given as
$$SOC(k+1) = SOC(k) - \frac{\eta \, I(k) \, \Delta t}{C_a},$$
where $C_a$ is the battery available capacity and η is the coulomb efficiency, a function of the current and temperature: η = 1 for charging and 0.95 for discharging.
The model parameters are kept constant, neglecting changes due to aging effects. To simulate with the discrete Kalman filter, the model is discretized using a Taylor series expansion, neglecting higher-order terms:
$$V_1(k+1) = \left(1 - \frac{\Delta t}{R_1 C_1}\right) V_1(k) + \frac{\Delta t}{C_1} I(k), \qquad V_2(k+1) = \left(1 - \frac{\Delta t}{R_2 C_2}\right) V_2(k) + \frac{\Delta t}{C_2} I(k).$$
These can be expressed in state-variable form with the state vector $x(k) = [SOC(k),\ V_1(k),\ V_2(k)]^T$.
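A sketch of one step of this discretized model follows; all parameter values and the OCV polynomial coefficients are placeholders, not the fitted values from Tables 2 and 3, and a positive current is taken as discharging.

```python
# Sketch of one simulation step of the discretized second-order RC model;
# the parameter values below are placeholders, not the paper's fitted values.
import numpy as np

Rb, R1, R2, C1, C2 = 0.05, 0.02, 0.02, 1000.0, 2000.0   # placeholder parameters
Ca, eta, dt = 2.5 * 3600, 0.95, 0.01                     # capacity (As), efficiency, step
c = np.array([3.0, 0.6, -0.2])                           # placeholder OCV polynomial

def v_oc(soc):
    # V_oc as a polynomial in SOC: V_oc = sum_k c_k * SOC**k
    return sum(ck * soc**k for k, ck in enumerate(c))

def step(soc, v1, v2, i_k):
    soc_next = soc - eta * i_k * dt / Ca                  # coulomb counting
    v1_next = (1 - dt / (R1 * C1)) * v1 + (dt / C1) * i_k
    v2_next = (1 - dt / (R2 * C2)) * v2 + (dt / C2) * i_k
    v_t = v_oc(soc) - v1 - v2 - i_k * Rb                  # terminal voltage
    return soc_next, v1_next, v2_next, v_t
```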
Simulation result and discussion
An A123 26650 LiFePO4 battery cell (3.3 V, 2.5 Ah) was used in the experiment. Tables 2 and 3 illustrate the impedance spectroscopy results for the variation of the selected circuit parameters when the battery cell was under over-charge and over-discharge fault conditions. During the over-charge condition the battery cell is kept at 120% charge and 100% nominal discharge, while during the over-discharge condition it is kept the reverse way. In each fault condition, spectroscopy measurements of the parameter variation for some specific cycles are taken and shown in Table 2 (over-charge) and Table 3 (over-discharge). Various faults in lithium-ion battery cells can be observed through different parameter variations in the battery during operation. The paper primarily focuses on the over-charging (OC) fault, the over-discharging (OD) fault and the short circuit fault between the inter-cell power of lithium-ion batteries. The OC condition is achieved by charging the battery to 120% with 100% nominal discharge at a favorable current rate. The variations of system parameters such as $R_b$, $R_1$, $R_2$, $C_1$ and $C_2$, which contribute significantly to faults during OC and OD of the battery cell, as seen in the impedance spectroscopy, are shown in Tables 2 and 3. A sinusoidal current is used as the input signal, acting as the charging or discharging current of the model. The terminal voltage, state of charge, and voltages across $C_1$ and $C_2$ at each sampling time are evaluated from Eqs. (21) and (22). The battery model is run with the banks of UKF and EKF to calculate the estimated state of charge and voltages across $C_1$ and $C_2$ at each sampling time in both healthy and faulty states, while the input signal is corrupted with Gaussian white noise; the process noise covariance and measurement noise covariance are taken as $Q = 10^{-6} I_{3\times 3}$ and $R = 1 \times 10^{-6}$, respectively.
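For concreteness, a sketch of one UKF from the bank is given below using the filterpy library (an assumption; the paper does not name an implementation), wiring the step() and v_oc() helpers above into the filter with the stated Q and R; the measurement value is a placeholder.

```python
# Illustrative UKF setup with filterpy (an assumed library choice), reusing
# step(), v_oc(), Rb and dt from the previous sketch.
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

def fx(x, dt_, i_k=0.0):
    # State transition: filterpy passes dt_, but step() uses the global dt
    soc, v1, v2, _ = step(x[0], x[1], x[2], i_k)
    return np.array([soc, v1, v2])

def hx(x, i_k=0.0):
    # Measurement: terminal voltage V_T = V_oc(SOC) - V1 - V2 - I*Rb
    return np.array([v_oc(x[0]) - x[1] - x[2] - i_k * Rb])

points = MerweScaledSigmaPoints(n=3, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=3, dim_z=1, dt=dt, fx=fx, hx=hx, points=points)
ukf.x = np.array([1.0, 0.1, 0.1])       # initial SOC and capacitor voltages
ukf.Q = 1e-6 * np.eye(3)                # process noise covariance from the paper
ukf.R = np.array([[1e-6]])              # measurement noise covariance

# One predict/update cycle; the innovation ukf.y plays the role of the residual
i_k = 5 * np.sin(100 * np.pi * 0.01)
ukf.predict(i_k=i_k)
ukf.update(np.array([3.2]), i_k=i_k)    # 3.2 V is a placeholder measurement
```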
The simulation results deal with the performance comparison between UKF and EKF for fault diagnosis of the lithium-ion battery of an electric vehicle. Over-charging, over-discharging and short circuit faults are introduced in the battery model, and in each case the banks of UKF and EKF are operated for fault diagnosis. The three states of the battery model, namely the state of charge and the voltages across $C_1$ and $C_2$, are estimated and compared to obtain the residual signal at each time step.
The charging current input is taken as I = 5 sin(100πt), with the initial values of the voltages across the charge transfer and double layer capacitances taken as 0.1 V each. The model is simulated in the healthy condition; at the 50th sampling instant a first fault is injected as an overcharge, and at the 120th sampling instant a second fault occurs. As the system is modeled with three state variables (SOC, $V_1$ and $V_2$), the occurrence of any fault will affect the states of the battery model. By comparing the true state and the estimated state, the healthy and faulty conditions are easily distinguished by residual signal generation.
Single fault diagnosis.
The proposed battery model is first run in the healthy condition, and at the 50th sampling instant an overcharge fault is injected. Figure 5 shows the true and estimated SOC from the EKF and UKF, and Figure 6 shows the SOC residual of both filters. In both cases, the change of the residual signal from zero to another value is more distinct with the UKF than with the EKF. When the over-discharging fault occurs at the 120th sampling instant, the true and estimated voltages across the charge transfer capacitance from both filters are represented in Fig. 9. The residual signals of both filters are shown in Fig. 10.
The shift of the residual for the second fault is clearer for the UKF than for the EKF. Under this condition the SOC residual is unaffected, remaining at zero.
Conclusion
In the present study, a discrete non-linear mathematical model of a lithium-ion battery has been developed for multiple fault detection using two non-linear Kalman filters. A performance comparison using banks of UKF and EKF for single and simultaneous occurrences of multiple faults, namely over-charge, over-discharge and short circuit faults between the inter-cell power in a lithium-ion battery, has been carried out. In the proposed fault diagnosis scheme, both banks of filters (UKF and EKF) are employed separately on the lithium-ion battery model during normal and faulty situations, so that the filter outputs and the measured outputs can be compared to generate residual signals. It has been shown from the simulation results of the statistical tests that the residual signal under no fault is a zero-mean signal within the threshold value, whereas it exceeds the threshold with a non-zero mean during the faulty condition. The comparison of both filters (UKF and EKF) in the simulation study proves that the UKF exhibits a better and quicker response than the EKF for multiple fault diagnosis of the lithium-ion battery model.